An unorganized machine is a concept mentioned in a 1948 report by Alan Turing titled "Intelligent Machinery", in which he suggested that the infant human cortex was what he called an "unorganised machine".[1][2] It remained unpublished until 1969.[3] Turing defined the class of unorganized machines as largely random in their initial construction, but capable of being trained to perform particular tasks. Turing's unorganized machines were in fact very early examples of randomly connected, binary neural networks, and Turing claimed that these were the simplest possible model of the nervous system.

Turing had been interested in the possibility of simulating neural systems for at least the previous two years. In correspondence with William Ross Ashby in 1946 he writes: "I am more interested in the possibility of producing models of the action of the brain than in the applications to practical computing...although the brain may in fact operate by changing its neuron circuits by the growth of axons and dendrites, we could nevertheless make a model, within the ACE, in which this possibility was allowed for, but in which the actual construction of the ACE did not alter, but only the remembered data."

In his 1948 paper Turing defined two examples of his unorganized machines. The first were A-type machines — these being essentially randomly connected networks of NAND logic gates. The second were called B-type machines, which could be created by taking an A-type machine and replacing every inter-node connection with a structure called a connection modifier — which itself is made from A-type nodes. The purpose of the connection modifiers was to allow the B-type machine to undergo "appropriate interference, mimicking education" in order to organize the behaviour of the network to perform useful work. Before the term genetic algorithm was coined, Turing even proposed the use of what he called a genetical search to configure his unorganized machines.[4] Turing claimed that the behaviour of B-type machines could be very complex when the number of nodes in the network was large, and stated that the "picture of the cortex as an unorganized machine is very satisfactory from the point of view of evolution and genetics".
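To make the A-type idea concrete, the sketch below simulates a small, randomly wired network of two-input NAND nodes updated in discrete, synchronous steps. The node count, the particular wiring, and the synchronous update rule are illustrative assumptions rather than a reconstruction of Turing's exact formalism.

```python
import random

def make_a_type_machine(n_nodes, seed=0):
    """Randomly wire an A-type-style network: every node reads two
    randomly chosen nodes and computes the NAND of their values."""
    rng = random.Random(seed)
    wiring = [(rng.randrange(n_nodes), rng.randrange(n_nodes)) for _ in range(n_nodes)]
    state = [rng.randint(0, 1) for _ in range(n_nodes)]
    return wiring, state

def step(wiring, state):
    """Synchronous update: each node becomes NAND of its two inputs."""
    return [1 - (state[a] & state[b]) for (a, b) in wiring]

wiring, state = make_a_type_machine(8)
for t in range(5):
    print(t, state)
    state = step(wiring, state)
```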
https://en.wikipedia.org/wiki/Unorganized_machine
Thevon Neumann architecture—also known as thevon Neumann modelorPrinceton architecture—is acomputer architecturebased on theFirst Draft of a Report on the EDVAC,[1]written byJohn von Neumannin 1945, describing designs discussed withJohn MauchlyandJ. Presper Eckertat the University of Pennsylvania'sMoore School of Electrical Engineering. The document describes a design architecture for an electronicdigital computermade of "organs" that were later understood to have these components: The attribution of the invention of the architecture to von Neumann is controversial, not least because Eckert and Mauchly had done a lot of the required design work and claim to have had the idea for stored programs long before discussing the ideas with von Neumann andHerman Goldstine.[3] The term "von Neumann architecture" has evolved to refer to anystored-program computerin which aninstruction fetchand a data operation cannot occur at the same time (since they share a commonbus). This is referred to as thevon Neumann bottleneck, which often limits the performance of the corresponding system.[4] The von Neumann architecture is simpler than theHarvard architecture(which has one dedicated set of address and data buses for reading and writing to memory and another set of address and data buses to fetchinstructions). Astored-program computeruses the same underlying mechanism to encode bothprogram instructionsand data as opposed to designs which use a mechanism such as discreteplugboardwiring or fixed control circuitry for instructionimplementation. Stored-program computers were an advancement over the manually reconfigured or fixed function computers of the 1940s, such as theColossusand theENIAC. These were programmed by settingswitchesand insertingpatch cablesto route data and control signals between various functional units. The vast majority of modern computers use the same hardware mechanism to encode and store both data and program instructions, but havecachesbetween the CPU and memory, and, for the caches closest to the CPU, have separate caches for instructions and data, so that most instruction and data fetches use separate buses (split-cache architecture). The earliest computing machines had fixed programs. Some very simple computers still use this design, either for simplicity or training purposes. For example, a deskcalculator(in principle) is a fixed program computer. It can do basicmathematics, but it cannot run aword processoror games. Changing the program of a fixed-program machine requires rewiring, restructuring, or redesigning the machine. The earliest computers were not so much "programmed" as "designed" for a particular task. "Reprogramming"—when possible at all—was a laborious process that started withflowchartsand paper notes, followed by detailed engineering designs, and then the often-arduous process of physically rewiring and rebuilding the machine. It could take three weeks to set up and debug a program onENIAC.[5] With the proposal of the stored-program computer, this changed. A stored-program computer includes, by design, aninstruction set, and can store in memory a set of instructions (aprogram) that details thecomputation. A stored-program design also allows forself-modifying code. One early motivation for such a facility was the need for a program to increment or otherwise modify the address portion of instructions, which operators had to do manually in early designs. This became less important whenindex registersandindirect addressingbecame usual features of machine architecture. 
Another use was to embed frequently used data in the instruction stream using immediate addressing. Von Neumann described these automatic computing systems using terminology different from that typically used with the model today. In the First Draft of a Report on the EDVAC,[1] the architecture was composed of "a high-speed memory M, a central arithmetic unit CA, an outside recording medium R, an input organ I, an output organ O, and a central control CC".[6]

On a large scale, the ability to treat instructions as data is what makes assemblers, compilers, linkers, loaders, and other automated programming tools possible. It makes "programs that write programs" possible.[7] This has made a sophisticated self-hosting computing ecosystem flourish around von Neumann architecture machines. Some high-level languages leverage the von Neumann architecture by providing an abstract, machine-independent way to manipulate executable code at runtime (e.g., LISP), or by using runtime information to tune just-in-time compilation (e.g. languages hosted on the Java virtual machine, or languages embedded in web browsers). On a smaller scale, some repetitive operations such as BITBLT or pixel and vertex shaders can be accelerated on general-purpose processors with just-in-time compilation techniques. This is one use of self-modifying code that has remained popular.

The mathematician Alan Turing, who had been alerted to a problem of mathematical logic by the lectures of Max Newman at the University of Cambridge, wrote a paper in 1936 entitled On Computable Numbers, with an Application to the Entscheidungsproblem, which was published in the Proceedings of the London Mathematical Society.[8] In it he described a hypothetical machine he called a universal computing machine, now known as the "Universal Turing machine". The hypothetical machine had an infinite store (memory in today's terminology) that contained both instructions and data. John von Neumann became acquainted with Turing while he was a visiting professor at Cambridge in 1935, and also during Turing's PhD year at the Institute for Advanced Study in Princeton, New Jersey during 1936–1937. Whether he knew of Turing's paper of 1936 at that time is not clear. In 1936, Konrad Zuse also anticipated, in two patent applications, that machine instructions could be stored in the same storage used for data.[9] Independently, J. Presper Eckert and John Mauchly, who were developing the ENIAC at the Moore School of Electrical Engineering of the University of Pennsylvania, wrote about the stored-program concept in December 1943.[10][11] In planning a new machine, EDVAC, Eckert wrote in January 1944 that they would store data and programs in a new addressable memory device, a mercury metal delay-line memory. This was the first time the construction of a practical stored-program machine was proposed. At that time, he and Mauchly were not aware of Turing's work. Von Neumann was involved in the Manhattan Project at the Los Alamos National Laboratory. It required huge amounts of calculation, and thus drew him to the ENIAC project during the summer of 1944. There he joined the ongoing discussions on the design of this stored-program computer, the EDVAC. As part of that group, he wrote up a description titled First Draft of a Report on the EDVAC[1] based on the work of Eckert and Mauchly.
It was unfinished when his colleague Herman Goldstine circulated it, and bore only von Neumann's name (to the consternation of Eckert and Mauchly).[12]The paper was read by dozens of von Neumann's colleagues in America and Europe, and influenced[vague]the next round of computer designs. Jack Copelandconsiders that it is "historically inappropriate to refer to electronic stored-program digital computers as 'von Neumann machines'".[13]His Los Alamos colleagueStan Frankelsaid of von Neumann's regard for Turing's ideas[14] I know that in or about 1943 or '44 von Neumann was well aware of the fundamental importance of Turing's paper of 1936.... Von Neumann introduced me to that paper and at his urging I studied it with care. Many people have acclaimed von Neumann as the "father of the computer" (in a modern sense of the term) but I am sure that he would never have made that mistake himself. He might well be called the midwife, perhaps, but he firmly emphasized to me, and to others I am sure, that the fundamental conception is owing to Turing—in so far as not anticipated by Babbage.... Both Turing and von Neumann, of course, also made substantial contributions to the "reduction to practice" of these concepts but I would not regard these as comparable in importance with the introduction and explication of the concept of a computer able to store in its memory its program of activities and of modifying that program in the course of these activities. At the time that the "First Draft" report was circulated, Turing was producing a report entitledProposed Electronic Calculator. It described in engineering and programming detail, his idea of a machine he called theAutomatic Computing Engine (ACE).[15]He presented this to the executive committee of the BritishNational Physical Laboratoryon February 19, 1946. Although Turing knew from his wartime experience at Bletchley Park that what he proposed was feasible, the secrecy surroundingColossus, that was subsequently maintained for several decades, prevented him from saying so. Various successful implementations of the ACE design were produced. Both von Neumann's and Turing's papers described stored-program computers, but von Neumann's earlier paper achieved greater circulation and the computer architecture it outlined became known as the "von Neumann architecture". In the 1953 publicationFaster than Thought: A Symposium on Digital Computing Machines(edited by B. V. Bowden), a section in the chapter onComputers in Americareads as follows:[16] The Machine of the Institute For Advanced Study, Princeton In 1945, Professor J. von Neumann, who was then working at the Moore School of Engineering in Philadelphia, where the E.N.I.A.C. had been built, issued on behalf of a group of his co-workers, a report on the logical design of digital computers. The report contained a detailed proposal for the design of the machine that has since become known as the E.D.V.A.C. (electronic discrete variable automatic computer). This machine has only recently been completed in America, but the von Neumann report inspired the construction of the E.D.S.A.C. (electronic delay-storage automatic calculator) in Cambridge (see p. 130). In 1947, Burks, Goldstine and von Neumann published another report that outlined the design of another type of machine (a parallel machine this time) that would be exceedingly fast, capable perhaps of 20,000 operations per second. 
They pointed out that the outstanding problem in constructing such a machine was the development of suitable memory with instantaneously accessible contents. At first they suggested using a specialvacuum tube—called the "Selectron"—which the Princeton Laboratories of RCA had invented. These tubes were expensive and difficult to make, so von Neumann subsequently decided to build a machine based on theWilliams memory. This machine—completed in June, 1952 in Princeton—has become popularly known as the Maniac. The design of this machine inspired at least half a dozen machines now being built in America, all known affectionately as "Johniacs". In the same book, the first two paragraphs of a chapter on ACE read as follows:[17] Automatic Computation at the National Physical Laboratory One of the most modern digital computers which embodies developments and improvements in the technique of automatic electronic computing was recently demonstrated at the National Physical Laboratory, Teddington, where it has been designed and built by a small team of mathematicians and electronics research engineers on the staff of the Laboratory, assisted by a number of production engineers from the English Electric Company, Limited. The equipment so far erected at the Laboratory is only the pilot model of a much larger installation which will be known as the Automatic Computing Engine, but although comparatively small in bulk and containing only about 800 thermionic valves, as can be judged from Plates XII, XIII and XIV, it is an extremely rapid and versatile calculating machine. The basic concepts and abstract principles of computation by a machine were formulated by Dr. A. M. Turing, F.R.S., in a paper1. read before the London Mathematical Society in 1936, but work on such machines in Britain was delayed by the war. In 1945, however, an examination of the problems was made at the National Physical Laboratory by Mr. J. R. Womersley, then superintendent of the Mathematics Division of the Laboratory. He was joined by Dr. Turing and a small staff of specialists, and, by 1947, the preliminary planning was sufficiently advanced to warrant the establishment of the special group already mentioned. In April, 1948, the latter became the Electronics Section of the Laboratory, under the charge of Mr. F. M. Colebrook. TheFirst Draftdescribed a design that was used by many universities and corporations to construct their computers.[18]Among these various computers, only ILLIAC and ORDVAC had compatible instruction sets. The date information in the following chronology is difficult to put into proper order. Some dates are for first running a test program, some dates are the first time the computer was demonstrated or completed, and some dates are for the first delivery or installation. Through the decades of the 1960s and 1970s computers generally became both smaller and faster, which led to evolutions in their architecture. For example,memory-mapped I/Olets input and outputdevicesbe treated the same as memory.[26]A singlesystem buscould be used to provide a modular system with lower cost[clarification needed]. This is sometimes called a "streamlining" of the architecture.[27]In subsequent decades, simplemicrocontrollerswould sometimes omit features of the model to lower cost and size. Larger computers added features for higher performance. 
The use of the same bus to fetch instructions and data leads to thevon Neumann bottleneck, the limitedthroughput(data transfer rate) between thecentral processing unit(CPU) and memory compared to the amount of memory. Because the single bus can only access one of the two classes of memory at a time, throughput is lower than the rate at which the CPU can work. This seriously limits the effective processing speed when the CPU is required to perform minimal processing on large amounts of data. The CPU is continuallyforced to waitfor needed data to move to or from memory. Since CPU speed and memory size have increased much faster than the throughput between them, the bottleneck has become more of a problem, a problem whose severity increases with every new generation of CPU. The von Neumann bottleneck was described byJohn Backusin his 1977 ACMTuring Awardlecture. According to Backus: Surely there must be a less primitive way of making big changes in the store than by pushing vast numbers ofwordsback and forth through the von Neumann bottleneck. Not only is this tube a literal bottleneck for the data traffic of a problem, but, more importantly, it is an intellectual bottleneck that has kept us tied to word-at-a-time thinking instead of encouraging us to think in terms of the larger conceptual units of the task at hand. Thus programming is basically planning and detailing the enormous traffic of words through the von Neumann bottleneck, and much of that traffic concerns not significant data itself, but where to find it.[28][29] There are several known methods for mitigating the Von Neumann performance bottleneck. For example, the following all can improve performance[why?]: The problem can also be sidestepped somewhat by usingparallel computing, using for example thenon-uniform memory access(NUMA) architecture—this approach is commonly employed bysupercomputers. It is less clear whether theintellectual bottleneckthat Backus criticized has changed much since 1977. Backus's proposed solution has not had a major influence.[citation needed]Modernfunctional programmingandobject-oriented programmingare much less geared towards "pushing vast numbers of words back and forth"[how?]than earlier languages likeFORTRANwere, but internally, that is still what computers spend much of their time doing, even highly parallel supercomputers.[citation needed] Aside from the von Neumann bottleneck, program modifications can be quite harmful, either by accident or design.[citation needed]In some simple stored-program computer designs[which?], a malfunctioning program can damage itself, other programs, or the operating system, possibly leading to a computer crash. However, this problem also applies to conventional programs that lackbounds checking.Memory protectionand various access controls generally safeguard against both accidental and malicious program changes.
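As a concrete illustration of the stored-program idea, and of why instruction and data traffic compete for the same memory path, here is a minimal sketch of a fetch-decode-execute loop in which program and operands share one address space. The instruction format and the tiny program are invented for illustration only.

```python
# A minimal sketch of a stored-program (von Neumann-style) machine:
# instructions and data live in the same memory and travel over the
# same "bus" (here, plain list indexing).
def run(memory):
    acc, pc, bus_accesses = 0, 0, 0
    while True:
        op, arg = memory[pc]          # instruction fetch (one memory access)
        bus_accesses += 1
        pc += 1
        if op == "LOAD":
            acc = memory[arg]         # data fetch uses the same memory
            bus_accesses += 1
        elif op == "ADD":
            acc += memory[arg]
            bus_accesses += 1
        elif op == "STORE":
            memory[arg] = acc
            bus_accesses += 1
        elif op == "HALT":
            return acc, bus_accesses

# Program and data in one address space: cells 0-3 hold instructions,
# cells 4-5 hold operands.
memory = [("LOAD", 4), ("ADD", 5), ("STORE", 5), ("HALT", 0), 20, 22]
print(run(memory))   # (42, 7): every step pays for at least one memory access
```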
https://en.wikipedia.org/wiki/Von_Neumann_architecture
Acold caseis acrime, or a suspected crime, that has not yet been fully resolved and is not the subject of a currentcriminal investigation, but for which new information could emerge from new witness testimony, re-examined archives, new or retained material evidence, or fresh activities of a suspect. New technological methods developed after the crime was committed can be used on the surviving evidence for analysis often with conclusive results. Typically, cold cases areviolentand other majorfelonycrimes, such asmurderandrape, which—unlike unsolved minor crimes—are generally not subject to astatute of limitations. Sometimes disappearances can also be considered cold cases if the victim has not been seen or heard from for some time, such as the case ofNatalee Hollowayor theBeaumont children. The rate of cold cases being solved are slowly declining, soon less than 30% will be solved per year. About 35% of those cases are not cold cases at all. Some cases become instantly cold when a seemingly closed (solved) case is re-opened due to the discovery of new evidence pointing away from the original suspect(s). Other cases are cold when the crime is discovered well after the fact—for example, by the discovery of human remains.[1]Some cases become classified cold cases when a case that had been originally ruled an accident or suicide is re-designated as murder when new evidence emerges. TheJohn Christiemurders is a notable case whenTimothy Evanswas wrongly executed for the alleged murders of his wife and child. Many other bodies were later found in the house where they lived with Christie, and he was then executed for the crimes. The case helped a campaign againstcapital punishmentin Britain. A case is considered unsolved until asuspecthas been identified,charged, andtriedfor the crime. A case that goes to trial and does not result in aconvictioncan also be kept on the books pending newevidence. In some cases, a suspect, often called a "person of interest" or "subject" is identified early on but no evidence definitively linking the subject to the crime is found at that time and more often than not the subject is not forthcoming with a confession. This often happens in cases where the subject has analibi, alibi witnesses, or lack of forensic evidence. Eventually, the alibi is disproved, the witnesses recanted their statements or advances in forensics helped bring the subjects to justice. Sometimes a case is not solved but forensic evidence helps to determine that the crimes areserial crimes. TheBTKcase andOriginal Night Stalkercases are such examples.[2]TheTexas Rangershave established a website[3]in the hopes that it shall elicit new information and investigative leads.[4] Sometimes, a viable suspect has been overlooked or simply ignored due to then-flimsy circumstantial evidence, the presence of a likelier suspect (who is later proven to be innocent), or a tendency of investigators to zero in on someone else to the exclusion of other possibilities (which goes back to the likelier suspect angle)—known as "tunnel vision". With the advent of and improvements toDNA testing/DNA profilingand otherforensicstechnology, many cold cases are being re-opened andprosecuted.[5]Policedepartments are opening cold case units whose job is to re-examine cold case files. DNA evidence helps in such cases but as in the case of fingerprints, it is of no value unless there is evidence on file to compare it to. 
However, to combat that issue, the FBI is switching from the Integrated Automated Fingerprint Identification System (IAFIS) to a newer technology called Next Generation Identification (NGI). Other improvements in forensics lie in fields such as: The identity of Jack the Ripper is a notorious example of an outstanding cold case, with numerous suggestions as to the identity of the serial killer. Similarly, the Zodiac Killer has been studied extensively for almost 50 years, with numerous suspects discussed and debated. The perpetrators of the Wall Street bombing of 1920 have never been positively identified, though the Galleanists, a group of Italian anarchists, are widely believed to have planned the explosion. The burning of the Reichstag building in 1933 remains controversial and, although Marinus van der Lubbe was tried, convicted and executed for arson, it is possible that the Reichstag fire was perpetrated by the Nazis to enhance their power and destroy democracy in Germany. The phrase "Cold Case" is found in a number of story and book titles. Examples include:
https://en.wikipedia.org/wiki/Cold_case
Hypothetical technologies are technologies that do not exist yet, but that could exist in the future.[1] They are distinct from emerging technologies, which have achieved some developmental success. Emerging technologies as of 2018 include 3-D metal printing and artificial embryos.[2] Many hypothetical technologies have been the subject of science fiction. The criteria for this list are that the technology:
https://en.wikipedia.org/wiki/List_of_hypothetical_technologies
In computability theory, an undecidable problem is a decision problem for which an effective method (algorithm) to derive the correct answer does not exist. More formally, an undecidable problem is a problem whose language is not a recursive set; see the article Decidable language. There are uncountably many undecidable problems, so the list below is necessarily incomplete. Though undecidable languages are not recursive languages, they may be subsets of Turing-recognizable languages: i.e., such undecidable languages may be recursively enumerable. Many, if not most, undecidable problems in mathematics can be posed as word problems: determining when two distinct strings of symbols (encoding some mathematical concept or object) represent the same object or not. For undecidability in axiomatic mathematics, see List of statements undecidable in ZFC.
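The standard diagonalization argument for the best-known undecidable problem, the halting problem, can be sketched in code. The decider `halts` below is hypothetical; the point of the construction is precisely that no correct, total implementation of it can exist.

```python
# Sketch of the classic diagonalization argument for the halting problem.
# `halts` is hypothetical: assume, for contradiction, that it is a total,
# computable decider returning True exactly when f(x) would terminate.
def halts(f, x):
    raise NotImplementedError("no correct, total implementation can exist")

def paradox(f):
    if halts(f, f):      # if f(f) would halt ...
        while True:      # ... then loop forever,
            pass
    return None          # otherwise halt immediately.

# Running paradox on itself is contradictory:
#   halts(paradox, paradox) == True  would mean paradox(paradox) never halts;
#   halts(paradox, paradox) == False would mean paradox(paradox) halts.
# Hence no algorithm decides the halting problem, so its language is
# recursively enumerable but not recursive.
```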
https://en.wikipedia.org/wiki/List_of_undecidable_problems
This list of unsolved deaths includes notable cases where: Cases where there are unofficial alternative theories about deaths – the most common theory being that the death was a homicide – can be found under: Death conspiracy theories.
https://en.wikipedia.org/wiki/List_of_unsolved_deaths
Incomputability theoryandcomputational complexity theory, amany-one reduction(also calledmapping reduction[1]) is areductionthat converts instances of onedecision problem(whether an instance is inL1{\displaystyle L_{1}}) to another decision problem (whether an instance is inL2{\displaystyle L_{2}}) using acomputable function. The reduced instance is in the languageL2{\displaystyle L_{2}}if and only if the initial instance is in its languageL1{\displaystyle L_{1}}. Thus if we can decide whetherL2{\displaystyle L_{2}}instances are in the languageL2{\displaystyle L_{2}}, we can decide whetherL1{\displaystyle L_{1}}instances are in the languageL1{\displaystyle L_{1}}by applying the reduction and solving forL2{\displaystyle L_{2}}. Thus, reductions can be used to measure the relative computational difficulty of two problems. It is said thatL1{\displaystyle L_{1}}reduces toL2{\displaystyle L_{2}}if, in layman's termsL2{\displaystyle L_{2}}is at least as hard to solve asL1{\displaystyle L_{1}}. This means that any algorithm that solvesL2{\displaystyle L_{2}}can also be used as part of a (otherwise relatively simple) program that solvesL1{\displaystyle L_{1}}. Many-one reductions are a special case and stronger form ofTuring reductions.[1]With many-one reductions, the oracle (that is, our solution forL2{\displaystyle L_{2}}) can be invoked only once at the end, and the answer cannot be modified. This means that if we want to show that problemL1{\displaystyle L_{1}}can be reduced to problemL2{\displaystyle L_{2}}, we can use our solution forL2{\displaystyle L_{2}}only once in our solution forL1{\displaystyle L_{1}}, unlike in Turing reductions, where we can use our solution forL2{\displaystyle L_{2}}as many times as needed in order to solve the membership problem for the given instance ofL1{\displaystyle L_{1}}. Many-one reductions were first used byEmil Postin a paper published in 1944.[2]LaterNorman Shapiroused the same concept in 1956 under the namestrong reducibility.[3] SupposeA{\displaystyle A}andB{\displaystyle B}areformal languagesover thealphabetsΣ{\displaystyle \Sigma }andΓ{\displaystyle \Gamma }, respectively. Amany-one reductionfromA{\displaystyle A}toB{\displaystyle B}is atotal computable functionf:Σ∗→Γ∗{\displaystyle f:\Sigma ^{*}\rightarrow \Gamma ^{*}}that has the property that each wordw{\displaystyle w}is inA{\displaystyle A}if and only iff(w){\displaystyle f(w)}is inB{\displaystyle B}. If such a functionf{\displaystyle f}exists, one says thatA{\displaystyle A}ismany-one reducibleorm-reducibletoB{\displaystyle B}and writes Given two setsA,B⊆N{\displaystyle A,B\subseteq \mathbb {N} }one saysA{\displaystyle A}ismany-one reducibletoB{\displaystyle B}and writes if there exists atotal computable functionf{\displaystyle f}withx∈A{\displaystyle x\in A}ifff(x)∈B{\displaystyle f(x)\in B}. If the many-one reductionf{\displaystyle f}isinjective, one speaks of a one-one reduction and writesA≤1B{\displaystyle A\leq _{1}B}. If the one-one reductionf{\displaystyle f}issurjective, one saysA{\displaystyle A}isrecursively isomorphictoB{\displaystyle B}and writes[4]p.324 If bothA≤mB{\displaystyle A\leq _{\mathrm {m} }B}andB≤mA{\displaystyle B\leq _{\mathrm {m} }A}, one saysA{\displaystyle A}ismany-one equivalentorm-equivalenttoB{\displaystyle B}and writes A setB{\displaystyle B}is calledmany-one complete, or simplym-complete,iffB{\displaystyle B}is recursively enumerable and every recursively enumerable setA{\displaystyle A}is m-reducible toB{\displaystyle B}. 
The relation≡m{\displaystyle \equiv _{m}}indeed is anequivalence, itsequivalence classesare called m-degrees and form a posetDm{\displaystyle {\mathcal {D}}_{m}}with the order induced by≤m{\displaystyle \leq _{m}}.[4]p.257 Some properties of the m-degrees, some of which differ from analogous properties ofTuring degrees:[4]pp.555--581 There is a characterization ofDm{\displaystyle {\mathcal {D}}_{m}}as the unique poset satisfying several explicit properties of itsideals, a similar characterization has eluded the Turing degrees.[4]pp.574--575 Myhill's isomorphism theoremcan be stated as follows: "For all setsA,B{\displaystyle A,B}of natural numbers,A≡B⟺A≡1B{\displaystyle A\equiv B\iff A\equiv _{1}B}." As a corollary,≡{\displaystyle \equiv }and≡1{\displaystyle \equiv _{1}}have the same equivalence classes.[4]p.325The equivalences classes of≡1{\displaystyle \equiv _{1}}are called the1-degrees. Many-one reductions are often subjected to resource restrictions, for example that the reduction function is computable in polynomial time, logarithmic space, byAC0{\displaystyle AC_{0}}orNC0{\displaystyle NC_{0}}circuits, or polylogarithmic projections where each subsequent reduction notion is weaker than the prior; seepolynomial-time reductionandlog-space reductionfor details. Given decision problemsA{\displaystyle A}andB{\displaystyle B}and analgorithmNthat solves instances ofB{\displaystyle B}, we can use a many-one reduction fromA{\displaystyle A}toB{\displaystyle B}to solve instances ofA{\displaystyle A}in: We say that a classCof languages (or a subset of thepower setof the natural numbers) isclosed under many-one reducibilityif there exists no reduction from a language outsideCto a language inC. If a class is closed under many-one reducibility, then many-one reduction can be used to show that a problem is inCby reducing it to a problem inC. Many-one reductions are valuable because most well-studied complexity classes are closed under some type of many-one reducibility, includingP,NP,L,NL,co-NP,PSPACE,EXP, and many others. It is known for example that the first four listed are closed up to the very weak reduction notion of polylogarithmic time projections. These classes are not closed under arbitrary many-one reductions, however. One may also ask about generalized cases of many-one reduction. One such example ise-reduction, where we considerf:A→B{\displaystyle f:A\to B}that are recursively enumerable instead of restricting to recursivef{\displaystyle f}. The resulting reducibility relation is denoted≤e{\displaystyle \leq _{e}}, and its poset has been studied in a similar vein to that of the Turing degrees. For example, there is a jump set0e′{\displaystyle {\boldsymbol {0}}_{e}^{'}}fore-degrees. Thee-degrees do admit some properties differing from those of the poset of Turing degrees, e.g. an embedding of the diamond graph into the degrees below′e{\displaystyle {\boldsymbol {'}}_{e}}.[5] Apolynomial-timemany-one reduction from a problemAto a problemB(both of which are usually required to bedecision problems) is a polynomial-time algorithm for transforming inputs to problemAinto inputs to problemB, such that the transformed problem has the same output as the original problem. An instancexof problemAcan be solved by applying this transformation to produce an instanceyof problemB, givingyas the input to an algorithm for problemB, and returning its output. Polynomial-time many-one reductions may also be known aspolynomial transformationsorKarp reductions, named afterRichard Karp. 
A reduction of this type is denoted by $A \leq_{m}^{P} B$ or $A \leq_{p} B$.[6][7]
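As a concrete, Karp-style example of a polynomial-time many-one reduction, the sketch below maps instances of INDEPENDENT-SET to instances of CLIQUE by complementing the edge set. The brute-force decider for the target problem stands in for the single oracle call at the end and is only practical for tiny graphs.

```python
from itertools import combinations

# Many-one reduction sketch: INDEPENDENT-SET reduces to CLIQUE via the
# complement graph. An instance (G, k) has an independent set of size k
# iff the complemented instance (G', k) has a clique of size k.

def complement(n, edges):
    """Computable reduction function f: complement the edge set."""
    all_pairs = {frozenset(p) for p in combinations(range(n), 2)}
    return all_pairs - edges

def has_clique(n, edges, k):
    """Brute-force decider for the target problem B (tiny graphs only)."""
    return any(all(frozenset(p) in edges for p in combinations(group, 2))
               for group in combinations(range(n), k))

def has_independent_set(n, edges, k):
    """Decider for the source problem A, obtained by reducing to B."""
    return has_clique(n, complement(n, edges), k)   # single oracle call at the end

# 4-cycle 0-1-2-3-0: {0, 2} is an independent set of size 2.
edges = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0)]}
print(has_independent_set(4, edges, 2))   # True
```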
https://en.wikipedia.org/wiki/Many-one_reduction
Incomputational complexity theoryandgame complexity, aparsimonious reductionis a transformation from one problem to another (areduction) that preserves the number of solutions. Informally, it is abijectionbetween the respective sets of solutions of two problems. A general reduction from problemA{\displaystyle A}to problemB{\displaystyle B}is a transformation that guarantees that wheneverA{\displaystyle A}has a solutionB{\displaystyle B}also hasat least onesolution and vice versa. A parsimonious reduction guarantees that for every solution ofA{\displaystyle A}, there existsa unique solutionofB{\displaystyle B}and vice versa. Parsimonious reductions are commonly used in computational complexity for proving the hardness ofcounting problems, for counting complexity classes such as#P. Additionally, they are used in game complexity, as a way to design hard puzzles that have a unique solution, as many types of puzzles require. Letx{\displaystyle x}be an instance of problemX{\displaystyle X}. AParsimonious reductionR{\displaystyle R}from problemX{\displaystyle X}to problemY{\displaystyle Y}is a reduction such that the number of solutions tox{\displaystyle x}is equal to the number of solutions to problemR(x){\displaystyle R(x)}.[1]If such a reduction exists, and if we have an oracle that counts the number of solutions toR(x){\displaystyle R(x)}which is an instance ofY{\displaystyle Y}, then we can design an algorithm that counts the number of solutions tox{\displaystyle x}, the corresponding instance ofX{\displaystyle X}. Consequently, if counting the number of solutions to the instances ofX{\displaystyle X}is hard, then counting the number of solutions toY{\displaystyle Y}must be hard as well. Just asmany-one reductionsare important for provingNP-completeness, parsimonious reductions are important for proving completeness forcounting complexity classessuch as#P.[1]Because parsimonious reductions preserve the property of having a unique solution, they are also used ingame complexity, to show the hardness of puzzles such assudokuwhere the uniqueness of the solution is an important part of the definition of the puzzle.[2] Specific types of parsimonious reductions may be defined by the computational complexity or other properties of the transformation algorithm. For instance, apolynomial-time parsimonious reductionis one in which the transformation algorithm takespolynomial time. These are the types of reduction used to prove#P-Completeness.[1]Inparameterized complexity,FPTparsimonious reductionsare used; these are parsimonious reductions whose transformation is a fixed-parameter tractable algorithm and that map bounded parameter values to bounded parameter values by a computable function.[3] Polynomial-time parsimonious reductions are a special case of a more general class of reductions for counting problems, thepolynomial-time counting reductions.[4] One common technique used in proving that a reductionR{\displaystyle R}is parsimonious is to show that there is a bijection between the set of solutions tox{\displaystyle x}and the set of solutions toR(x){\displaystyle R(x)}which guarantees that the number of solutions to both problems is the same. The class #P contains the counting versions of NP decision problems. Given an instancex{\displaystyle x}of an NP decision problemX,{\displaystyle X,}the problem#x{\displaystyle \#x}asks for the number of solutions to problemx.{\displaystyle x.}The examples of#P-completenessbelow rely on the fact that #SAT is #P-complete. This is the counting version of3SAT. 
One can show that any boolean formulacan be rewrittenas a formula in 3-CNFform. Any valid assignment of a boolean formula is a valid assignment of the corresponding 3-CNF formula, and vice versa. Hence, this reduction preserves the number of satisfying assignments, and is a parsimonious reduction. Then, #SAT and #3SAT are counting equivalents, and #3SAT is #P-complete as well. This is the counting version of Planar 3SAT. The hardness reduction from 3SAT to Planar 3SAT given by Lichtenstein[5]has the additional property that for every valid assignment of an instance of 3SAT, there is a unique valid assignment of the corresponding instance of Planar 3SAT, and vice versa. Hence the reduction is parsimonious, and consequently Planar #3SAT is #P-complete. The counting version ofthisproblem asks for the number of Hamiltonian cycles in a givendirected graph. Seta Takahiro provided a reduction[6]from 3SAT to this problem when restricted to planar directed max degree-3 graphs. The reduction provides a bijection between the solutions to an instance of 3SAT and the solutions to an instance of Hamiltonian Cycle in planar directed max degree-3 graphs. Hence the reduction is parsimonious and Hamiltonian Cycle in planar directed max degree-3 graphs is #P-complete. Consequently, the general version of Hamiltonian Cycle problem must be #P-complete as well. Shakashakais an example of how parsimonious reduction could be used in showing hardness of logic puzzles. The decision version of this problem asks whether there is a solution to a given instance of the puzzle. The counting version asks for the number of distinct solutions to such a problem. The reduction from Planar 3SAT given by Demaine, Okamoto, Uehara and Uno[7]also provides a bijection between the set of solutions to an instance of Planar 3SAT and the set of solutions to the corresponding instance of Shakashaka. Hence the reduction is parsimonious, and the counting version of Shakashaka is #P-complete.
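The complement-graph reduction between INDEPENDENT-SET and CLIQUE is also parsimonious: size-k independent sets of a graph are in bijection with size-k cliques of its complement. The brute-force counting below simply checks that the two solution counts agree on a small example; it is purely illustrative and unrelated to the #SAT reductions discussed above.

```python
from itertools import combinations

# Parsimonious reduction check: counting size-k independent sets of G
# gives the same number as counting size-k cliques of the complement of G.

def complement(n, edges):
    return {frozenset(p) for p in combinations(range(n), 2)} - edges

def count_cliques(n, edges, k):
    return sum(all(frozenset(p) in edges for p in combinations(g, 2))
               for g in combinations(range(n), k))

def count_independent_sets(n, edges, k):
    return sum(all(frozenset(p) not in edges for p in combinations(g, 2))
               for g in combinations(range(n), k))

# 5-cycle: the independent sets of size 2 are the 5 non-adjacent vertex pairs.
n, k = 5, 2
edges = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]}
print(count_independent_sets(n, edges, k),        # 5
      count_cliques(n, complement(n, edges), k))  # 5 -- same count
```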
https://en.wikipedia.org/wiki/Parsimonious_reduction
Incomputability theory, manyreducibility relations(also calledreductions,reducibilities, andnotions of reducibility) are studied. They are motivated by the question: given setsA{\displaystyle A}andB{\displaystyle B}of natural numbers, is it possible to effectively convert a method for deciding membership inB{\displaystyle B}into a method for deciding membership inA{\displaystyle A}? If the answer to this question is affirmative thenA{\displaystyle A}is said to bereducible toB{\displaystyle B}. The study of reducibility notions is motivated by the study ofdecision problems. For many notions of reducibility, if anynoncomputableset is reducible to a setA{\displaystyle A}thenA{\displaystyle A}must also be noncomputable. This gives a powerful technique for proving that many sets are noncomputable. Areducibility relationis a binary relation on sets of natural numbers that is These two properties imply that reducibility is apreorderon the powerset of the natural numbers. Not all preorders are studied as reducibility notions, however. The notions studied in computability theory have the informal property thatA{\displaystyle A}is reducible toB{\displaystyle B}if and only if any (possibly noneffective) decision procedure forB{\displaystyle B}can be effectively converted to a decision procedure forA{\displaystyle A}. The different reducibility relations vary in the methods they permit such a conversion process to use. Every reducibility relation (in fact, every preorder) induces an equivalence relation on the powerset of the natural numbers in which two sets are equivalent if and only if each one is reducible to the other. In computability theory, these equivalence classes are called thedegreesof the reducibility relation. For example, the Turing degrees are the equivalence classes of sets of naturals induced by Turing reducibility. The degrees of any reducibility relation arepartially orderedby the relation in the following manner. Let≤{\displaystyle \leq }be a reducibility relation and letC{\displaystyle C}andD{\displaystyle D}be two of its degrees. ThenC≤D{\displaystyle C\leq D}if and only if there is a setA{\displaystyle A}inC{\displaystyle C}and a setB{\displaystyle B}inD{\displaystyle D}such thatA≤B{\displaystyle A\leq B}. This is equivalent to the property that for every setA{\displaystyle A}inC{\displaystyle C}and every setB{\displaystyle B}inD{\displaystyle D},A≤B{\displaystyle A\leq B}, because any two sets inCare equivalent and any two sets inD{\displaystyle D}are equivalent. It is common, as shown here, to use boldface notation to denote degrees. The most fundamental reducibility notion isTuring reducibility. A setA{\displaystyle A}of natural numbers isTuring reducibleto a setB{\displaystyle B}if and only if there is anoracle Turing machinethat, when run withB{\displaystyle B}as its oracle set, will compute theindicator function(characteristic function) ofA{\displaystyle A}. Equivalently,A{\displaystyle A}is Turing reducible toB{\displaystyle B}if and only if there is an algorithm for computing the indicator function forA{\displaystyle A}provided that the algorithm is provided with a means to correctly answer questions of the form "Isn{\displaystyle n}inB{\displaystyle B}?". Turing reducibility serves as a dividing line for other reducibility notions because, according to theChurch-Turing thesis, it is the most general reducibility relation that is effective. 
Reducibility relations that imply Turing reducibility have come to be known asstrong reducibilities, while those that are implied by Turing reducibility areweak reducibilities.Equivalently, a strong reducibility relation is one whose degrees form a finer equivalence relation than the Turing degrees, while a weak reducibility relation is one whose degrees form a coarser equivalence relation than Turing equivalence. The strong reducibilities include Many of these were introduced by Post (1944). Post was searching for a non-computable,computably enumerableset which thehalting problemcould not be Turing reduced to. As he could not construct such a set in 1944, he instead worked on the analogous problems for the various reducibilities that he introduced. These reducibilities have since been the subject of much research, and many relationships between them are known. Aboundedform of each of the above strong reducibilities can be defined. The most famous of these is bounded truth-table reduction, but there are also bounded Turing, bounded weak truth-table, and others. These first three are the most common ones and they are based on the number of queries. For example, a setA{\displaystyle A}is bounded truth-table reducible toB{\displaystyle B}if and only if the Turing machineM{\displaystyle M}computingA{\displaystyle A}relative toB{\displaystyle B}computes a list of up ton{\displaystyle n}numbers, queriesB{\displaystyle B}on these numbers and then terminates for all possible oracle answers; the valuen{\displaystyle n}is a constant independent ofx{\displaystyle x}. The difference between bounded weak truth-table and bounded Turing reduction is that in the first case, the up ton{\displaystyle n}queries have to be made at the same time while in the second case, the queries can be made one after the other. For that reason, there are cases whereA{\displaystyle A}is bounded Turing reducible toB{\displaystyle B}but not weak truth-table reducible toB{\displaystyle B}. The strong reductions listed above restrict the manner in which oracle information can be accessed by a decision procedure but do not otherwise limit the computational resources available. Thus if a setA{\displaystyle A}isdecidablethenA{\displaystyle A}is reducible to any setB{\displaystyle B}under any of the strong reducibility relations listed above, even ifA{\displaystyle A}is not polynomial-time or exponential-time decidable. This is acceptable in the study of computability theory, which is interested in theoretical computability, but it is not reasonable forcomputational complexity theory, which studies which sets can be decided under certain asymptotical resource bounds. The most common reducibility in computational complexity theory ispolynomial-time reducibility; a setAis polynomial-time reducible to a setB{\displaystyle B}if there is a polynomial-time functionfsuch that for everyn{\displaystyle n},n{\displaystyle n}is inA{\displaystyle A}if and only iff(n){\displaystyle f(n)}is inB{\displaystyle B}. This reducibility is, essentially, a resource-bounded version of many-one reducibility. Other resource-bounded reducibilities are used in other contexts of computational complexity theory where other resource bounds are of interest. Although Turing reducibility is the most general reducibility that is effective, weaker reducibility relations are commonly studied. These reducibilities are related to the relative definability of sets over arithmetic or set theory. They include:
https://en.wikipedia.org/wiki/Reduction_(recursion_theory)
In computability theory, a truth-table reduction is a type of reduction from a decision problem A to a decision problem B. To solve a problem in A, the reduction describes the answer to A as a boolean formula or truth table of some finite number of queries to B. Truth-table reductions are related to Turing reductions, and strictly weaker. (That is, not every Turing reduction between sets can be performed by a truth-table reduction, but every truth-table reduction can be performed by a Turing reduction.) A Turing reduction from a set B to a set A computes the membership of a single element in B by asking questions about the membership of various elements in A during the computation; it may adaptively determine which questions it asks based upon answers to previous questions. In contrast, a truth-table reduction or a weak truth-table reduction must present all of its (finitely many) oracle queries at the same time. In a truth-table reduction, the reduction also gives a boolean formula (a truth table) that, when given the answers to the queries, will produce the final answer of the reduction. Truth-table reductions appear in a paper by Emil Post published in 1944.[1] A weak truth-table reduction is one where the reduction uses the oracle answers as a basis for further computation, which may depend on the given answers but may not ask further questions of the oracle. It is so named because it weakens the constraints placed on a truth-table reduction, and provides a weaker equivalence classification; as such, a "weak truth-table reduction" can actually be more powerful than a truth-table reduction as a "tool", and perform a reduction that is not performable by truth table. Equivalently, a weak truth-table reduction is a Turing reduction for which the use of the reduction is bounded by a computable function. For this reason, they are sometimes referred to as bounded Turing (bT) reductions rather than as weak truth-table (wtt) reductions. As every truth-table reduction is a Turing reduction, if A is truth-table reducible to B (A ≤tt B), then A is also Turing reducible to B (A ≤T B). More generally, one-one reducibility implies many-one reducibility, which implies truth-table reducibility, which in turn implies weak truth-table reducibility, which in turn implies Turing reducibility. Furthermore, A is truth-table reducible to B if and only if A is Turing reducible to B via a total functional on $2^{\omega}$. The forward direction is trivial, and for the reverse direction suppose $\Gamma$ is a total computable functional. To build the truth table for computing A(n), simply search for a number m such that for all binary strings $\sigma$ of length m, $\Gamma^{\sigma}(n)$ converges. Such an m must exist by Kőnig's lemma, since $\Gamma$ must be total on all paths through $2^{<\omega}$. Given such an m, it is a simple matter to find the unique truth table that gives $\Gamma^{\sigma}(n)$ when applied to $\sigma$. The forward direction fails for weak truth-table reducibility.
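A small sketch of the shape of a truth-table reduction: all oracle queries are produced up front, and a fixed boolean formula combines the answers. The particular sets used here (B the primes, A the lower members of twin-prime pairs) are toy choices of my own, both computable, picked only to make the non-adaptive structure visible; the interesting cases in the article involve non-computable oracles.

```python
# Truth-table reduction sketch: membership in A is decided by a fixed,
# non-adaptive batch of queries to an oracle for B, combined by a fixed
# boolean formula. Toy sets: B = primes, A = { n : n and n+2 are prime }.

def is_prime(m):                       # stands in for the oracle for B
    return m > 1 and all(m % d for d in range(2, int(m ** 0.5) + 1))

def queries(n):
    """All oracle questions are produced up front, before any answers."""
    return [n, n + 2]

def truth_table(answers):
    """Fixed boolean combination of the oracle's answers (here: AND)."""
    return all(answers)

def in_A(n, oracle=is_prime):
    answers = [oracle(q) for q in queries(n)]   # one non-adaptive batch
    return truth_table(answers)

print([n for n in range(2, 50) if in_A(n)])   # [3, 5, 11, 17, 29, 41]
```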
https://en.wikipedia.org/wiki/Truth_table_reduction
Incomputer scienceandinformation theory,data differencingordifferential compressionis producing a technical description of the difference between two sets of data – a source and a target. Formally, a data differencing algorithm takes as input source data and target data, and produces difference data such that given the source data and the difference data, one can reconstruct the target data ("patching" the source with the difference to produce the target). One of the best-known examples of data differencing is thediffutility, which produces line-by-line differences oftext files(and in some implementations,binary files, thus being a general-purpose differencing tool). Differencing of general binary files goes under the rubric ofdelta encoding, with a widely used example being the algorithm used inrsync. A standardized generic differencing format isVCDIFF, implemented in such utilities asXdeltaversion 3. A high-efficiency (small patch files) differencing program is bsdiff, which usesbzip2as a final compression step on the generated delta.[1] Main concerns for data differencing areusabilityandspace efficiency(patch size). If one simply wishes to reconstruct the target given the source and patch, one may simply include the entire target in the patch and "apply" the patch by discarding the source and outputting the target that has been included in the patch; similarly, if the source and target have the same size one may create a simple patch byXORingsource and target. In both these cases, the patch will be as large as the target. As these examples show, if the only concern is reconstruction of target, this is easily done, at the expense of a large patch, and the main concern for general-purpose binary differencing is reducing the patch size. For structured data especially, one has other concerns, which largely fall under "usability" – for example, if one iscomparingtwo documents, one generally wishes to knowwhichsections have changed, or if some sections have been moved around – one wishes to understandhowthe documents differ. For instance "here 'cat' was changed to 'dog', and paragraph 13 was moved to paragraph 14". One may also wish to haverobustdifferences – for example, if two documents A and B differ in paragraph 13, one may wish to be able to apply this patch even if one has changed paragraph 7 of A. An example of this is in diff, which shows which lines changed, and where the context format allows robustness and improves human readability. Other concerns include computational efficiency, as for data compression – finding a small patch can be very time and memory intensive. Best results occur when one has knowledge of the data being compared and other constraints:diffis designed for line-oriented text files, particularly source code, and works best for these; thersyncalgorithm is used based on source and target being across a network from each other and communication being slow, so it minimizes data that must be transmitted; and the updates forGoogle Chromeuse an algorithm customized to the archive and executable format of the program's data.[2][3] Data compressioncan be seen as a special case of data differencing[4][5]– data differencing consists of producing adifferencegiven asourceand atarget, with patching producing atargetgiven asourceand adifference,while data compression consists of producing a compressed file given a target, and decompression consists of producing a target given only a compressed file. 
Thus, one can consider data compression as data differencing with empty source data, the compressed file corresponding to a "difference from nothing". This is the same as considering absolute entropy (corresponding to data compression) as a special case of relative entropy (corresponding to data differencing) with no initial data. When one wishes to emphasize the connection, one may use the term differential compression to refer to data differencing. A dictionary translating between the terminology of the two fields is given as:
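The equal-size XOR scheme mentioned above is easy to write down. The sketch shows the patching contract (source plus difference reconstructs target) while also showing why it is a poor general-purpose differencer: the patch is exactly as large as the target.

```python
# Minimal sketch of the "same-size" differencing scheme described above:
# when source and target have equal length, the patch can simply be their
# byte-wise XOR, and patching XORs the source with the patch again.

def make_patch(source: bytes, target: bytes) -> bytes:
    assert len(source) == len(target), "this toy scheme needs equal sizes"
    return bytes(s ^ t for s, t in zip(source, target))

def apply_patch(source: bytes, patch: bytes) -> bytes:
    return bytes(s ^ p for s, p in zip(source, patch))

source = b"the cat sat on the mat"
target = b"the dog sat on the mat"
patch = make_patch(source, target)
assert apply_patch(source, patch) == target
# The patch is as large as the target; real tools (diff, rsync, xdelta,
# bsdiff) work much harder to make the patch small.
print(patch)
```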
https://en.wikipedia.org/wiki/Data_differencing
Instatistics,probability theoryandinformation theory,pointwise mutual information(PMI),[1]orpoint mutual information, is a measure ofassociation. It compares the probability of two events occurring together to what this probability would be if the events wereindependent.[2] PMI (especially in itspositive pointwise mutualinformationvariant) has been described as "one of the most important concepts inNLP", where it "draws on the intuition that the best way to weigh the association between two words is to ask how much more the two words co-occur in [a] corpus than we would have expected them to appear by chance."[2] The concept was introduced in 1961 byRobert Fanounder the name of "mutual information", but today that term is instead used for a related measure of dependence between random variables:[2]Themutual information(MI) of two discrete random variables refers to the average PMI of all possible events. The PMI of a pair ofoutcomesxandybelonging todiscrete random variablesXandYquantifies the discrepancy between the probability of their coincidence given theirjoint distributionand their individual distributions, assumingindependence. Mathematically:[2] (with the latter two expressions being equal to the first byBayes' theorem). Themutual information(MI) of the random variablesXandYis the expected value of the PMI (over all possible outcomes). The measure is symmetric (pmi⁡(x;y)=pmi⁡(y;x){\displaystyle \operatorname {pmi} (x;y)=\operatorname {pmi} (y;x)}). It can take positive or negative values, but is zero ifXandYareindependent. Note that even though PMI may be negative or positive, its expected outcome over all joint events (MI) is non-negative. PMI maximizes whenXandYare perfectly associated (i.e.p(x|y){\displaystyle p(x|y)}orp(y|x)=1{\displaystyle p(y|x)=1}), yielding the following bounds: Finally,pmi⁡(x;y){\displaystyle \operatorname {pmi} (x;y)}will increase ifp(x|y){\displaystyle p(x|y)}is fixed butp(x){\displaystyle p(x)}decreases. Here is an example to illustrate: Using this table we canmarginalizeto get the following additional table for the individual distributions: With this example, we can compute four values forpmi⁡(x;y){\displaystyle \operatorname {pmi} (x;y)}. Using base-2 logarithms: (For reference, themutual informationI⁡(X;Y){\displaystyle \operatorname {I} (X;Y)}would then be 0.2141709.) Pointwise Mutual Information has many of the same relationships as the mutual information. In particular, pmi⁡(x;y)=h(x)+h(y)−h(x,y)=h(x)−h(x∣y)=h(y)−h(y∣x){\displaystyle {\begin{aligned}\operatorname {pmi} (x;y)&=&h(x)+h(y)-h(x,y)\\&=&h(x)-h(x\mid y)\\&=&h(y)-h(y\mid x)\end{aligned}}} Whereh(x){\displaystyle h(x)}is theself-information, or−log2⁡p(x){\displaystyle -\log _{2}p(x)}. 
Several variations of PMI have been proposed, in particular to address what has been described as its "two main limitations":[3] The positive pointwise mutual information (PPMI) measure is defined by setting negative values of PMI to zero:[2] ppmi⁡(x;y)≡max(log2⁡p(x,y)p(x)p(y),0){\displaystyle \operatorname {ppmi} (x;y)\equiv \max \left(\log _{2}{\frac {p(x,y)}{p(x)p(y)}},0\right)} This definition is motivated by the observation that "negative PMI values (which imply things are co-occurring less often than we would expect by chance) tend to be unreliable unless our corpora are enormous" and also by a concern that "it's not clear whether it's even possible to evaluate such scores of 'unrelatedness' with human judgment".[2]It also avoids having to deal with−∞{\displaystyle -\infty }values for events that never occur together (p(x,y)=0{\displaystyle p(x,y)=0}), by setting PPMI for these to 0.[2] Pointwise mutual information can be normalized between [-1,+1] resulting in -1 (in the limit) for never occurring together, 0 for independence, and +1 for completeco-occurrence.[4] npmi⁡(x;y)=pmi⁡(x;y)h(x,y){\displaystyle \operatorname {npmi} (x;y)={\frac {\operatorname {pmi} (x;y)}{h(x,y)}}} Whereh(x,y){\displaystyle h(x,y)}is the jointself-information−log2⁡p(x,y){\displaystyle -\log _{2}p(x,y)}. The PMIkmeasure (for k=2, 3 etc.), which was introduced byBéatrice Daillearound 1994, and as of 2011 was described as being "among the most widely used variants", is defined as[5][3] pmik⁡(x;y)≡log2⁡p(x,y)kp(x)p(y)=pmi⁡(x;y)−(−(k−1)log2⁡p(x,y)){\displaystyle \operatorname {pmi} ^{k}(x;y)\equiv \log _{2}{\frac {p(x,y)^{k}}{p(x)p(y)}}=\operatorname {pmi} (x;y)-(-(k-1)\log _{2}p(x,y))} In particular,pmi1(x;y)=pmi(x;y){\displaystyle pmi^{1}(x;y)=pmi(x;y)}. The additional factors ofp(x,y){\displaystyle p(x,y)}inside the logarithm are intended to correct the bias of PMI towards low-frequency events, by boosting the scores of frequent pairs.[3]A 2011 case study demonstrated the success of PMI3in correcting this bias on a corpus drawn from English Wikipedia. Taking x to be the word "football", its most strongly associated words y according to the PMI measure (i.e. those maximizingpmi(x;y){\displaystyle pmi(x;y)}) were domain-specific ("midfielder", "cornerbacks", "goalkeepers") whereas the terms ranked most highly by PMI3were much more general ("league", "clubs", "england").[3] Total correlationis an extension ofmutual informationto multi-variables. Analogously to the definition of total correlation, the extension of PMI to multi-variables is "specific correlation."[6]The SI of the results of random variablesx=(x1,x2,…,xn){\displaystyle {\boldsymbol {x}}=(x_{1},x_{2},\ldots {},x_{n})}is expressed as the following: Likemutual information,[7]point mutual information follows thechain rule, that is, This is proven through application ofBayes' theorem: PMI could be used in various disciplines e.g. in information theory, linguistics or chemistry (in profiling and analysis of chemical compounds).[8]Incomputational linguistics, PMI has been used for findingcollocationsand associations between words. For instance,countingsof occurrences andco-occurrencesof words in atext corpuscan be used to approximate the probabilitiesp(x){\displaystyle p(x)}andp(x,y){\displaystyle p(x,y)}respectively. The following table shows counts of pairs of words getting the most and the least PMI scores in the first 50 millions of words in Wikipedia (dump of October 2015)[citation needed]filtering by 1,000 or more co-occurrences. 
The frequency of each count can be obtained by dividing its value by 50,000,952. (Note: natural log is used to calculate the PMI values in this example, instead of log base 2.) Good collocation pairs have high PMI because their probability of co-occurrence is only slightly lower than the probability of occurrence of each word individually. Conversely, a pair of words whose individual probabilities of occurrence are considerably higher than their probability of co-occurrence gets a small PMI score.
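The variant measures defined above, and the count-based estimation of p(x), p(y) and p(x,y) from a corpus, can be sketched in a few lines of Python. The word counts used below are placeholders for illustration only, not the actual counts from the Wikipedia dump referred to above.

```python
import math

def pmi(p_xy, p_x, p_y, base=2):
    return math.log(p_xy / (p_x * p_y), base)

def ppmi(p_xy, p_x, p_y):
    # Positive PMI: negative values (and the -inf case where p(x,y) = 0) are set to 0.
    return 0.0 if p_xy == 0 else max(pmi(p_xy, p_x, p_y), 0.0)

def npmi(p_xy, p_x, p_y):
    # Normalized PMI: PMI divided by the joint self-information -log2 p(x,y);
    # -1 (in the limit) for never together, 0 for independence, +1 for always together.
    return pmi(p_xy, p_x, p_y) / (-math.log2(p_xy))

def pmi_k(p_xy, p_x, p_y, k=3):
    # PMI^k: k-1 extra factors of p(x,y) in the numerator boost frequent pairs.
    return math.log2(p_xy ** k / (p_x * p_y))

# Estimating the probabilities from corpus counts (illustrative numbers only):
N = 50_000_952                         # total number of word tokens considered
c_x, c_y, c_xy = 2500, 1800, 1200      # count(x), count(y), co-occurrence count
p_x, p_y, p_xy = c_x / N, c_y / N, c_xy / N

print(pmi(p_xy, p_x, p_y, base=math.e))    # natural-log PMI, as in the table discussed above
print(ppmi(p_xy, p_x, p_y), npmi(p_xy, p_x, p_y), pmi_k(p_xy, p_x, p_y))
```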
https://en.wikipedia.org/wiki/Pointwise_mutual_information
Inquantum information theory,quantum mutual information, orvon Neumann mutual information, afterJohn von Neumann, is a measure of correlation between subsystems of quantum state. It is the quantum mechanical analog ofShannonmutual information. For simplicity, it will be assumed that all objects in the article are finite-dimensional. The definition of quantum mutual entropy is motivated by the classical case. For a probability distribution of two variablesp(x,y), the two marginal distributions are The classical mutual informationI(X:Y) is defined by whereS(q) denotes theShannon entropyof the probability distributionq. One can calculate directly So the mutual information is Where the logarithm is taken in basis 2 to obtain the mutual information inbits. But this is precisely therelative entropybetweenp(x,y) andp(x)p(y). In other words, if we assume the two variablesxandyto be uncorrelated, mutual information is thediscrepancy in uncertaintyresulting from this (possibly erroneous) assumption. It follows from the property of relative entropy thatI(X:Y) ≥ 0 and equality holds if and only ifp(x,y) =p(x)p(y). The quantum mechanical counterpart of classical probability distributions are modeled withdensity matrices. Consider a quantum system that can be divided into two parts, A and B, such that independent measurements can be made on either part. The state space of the entire quantum system is then thetensor product of the spaces for the two parts. LetρABbe a density matrix acting on states inHAB. Thevon Neumann entropyof a density matrix S(ρ), is the quantum mechanical analogy of the Shannon entropy. For a probability distributionp(x,y), the marginal distributions are obtained by integrating away the variablesxory. The corresponding operation for density matrices is thepartial trace. So one can assign toρa state on the subsystemAby where TrBis partial trace with respect to systemB. This is thereduced stateofρABon systemA. Thereduced von Neumann entropyofρABwith respect to systemAis S(ρB) is defined in the same way. It can now be seen that the definition of quantum mutual information, corresponding to the classical definition, should be as follows. Quantum mutual information can be interpreted the same way as in the classical case: it can be shown that whereS(⋅‖⋅){\displaystyle S(\cdot \|\cdot )}denotesquantum relative entropy. Note that there is an alternative generalization of mutual information to the quantum case. The difference between the two for a given state is calledquantum discord, a measure for the quantum correlations of the state in question. When the stateρAB{\displaystyle \rho ^{AB}}is pure (and thusS(ρAB)=0{\displaystyle S(\rho ^{AB})=0}), the mutual information is twice theentanglement entropyof the state: A positive quantum mutual information is not necessarily indicative of entanglement, however. A classical mixture ofseparable stateswill always have zero entanglement, but can have nonzero QMI, such as In this case, the state is merely aclassically correlatedstate.
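A minimal numerical sketch of this definition, using NumPy: the quantum mutual information is computed as S(ρA) + S(ρB) − S(ρAB) via the partial trace and the von Neumann entropy. The specific two-qubit states used below (a classical 50/50 mixture of |00⟩⟨00| and |11⟩⟨11|, and a Bell state) are standard examples chosen here for illustration.

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr[rho log2 rho], computed from the eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]            # 0 * log 0 is taken to be 0
    return float(-np.sum(evals * np.log2(evals)))

def partial_trace(rho, keep, dims=(2, 2)):
    """Reduced state of a bipartite density matrix; keep = 0 (system A) or 1 (system B)."""
    dA, dB = dims
    rho = rho.reshape(dA, dB, dA, dB)
    if keep == 0:                            # trace out B
        return np.einsum('ijkj->ik', rho)
    return np.einsum('ijil->jl', rho)        # trace out A

def quantum_mutual_information(rho_AB, dims=(2, 2)):
    S_A = von_neumann_entropy(partial_trace(rho_AB, 0, dims))
    S_B = von_neumann_entropy(partial_trace(rho_AB, 1, dims))
    return S_A + S_B - von_neumann_entropy(rho_AB)

# A classically correlated (separable) two-qubit state: 1/2 (|00><00| + |11><11|).
# It has zero entanglement but nonzero quantum mutual information.
rho = np.zeros((4, 4))
rho[0, 0] = rho[3, 3] = 0.5
print(quantum_mutual_information(rho))      # 1.0 bit

# A pure Bell state: QMI equals twice the entanglement entropy.
bell = np.zeros(4)
bell[0] = bell[3] = 1 / np.sqrt(2)
print(quantum_mutual_information(np.outer(bell, bell)))   # 2.0 bits
```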
https://en.wikipedia.org/wiki/Quantum_mutual_information
In information theory, specific-information is the generic name given to the family of state-dependent measures that in expectation converge to the mutual information. There are currently three known varieties of specific information, usually denoted I_V, I_S, and I_ssi. The specific-information between a random variable X and a state Y = y is written as I(X; Y = y).
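The exact definitions of the three varieties are not given above; as an illustration, the sketch below uses one commonly considered state-dependent form, the reduction in entropy of X produced by observing the particular state Y = y (this specific form is an assumption here, not a definition taken from the text), and checks the defining property that its expectation over the states y recovers the mutual information.

```python
import math

# Joint distribution p(x, y) of two binary variables (illustrative numbers).
p = {(0, 0): 0.1, (0, 1): 0.7, (1, 0): 0.15, (1, 1): 0.05}
p_x = {x: sum(v for (xx, _), v in p.items() if xx == x) for x in (0, 1)}
p_y = {y: sum(v for (_, yy), v in p.items() if yy == y) for y in (0, 1)}

def H(dist):
    """Shannon entropy in bits."""
    return -sum(v * math.log2(v) for v in dist.values() if v > 0)

def specific_information(y):
    # Assumed state-dependent form: entropy of X minus entropy of X given the state Y = y.
    posterior = {x: p[(x, y)] / p_y[y] for x in (0, 1)}
    return H(p_x) - H(posterior)

# In expectation over the states y, this converges to the mutual information I(X;Y).
expected = sum(p_y[y] * specific_information(y) for y in (0, 1))
mi = sum(v * math.log2(v / (p_x[x] * p_y[y])) for (x, y), v in p.items())
print(expected, mi)   # the two values agree
```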
https://en.wikipedia.org/wiki/Specific-information
Ininformation theory, thelimiting density of discrete pointsis an adjustment to the formula ofClaude Shannonfordifferential entropy. It was formulated byEdwin Thompson Jaynesto address defects in the initial definition of differential entropy. Shannon originally wrote down the following formula for theentropyof a continuous distribution, known asdifferential entropy: Unlike Shannon's formula for the discrete entropy, however, this is not the result of any derivation (Shannon simply replaced the summation symbol in the discrete version with an integral), and it lacks many of the properties that make the discrete entropy a useful measure of uncertainty. In particular, it is not invariant under achange of variablesand can become negative. In addition, it is not even dimensionally correct. Sinceh(X){\displaystyle h(X)}would be dimensionless,p(x){\displaystyle p(x)}must have units of1dx{\displaystyle {\frac {1}{dx}}}, which means that the argument to the logarithm is not dimensionless as required. Jaynes argued that the formula for the continuous entropy should be derived by taking the limit of increasingly dense discrete distributions.[1][2]Suppose that we have a set ofN{\displaystyle N}discrete points{xi}{\displaystyle \{x_{i}\}}, such that in the limitN→∞{\displaystyle N\to \infty }their density approaches a functionm(x){\displaystyle m(x)}called the "invariant measure": Jaynes derived from this the following formula for the continuous entropy, which he argued should be taken as the correct formula: Typically, when this is written, the termlog⁡(N){\displaystyle \log(N)}is omitted, as that would typically not be finite. So the actual common definition is Where it is unclear whether or not thelog⁡(N){\displaystyle \log(N)}term should be omitted, one could write Notice that in Jaynes' formula,m(x){\displaystyle m(x)}is a probability density. For any finiteN{\displaystyle N},m(x){\displaystyle m(x)}represents a uniform density over the quantized continuous space used in the Riemann sum.[further explanation needed]In the limit,m(x){\displaystyle m(x)}is the continuous limiting density of points in the quantization used to represent the continuous variablex{\displaystyle x}. Suppose one had a number format that took onN{\displaystyle N}possible values, distributed as perm(x){\displaystyle m(x)}. ThenHN(X){\displaystyle H_{N}(X)}(ifN{\displaystyle N}is large enough that the continuous approximation is valid) is the discrete entropy of the variablex{\displaystyle x}in this encoding. This is equal to the average number of bits required to transmit this information, and is no more thanlog⁡(N){\displaystyle \log(N)}. Therefore,H(X){\displaystyle H(X)}may be thought of as the amount of information gained by knowing that the variablex{\displaystyle x}follows the distributionp(x){\displaystyle p(x)}, and is not uniformly distributed over the possible quantized values, as would be the case if it followedm(x){\displaystyle m(x)}.H(X){\displaystyle H(X)}is actually the (negative)Kullback–Leibler divergencefromm(x){\displaystyle m(x)}top(x){\displaystyle p(x)}, which is thought of as the information gained by learning that a variable previously thought to be distributed asm(x){\displaystyle m(x)}is actually distributed asp(x){\displaystyle p(x)}. Jaynes' continuous entropy formula has the property of being invariant under a change of variables, provided thatm(x){\displaystyle m(x)}andp(x){\displaystyle p(x)}are transformed in the same way. (This motivates the name "invariant measure" form.) 
This solves many of the difficulties that come from applying Shannon's continuous entropy formula. Jaynes himself dropped the log(N) term as it was not relevant to his work (maximum entropy distributions), and it is somewhat awkward to have an infinite term in the calculation. Unfortunately, this cannot be helped if the quantization is made arbitrarily fine, as would be the case in the continuous limit. Note that H(X) as defined here (without the log(N) term) is always non-positive, because a KL divergence is always non-negative. If m(x) is constant over some interval of size r, and p(x) is essentially zero outside that interval, then the limiting density of discrete points (LDDP) is closely related to the differential entropy h(X).
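Under those two assumptions the relation follows directly from the fact, noted above, that H(X) is the negative Kullback–Leibler divergence from m(x) to p(x); a short derivation sketch:

```latex
% With m(x) = 1/r on the interval and p(x) negligible outside it:
H(X) \;=\; -\int p(x)\,\log\!\frac{p(x)}{m(x)}\,dx
     \;=\; -\int p(x)\log p(x)\,dx \;-\; \log r
     \;=\; h(X) \;-\; \log(r).
```

In this case the LDDP and the differential entropy therefore differ only by the constant log(r).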
https://en.wikipedia.org/wiki/Limiting_density_of_discrete_points
Ininformation theory, theinformation content,self-information,surprisal, orShannon informationis a basic quantity derived from theprobabilityof a particulareventoccurring from arandom variable. It can be thought of as an alternative way of expressing probability, much likeoddsorlog-odds, but which has particular mathematical advantages in the setting of information theory. The Shannon information can be interpreted as quantifying the level of "surprise" of a particular outcome. As it is such a basic quantity, it also appears in several other settings, such as the length of a message needed to transmit the event given an optimalsource codingof the random variable. The Shannon information is closely related toentropy, which is the expected value of the self-information of a random variable, quantifying how surprising the random variable is "on average". This is the average amount of self-information an observer would expect to gain about a random variable when measuring it.[1] The information content can be expressed in variousunits of information, of which the most common is the "bit" (more formally called theshannon), as explained below. The term 'perplexity' has been used in language modelling to quantify the uncertainty inherent in a set of prospective events.[citation needed] Claude Shannon's definition of self-information was chosen to meet several axioms: The detailed derivation is below, but it can be shown that there is a unique function of probability that meets these three axioms, up to a multiplicative scaling factor. Broadly, given a real numberb>1{\displaystyle b>1}and aneventx{\displaystyle x}withprobabilityP{\displaystyle P}, the information content is defined as follows:I(x):=−logb⁡[Pr(x)]=−logb⁡(P).{\displaystyle \mathrm {I} (x):=-\log _{b}{\left[\Pr {\left(x\right)}\right]}=-\log _{b}{\left(P\right)}.} The basebcorresponds to the scaling factor above. Different choices ofbcorrespond to different units of information: whenb= 2, the unit is theshannon(symbol Sh), often called a 'bit'; whenb=e, the unit is thenatural unit of information(symbol nat); and whenb= 10, the unit is thehartley(symbol Hart). Formally, given a discrete random variableX{\displaystyle X}withprobability mass functionpX(x){\displaystyle p_{X}{\left(x\right)}}, the self-information of measuringX{\displaystyle X}asoutcomex{\displaystyle x}is defined as[2]IX⁡(x):=−log⁡[pX(x)]=log⁡(1pX(x)).{\displaystyle \operatorname {I} _{X}(x):=-\log {\left[p_{X}{\left(x\right)}\right]}=\log {\left({\frac {1}{p_{X}{\left(x\right)}}}\right)}.} The use of the notationIX(x){\displaystyle I_{X}(x)}for self-information above is not universal. Since the notationI(X;Y){\displaystyle I(X;Y)}is also often used for the related quantity ofmutual information, many authors use a lowercasehX(x){\displaystyle h_{X}(x)}for self-entropy instead, mirroring the use of the capitalH(X){\displaystyle H(X)}for the entropy. For a givenprobability space, the measurement of rarereventsare intuitively more "surprising", and yield more information content, than more common values. Thus, self-information is astrictly decreasing monotonic functionof the probability, or sometimes called an "antitonic" function. While standard probabilities are represented by real numbers in the interval[0,1]{\displaystyle [0,1]}, self-informations are represented byextended real numbersin the interval[0,∞]{\displaystyle [0,\infty ]}. 
In particular, we have the following, for any choice of logarithmic base: From this, we can get a few general properties: The Shannon information is closely related to thelog-odds. In particular, given some eventx{\displaystyle x}, suppose thatp(x){\displaystyle p(x)}is the probability ofx{\displaystyle x}occurring, and thatp(¬x)=1−p(x){\displaystyle p(\lnot x)=1-p(x)}is the probability ofx{\displaystyle x}not occurring. Then we have the following definition of the log-odds:log-odds(x)=log⁡(p(x)p(¬x)){\displaystyle {\text{log-odds}}(x)=\log \left({\frac {p(x)}{p(\lnot x)}}\right)} This can be expressed as a difference of two Shannon informations:log-odds(x)=I(¬x)−I(x){\displaystyle {\text{log-odds}}(x)=\mathrm {I} (\lnot x)-\mathrm {I} (x)} In other words, the log-odds can be interpreted as the level of surprise when the eventdoesn'thappen, minus the level of surprise when the eventdoeshappen. The information content of twoindependent eventsis the sum of each event's information content. This property is known asadditivityin mathematics, andsigma additivityin particular inmeasureand probability theory. Consider twoindependent random variablesX,Y{\textstyle X,\,Y}withprobability mass functionspX(x){\displaystyle p_{X}(x)}andpY(y){\displaystyle p_{Y}(y)}respectively. Thejoint probability mass functionis pX,Y(x,y)=Pr(X=x,Y=y)=pX(x)pY(y){\displaystyle p_{X,Y}\!\left(x,y\right)=\Pr(X=x,\,Y=y)=p_{X}\!(x)\,p_{Y}\!(y)} becauseX{\textstyle X}andY{\textstyle Y}areindependent. The information content of theoutcome(X,Y)=(x,y){\displaystyle (X,Y)=(x,y)}isIX,Y⁡(x,y)=−log2⁡[pX,Y(x,y)]=−log2⁡[pX(x)pY(y)]=−log2⁡[pX(x)]−log2⁡[pY(y)]=IX⁡(x)+IY⁡(y){\displaystyle {\begin{aligned}\operatorname {I} _{X,Y}(x,y)&=-\log _{2}\left[p_{X,Y}(x,y)\right]=-\log _{2}\left[p_{X}\!(x)p_{Y}\!(y)\right]\\[5pt]&=-\log _{2}\left[p_{X}{(x)}\right]-\log _{2}\left[p_{Y}{(y)}\right]\\[5pt]&=\operatorname {I} _{X}(x)+\operatorname {I} _{Y}(y)\end{aligned}}}See§ Two independent, identically distributed dicebelow for an example. The corresponding property forlikelihoodsis that thelog-likelihoodof independent events is the sum of the log-likelihoods of each event. Interpreting log-likelihood as "support" or negative surprisal (the degree to which an event supports a given model: a model is supported by an event to the extent that the event is unsurprising, given the model), this states that independent events add support: the information that the two events together provide for statistical inference is the sum of their independent information. TheShannon entropyof the random variableX{\displaystyle X}above isdefined asH(X)=∑x−pX(x)log⁡pX(x)=∑xpX(x)IX⁡(x)=defE⁡[IX⁡(X)],{\displaystyle {\begin{alignedat}{2}\mathrm {H} (X)&=\sum _{x}{-p_{X}{\left(x\right)}\log {p_{X}{\left(x\right)}}}\\&=\sum _{x}{p_{X}{\left(x\right)}\operatorname {I} _{X}(x)}\\&{\overset {\underset {\mathrm {def} }{}}{=}}\ \operatorname {E} {\left[\operatorname {I} _{X}(X)\right]},\end{alignedat}}}by definition equal to theexpectedinformation content of measurement ofX{\displaystyle X}.[3]: 11[4]: 19–20The expectation is taken over thediscrete valuesover itssupport. Sometimes, the entropy itself is called the "self-information" of the random variable, possibly because the entropy satisfiesH(X)=I⁡(X;X){\displaystyle \mathrm {H} (X)=\operatorname {I} (X;X)}, whereI⁡(X;X){\displaystyle \operatorname {I} (X;X)}is themutual informationofX{\displaystyle X}with itself.[5] Forcontinuous random variablesthe corresponding concept isdifferential entropy. 
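A short Python sketch of these quantities: the self-information of an outcome in different units, and the entropy as the expected self-information, illustrated with a fair and a biased coin.

```python
import math

def information_content(p, base=2):
    """Self-information I(x) = -log_b P(x); base 2 gives shannons (bits),
    base e gives nats, base 10 gives hartleys."""
    return -math.log(p, base)

# A fair coin: each outcome has probability 1/2, so observing either outcome
# yields exactly 1 shannon of information.
print(information_content(0.5))                  # 1.0 Sh
print(information_content(0.5, base=math.e))     # ~0.693 nat

def entropy(pmf, base=2):
    """Shannon entropy H(X) = E[I_X(X)], the expected self-information."""
    return sum(p * information_content(p, base) for p in pmf if p > 0)

print(entropy([0.5, 0.5]))   # 1.0 Sh for the fair coin
print(entropy([0.9, 0.1]))   # ~0.469 Sh: a biased coin is less surprising on average
```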
This measure has also been calledsurprisal, as it represents the "surprise" of seeing the outcome (a highly improbable outcome is very surprising). This term (as a log-probability measure) was introduced byEdward W. Samsonin his 1951 report "Fundamental natural concepts of information theory".[6][7]An early appearance in the Physics literature is inMyron Tribus' 1961 bookThermostatics and Thermodynamics.[8][9] When the event is a random realization (of a variable) the self-information of the variable is defined as theexpected valueof the self-information of the realization.[citation needed] Consider theBernoulli trialoftossing a fair coinX{\displaystyle X}. Theprobabilitiesof theeventsof the coin landing as headsH{\displaystyle {\text{H}}}and tailsT{\displaystyle {\text{T}}}(seefair coinandobverse and reverse) areone halfeach,pX(H)=pX(T)=12=0.5{\textstyle p_{X}{({\text{H}})}=p_{X}{({\text{T}})}={\tfrac {1}{2}}=0.5}. Uponmeasuringthe variable as heads, the associated information gain isIX⁡(H)=−log2⁡pX(H)=−log212=1,{\displaystyle \operatorname {I} _{X}({\text{H}})=-\log _{2}{p_{X}{({\text{H}})}}=-\log _{2}\!{\tfrac {1}{2}}=1,}so the information gain of a fair coin landing as heads is 1shannon.[2]Likewise, the information gain of measuring tailsT{\displaystyle T}isIX⁡(T)=−log2⁡pX(T)=−log2⁡12=1Sh.{\displaystyle \operatorname {I} _{X}(T)=-\log _{2}{p_{X}{({\text{T}})}}=-\log _{2}{\tfrac {1}{2}}=1{\text{ Sh}}.} Suppose we have afair six-sided die. The value of a die roll is adiscrete uniform random variableX∼DU[1,6]{\displaystyle X\sim \mathrm {DU} [1,6]}withprobability mass functionpX(k)={16,k∈{1,2,3,4,5,6}0,otherwise{\displaystyle p_{X}(k)={\begin{cases}{\frac {1}{6}},&k\in \{1,2,3,4,5,6\}\\0,&{\text{otherwise}}\end{cases}}}The probability of rolling a 4 ispX(4)=16{\textstyle p_{X}(4)={\frac {1}{6}}}, as for any other valid roll. The information content of rolling a 4 is thusIX⁡(4)=−log2⁡pX(4)=−log2⁡16≈2.585Sh{\displaystyle \operatorname {I} _{X}(4)=-\log _{2}{p_{X}{(4)}}=-\log _{2}{\tfrac {1}{6}}\approx 2.585\;{\text{Sh}}}of information. Suppose we have twoindependent, identically distributed random variablesX,Y∼DU[1,6]{\textstyle X,\,Y\sim \mathrm {DU} [1,6]}each corresponding to anindependentfair 6-sided dice roll. 
Thejoint distributionofX{\displaystyle X}andY{\displaystyle Y}ispX,Y(x,y)=Pr(X=x,Y=y)=pX(x)pY(y)={136,x,y∈[1,6]∩N0otherwise.{\displaystyle {\begin{aligned}p_{X,Y}\!\left(x,y\right)&{}=\Pr(X=x,\,Y=y)=p_{X}\!(x)\,p_{Y}\!(y)\\&{}={\begin{cases}\displaystyle {1 \over 36},\ &x,y\in [1,6]\cap \mathbb {N} \\0&{\text{otherwise.}}\end{cases}}\end{aligned}}} The information content of therandom variate(X,Y)=(2,4){\displaystyle (X,Y)=(2,\,4)}isIX,Y⁡(2,4)=−log2[pX,Y(2,4)]=log236=2log26≈5.169925Sh,{\displaystyle {\begin{aligned}\operatorname {I} _{X,Y}{(2,4)}&=-\log _{2}\!{\left[p_{X,Y}{(2,4)}\right]}=\log _{2}\!{36}=2\log _{2}\!{6}\\&\approx 5.169925{\text{ Sh}},\end{aligned}}}and can also be calculated byadditivity of eventsIX,Y⁡(2,4)=−log2[pX,Y(2,4)]=−log2[pX(2)]−log2[pY(4)]=2log26≈5.169925Sh.{\displaystyle {\begin{aligned}\operatorname {I} _{X,Y}{(2,4)}&=-\log _{2}\!{\left[p_{X,Y}{(2,4)}\right]}=-\log _{2}\!{\left[p_{X}(2)\right]}-\log _{2}\!{\left[p_{Y}(4)\right]}\\&=2\log _{2}\!{6}\\&\approx 5.169925{\text{ Sh}}.\end{aligned}}} If we receive information about the value of the dicewithout knowledgeof which die had which value, we can formalize the approach with so-called counting variablesCk:=δk(X)+δk(Y)={0,¬(X=k∨Y=k)1,X=k⊻Y=k2,X=k∧Y=k{\displaystyle C_{k}:=\delta _{k}(X)+\delta _{k}(Y)={\begin{cases}0,&\neg \,(X=k\vee Y=k)\\1,&\quad X=k\,\veebar \,Y=k\\2,&\quad X=k\,\wedge \,Y=k\end{cases}}}fork∈{1,2,3,4,5,6}{\displaystyle k\in \{1,2,3,4,5,6\}}, then∑k=16Ck=2{\textstyle \sum _{k=1}^{6}{C_{k}}=2}and the counts have themultinomial distributionf(c1,…,c6)=Pr(C1=c1and…andC6=c6)={1181c1!⋯ck!,when∑i=16ci=20otherwise,={118,when 2ckare1136,when exactly oneck=20,otherwise.{\displaystyle {\begin{aligned}f(c_{1},\ldots ,c_{6})&{}=\Pr(C_{1}=c_{1}{\text{ and }}\dots {\text{ and }}C_{6}=c_{6})\\&{}={\begin{cases}{\displaystyle {1 \over {18}}{1 \over c_{1}!\cdots c_{k}!}},\ &{\text{when }}\sum _{i=1}^{6}c_{i}=2\\0&{\text{otherwise,}}\end{cases}}\\&{}={\begin{cases}{1 \over 18},\ &{\text{when 2 }}c_{k}{\text{ are }}1\\{1 \over 36},\ &{\text{when exactly one }}c_{k}=2\\0,\ &{\text{otherwise.}}\end{cases}}\end{aligned}}} To verify this, the 6 outcomes(X,Y)∈{(k,k)}k=16={(1,1),(2,2),(3,3),(4,4),(5,5),(6,6)}{\textstyle (X,Y)\in \left\{(k,k)\right\}_{k=1}^{6}=\left\{(1,1),(2,2),(3,3),(4,4),(5,5),(6,6)\right\}}correspond to the eventCk=2{\displaystyle C_{k}=2}and atotal probabilityof⁠1/6⁠. These are the only events that are faithfully preserved with identity of which dice rolled which outcome because the outcomes are the same. Without knowledge to distinguish the dice rolling the other numbers, the other(62)=15{\textstyle {\binom {6}{2}}=15}combinationscorrespond to one die rolling one number and the other die rolling a different number, each having probability⁠1/18⁠. Indeed,6⋅136+15⋅118=1{\textstyle 6\cdot {\tfrac {1}{36}}+15\cdot {\tfrac {1}{18}}=1}, as required. Unsurprisingly, the information content of learning that both dice were rolled as the same particular number is more than the information content of learning that one dice was one number and the other was a different number. Take for examples the eventsAk={(X,Y)=(k,k)}{\displaystyle A_{k}=\{(X,Y)=(k,k)\}}andBj,k={cj=1}∩{ck=1}{\displaystyle B_{j,k}=\{c_{j}=1\}\cap \{c_{k}=1\}}forj≠k,1≤j,k≤6{\displaystyle j\neq k,1\leq j,k\leq 6}. For example,A2={X=2andY=2}{\displaystyle A_{2}=\{X=2{\text{ and }}Y=2\}}andB3,4={(3,4),(4,3)}{\displaystyle B_{3,4}=\{(3,4),(4,3)\}}. 
The information contents areI⁡(A2)=−log2136=5.169925Sh{\displaystyle \operatorname {I} (A_{2})=-\log _{2}\!{\tfrac {1}{36}}=5.169925{\text{ Sh}}}I⁡(B3,4)=−log2118=4.169925Sh{\displaystyle \operatorname {I} \left(B_{3,4}\right)=-\log _{2}\!{\tfrac {1}{18}}=4.169925{\text{ Sh}}} LetSame=⋃i=16Ai{\textstyle {\text{Same}}=\bigcup _{i=1}^{6}{A_{i}}}be the event that both dice rolled the same value andDiff=Same¯{\displaystyle {\text{Diff}}={\overline {\text{Same}}}}be the event that the dice differed. ThenPr(Same)=16{\textstyle \Pr({\text{Same}})={\tfrac {1}{6}}}andPr(Diff)=56{\textstyle \Pr({\text{Diff}})={\tfrac {5}{6}}}. The information contents of the events areI⁡(Same)=−log216=2.5849625Sh{\displaystyle \operatorname {I} ({\text{Same}})=-\log _{2}\!{\tfrac {1}{6}}=2.5849625{\text{ Sh}}}I⁡(Diff)=−log256=0.2630344Sh.{\displaystyle \operatorname {I} ({\text{Diff}})=-\log _{2}\!{\tfrac {5}{6}}=0.2630344{\text{ Sh}}.} The probability mass or density function (collectivelyprobability measure) of thesum of two independent random variablesis the convolution of each probability measure. In the case of independent fair 6-sided dice rolls, the random variableZ=X+Y{\displaystyle Z=X+Y}has probability mass functionpZ(z)=pX(x)∗pY(y)=6−|z−7|36{\textstyle p_{Z}(z)=p_{X}(x)*p_{Y}(y)={6-|z-7| \over 36}}, where∗{\displaystyle *}represents thediscrete convolution. TheoutcomeZ=5{\displaystyle Z=5}has probabilitypZ(5)=436=19{\textstyle p_{Z}(5)={\frac {4}{36}}={1 \over 9}}. Therefore, the information asserted isIZ⁡(5)=−log2⁡19=log2⁡9≈3.169925Sh.{\displaystyle \operatorname {I} _{Z}(5)=-\log _{2}{\tfrac {1}{9}}=\log _{2}{9}\approx 3.169925{\text{ Sh}}.} Generalizing the§ Fair dice rollexample above, consider a generaldiscrete uniform random variable(DURV)X∼DU[a,b];a,b∈Z,b≥a.{\displaystyle X\sim \mathrm {DU} [a,b];\quad a,b\in \mathbb {Z} ,\ b\geq a.}For convenience, defineN:=b−a+1{\textstyle N:=b-a+1}. Theprobability mass functionispX(k)={1N,k∈[a,b]∩Z0,otherwise.{\displaystyle p_{X}(k)={\begin{cases}{\frac {1}{N}},&k\in [a,b]\cap \mathbb {Z} \\0,&{\text{otherwise}}.\end{cases}}}In general, the values of the DURV need not beintegers, or for the purposes of information theory even uniformly spaced; they need only beequiprobable.[2]The information gain of any observationX=k{\displaystyle X=k}isIX⁡(k)=−log2⁡1N=log2⁡NSh.{\displaystyle \operatorname {I} _{X}(k)=-\log _{2}{\frac {1}{N}}=\log _{2}{N}{\text{ Sh}}.} Ifb=a{\displaystyle b=a}above,X{\displaystyle X}degeneratesto aconstant random variablewith probability distribution deterministically given byX=b{\displaystyle X=b}and probability measure theDirac measurepX(k)=δb(k){\textstyle p_{X}(k)=\delta _{b}(k)}. 
The only valueX{\displaystyle X}can take isdeterministicallyb{\displaystyle b}, so the information content of any measurement ofX{\displaystyle X}isIX⁡(b)=−log2⁡1=0.{\displaystyle \operatorname {I} _{X}(b)=-\log _{2}{1}=0.}In general, there is no information gained from measuring a known value.[2] Generalizing all of the above cases, consider acategoricaldiscrete random variablewithsupportS={si}i=1N{\textstyle {\mathcal {S}}={\bigl \{}s_{i}{\bigr \}}_{i=1}^{N}}andprobability mass functiongiven by pX(k)={pi,k=si∈S0,otherwise.{\displaystyle p_{X}(k)={\begin{cases}p_{i},&k=s_{i}\in {\mathcal {S}}\\0,&{\text{otherwise}}.\end{cases}}} For the purposes of information theory, the valuess∈S{\displaystyle s\in {\mathcal {S}}}do not have to benumbers; they can be anymutually exclusiveeventson ameasure spaceoffinite measurethat has beennormalizedto aprobability measurep{\displaystyle p}.Without loss of generality, we can assume the categorical distribution is supported on the set[N]={1,2,…,N}{\textstyle [N]=\left\{1,2,\dots ,N\right\}}; the mathematical structure isisomorphicin terms ofprobability theoryand thereforeinformation theoryas well. The information of the outcomeX=x{\displaystyle X=x}is given IX⁡(x)=−log2⁡pX(x).{\displaystyle \operatorname {I} _{X}(x)=-\log _{2}{p_{X}(x)}.} From these examples, it is possible to calculate the information of any set ofindependentDRVswith knowndistributionsbyadditivity. By definition, information is transferred from an originating entity possessing the information to a receiving entity only when the receiver had not known the informationa priori. If the receiving entity had previously known the content of a message with certainty before receiving the message, the amount of information of the message received is zero. Only when the advance knowledge of the content of the message by the receiver is less than 100% certain does the message actually convey information. For example, quoting a character (the Hippy Dippy Weatherman) of comedianGeorge Carlin: Weather forecast for tonight: dark.Continued dark overnight, with widely scattered light by morning.[10] Assuming that one does not reside near thepolar regions, the amount of information conveyed in that forecast is zero because it is known, in advance of receiving the forecast, that darkness always comes with the night. Accordingly, the amount of self-information contained in a message conveying content informing an occurrence ofevent,ωn{\displaystyle \omega _{n}}, depends only on the probability of that event. I⁡(ωn)=f(P⁡(ωn)){\displaystyle \operatorname {I} (\omega _{n})=f(\operatorname {P} (\omega _{n}))}for some functionf(⋅){\displaystyle f(\cdot )}to be determined below. IfP⁡(ωn)=1{\displaystyle \operatorname {P} (\omega _{n})=1}, thenI⁡(ωn)=0{\displaystyle \operatorname {I} (\omega _{n})=0}. IfP⁡(ωn)<1{\displaystyle \operatorname {P} (\omega _{n})<1}, thenI⁡(ωn)>0{\displaystyle \operatorname {I} (\omega _{n})>0}. Further, by definition, themeasureof self-information is nonnegative and additive. If a message informing of eventC{\displaystyle C}is theintersectionof twoindependenteventsA{\displaystyle A}andB{\displaystyle B}, then the information of eventC{\displaystyle C}occurring is that of the compound message of both independent eventsA{\displaystyle A}andB{\displaystyle B}occurring. 
The quantity of information of compound messageC{\displaystyle C}would be expected to equal thesumof the amounts of information of the individual component messagesA{\displaystyle A}andB{\displaystyle B}respectively:I⁡(C)=I⁡(A∩B)=I⁡(A)+I⁡(B).{\displaystyle \operatorname {I} (C)=\operatorname {I} (A\cap B)=\operatorname {I} (A)+\operatorname {I} (B).} Because of the independence of eventsA{\displaystyle A}andB{\displaystyle B}, the probability of eventC{\displaystyle C}isP⁡(C)=P⁡(A∩B)=P⁡(A)⋅P⁡(B).{\displaystyle \operatorname {P} (C)=\operatorname {P} (A\cap B)=\operatorname {P} (A)\cdot \operatorname {P} (B).} However, applying functionf(⋅){\displaystyle f(\cdot )}results inI⁡(C)=I⁡(A)+I⁡(B)f(P⁡(C))=f(P⁡(A))+f(P⁡(B))=f(P⁡(A)⋅P⁡(B)){\displaystyle {\begin{aligned}\operatorname {I} (C)&=\operatorname {I} (A)+\operatorname {I} (B)\\f(\operatorname {P} (C))&=f(\operatorname {P} (A))+f(\operatorname {P} (B))\\&=f{\big (}\operatorname {P} (A)\cdot \operatorname {P} (B){\big )}\\\end{aligned}}} Thanks to work onCauchy's functional equation, the only monotone functionsf(⋅){\displaystyle f(\cdot )}having the property such thatf(x⋅y)=f(x)+f(y){\displaystyle f(x\cdot y)=f(x)+f(y)}are thelogarithmfunctionslogb⁡(x){\displaystyle \log _{b}(x)}. The only operational difference between logarithms of different bases is that of different scaling constants, so we may assume f(x)=Klog⁡(x){\displaystyle f(x)=K\log(x)} wherelog{\displaystyle \log }is thenatural logarithm. Since the probabilities of events are always between 0 and 1 and the information associated with these events must be nonnegative, that requires thatK<0{\displaystyle K<0}. Taking into account these properties, the self-informationI⁡(ωn){\displaystyle \operatorname {I} (\omega _{n})}associated with outcomeωn{\displaystyle \omega _{n}}with probabilityP⁡(ωn){\displaystyle \operatorname {P} (\omega _{n})}is defined as:I⁡(ωn)=−log⁡(P⁡(ωn))=log⁡(1P⁡(ωn)){\displaystyle \operatorname {I} (\omega _{n})=-\log(\operatorname {P} (\omega _{n}))=\log \left({\frac {1}{\operatorname {P} (\omega _{n})}}\right)} The smaller the probability of eventωn{\displaystyle \omega _{n}}, the larger the quantity of self-information associated with the message that the event indeed occurred. If the above logarithm is base 2, the unit ofI(ωn){\displaystyle I(\omega _{n})}isshannon. This is the most common practice. When using thenatural logarithmof basee{\displaystyle e}, the unit will be thenat. For the base 10 logarithm, the unit of information is thehartley. As a quick illustration, the information content associated with an outcome of 4 heads (or any specific outcome) in 4 consecutive tosses of a coin would be 4 shannons (probability 1/16), and the information content associated with getting a result other than the one specified would be ~0.09 shannons (probability 15/16). See above for detailed examples.
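The worked coin and dice examples above can be reproduced with a few lines of Python (all values in shannons):

```python
import math
from fractions import Fraction
from itertools import product

def info(p):
    """Information content -log2 P in shannons."""
    return -math.log2(float(p))

# Two independent fair dice: any particular ordered outcome such as (2, 4).
print(info(Fraction(1, 36)))                        # ~5.169925 Sh
# Learning only that both dice showed the same value, versus different values.
print(info(Fraction(1, 6)), info(Fraction(5, 6)))   # ~2.585 Sh and ~0.263 Sh

# The sum Z = X + Y: its pmf is the convolution of the two uniform pmfs.
p_Z = {z: Fraction(0) for z in range(2, 13)}
for x, y in product(range(1, 7), repeat=2):
    p_Z[x + y] += Fraction(1, 36)
print(p_Z[5], info(p_Z[5]))                         # 1/9, ~3.169925 Sh

# Four consecutive heads from a fair coin, and its complement.
print(info(Fraction(1, 16)), info(Fraction(15, 16)))   # 4 Sh and ~0.093 Sh
```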
https://en.wikipedia.org/wiki/Self-information
In various science/engineering applications, such asindependent component analysis,[1]image analysis,[2]genetic analysis,[3]speech recognition,[4]manifold learning,[5]and time delay estimation[6]it is useful to estimate thedifferential entropyof a system or process, given some observations. The simplest and most common approach useshistogram-based estimation, but other approaches have been developed and used, each with its own benefits and drawbacks.[7]The main factor in choosing a method is often a trade-off between the bias and the variance of the estimate,[8]although the nature of the (suspected) distribution of the data may also be a factor,[7]as well as the sample size and the size of the alphabet of the probability distribution.[9] The histogram approach uses the idea that the differential entropy of a probability distributionf(x){\displaystyle f(x)}for a continuous random variablex{\displaystyle x}, can be approximated by first approximatingf(x){\displaystyle f(x)}with ahistogramof the observations, and then finding thediscrete entropyof a quantization ofx{\displaystyle x} with bin probabilities given by that histogram. The histogram is itself amaximum-likelihood (ML) estimateof the discretized frequency distribution[citation needed]), wherew{\displaystyle w}is the width of thei{\displaystyle i}th bin. Histograms can be quick to calculate, and simple, so this approach has some attraction. However, the estimate produced isbiased, and although corrections can be made to the estimate, they may not always be satisfactory.[10] A method better suited for multidimensionalprobability density functions(pdf) is to first make apdf estimatewith some method, and then, from the pdf estimate, compute the entropy. A useful pdf estimate method is e.g. Gaussianmixture modeling(GMM), where theexpectation maximization(EM) algorithm is used to find an ML estimate of aweighted sumof Gaussian pdf's approximating the data pdf. If the data is one-dimensional, we can imagine taking all the observations and putting them in order of their value. The spacing between one value and the next then gives us a rough idea of (thereciprocalof) the probability density in that region: the closer together the values are, the higher the probability density. This is a very rough estimate with highvariance, but can be improved, for example by thinking about the space between a given value and the onemaway from it, wheremis some fixed number.[7] The probability density estimated in this way can then be used to calculate the entropy estimate, in a similar way to that given above for the histogram, but with some slight tweaks. One of the main drawbacks with this approach is going beyond one dimension: the idea of lining the data points up in order falls apart in more than one dimension. However, using analogous methods, some multidimensional entropy estimators have been developed.[11][12] For each point in our dataset, we can find the distance to itsnearest neighbour. We can in fact estimate the entropy from the distribution of the nearest-neighbour-distance of our datapoints.[7](In a uniform distribution these distances all tend to be fairly similar, whereas in a strongly nonuniform distribution they may vary a lot more.) When in under-sampled regime, having a prior on the distribution can help the estimation. 
One such Bayesian estimator, proposed in the neuroscience context, is the NSB (Nemenman–Shafee–Bialek) estimator.[13][14] The NSB estimator uses a mixture of Dirichlet priors, chosen such that the induced prior over the entropy is approximately uniform. Another approach to entropy evaluation compares the expected entropy of a random sequence with the calculated entropy of the observed sample. The method gives very accurate results, but it is limited to random sequences modelled as first-order Markov chains with small values of bias and correlation. It is the first known method that accounts for the size of the sample sequence and its impact on the accuracy of the entropy estimate.[15][16] A deep neural network (DNN) can also be used to estimate the joint entropy; this approach is called the Neural Joint Entropy Estimator (NJEE).[17] In practice, the DNN is trained as a classifier that maps an input vector or matrix X to an output probability distribution over the possible classes of a random variable Y, given input X. For example, in an image classification task, the NJEE maps a vector of pixel values to probabilities over the possible image classes. The probability distribution of Y is obtained from a softmax layer whose number of nodes equals the alphabet size of Y. NJEE uses continuously differentiable activation functions, so that the conditions for the universal approximation theorem hold. This method has been shown to provide a strongly consistent estimator and to outperform other methods when the alphabet size is large.[17][9]
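A minimal sketch of the histogram approach described above, assuming one-dimensional data: the histogram's bin probabilities give a discrete entropy, and adding the expected log bin width turns this into an estimate of the differential entropy. The bin count of 30 and the Gaussian test data are arbitrary choices for illustration.

```python
import numpy as np

def histogram_entropy(samples, bins=30):
    """Histogram-based estimate of the differential entropy (in nats).

    Approximates f(x) by a histogram, takes the discrete entropy of the bin
    probabilities, and adds the expected log bin width to undo the quantization.
    The estimate is biased, especially for small samples or poorly chosen bins.
    """
    counts, edges = np.histogram(samples, bins=bins)
    widths = np.diff(edges)
    p = counts / counts.sum()
    nz = p > 0
    return float(-np.sum(p[nz] * np.log(p[nz])) + np.sum(p[nz] * np.log(widths[nz])))

rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=100_000)
print(histogram_entropy(x))               # estimate
print(0.5 * np.log(2 * np.pi * np.e))     # true value for a standard normal, ~1.4189
```

The choice of bin width is itself a bias–variance trade-off of the kind mentioned above: very wide bins oversmooth the density, while very narrow bins leave too few samples per bin.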
https://en.wikipedia.org/wiki/Entropy_estimation
TheBayes factoris a ratio of two competingstatistical modelsrepresented by theirevidence, and is used to quantify the support for one model over the other.[1]The models in question can have a common set of parameters, such as anull hypothesisand an alternative, but this is not necessary; for instance, it could also be a non-linear model compared to itslinear approximation. The Bayes factor can be thought of as a Bayesian analog to thelikelihood-ratio test, although it uses the integrated (i.e., marginal) likelihood rather than the maximized likelihood. As such, both quantities only coincide under simple hypotheses (e.g., two specific parameter values).[2]Also, in contrast withnull hypothesis significance testing, Bayes factors support evaluation of evidencein favorof a null hypothesis, rather than only allowing the null to be rejected or not rejected.[3] Although conceptually simple, the computation of the Bayes factor can be challenging depending on the complexity of the model and the hypotheses.[4]Since closed-form expressions of the marginal likelihood are generally not available, numerical approximations based onMCMC sampleshave been suggested.[5]A widely used approach is the method proposed by Chib (1995).[6]Chib and Jeliazkov (2001) later extended this method to handle cases where Metropolis-Hastings samplers are used.[7]For certain special cases, simplified algebraic expressions can be derived; for instance, the Savage–Dickey density ratio in the case of a precise (equality constrained) hypothesis against an unrestricted alternative.[8][9]Another approximation, derived by applyingLaplace's approximationto the integrated likelihoods, is known as theBayesian information criterion(BIC);[10]in large data sets the Bayes factor will approach the BIC as the influence of the priors wanes. In small data sets, priors generally matter and must not beimpropersince the Bayes factor will be undefined if either of the two integrals in its ratio is not finite. The Bayes factor is the ratio of two marginal likelihoods; that is, thelikelihoodsof two statistical models integrated over theprior probabilitiesof their parameters.[11] Theposterior probabilityPr(M|D){\displaystyle \Pr(M|D)}of a modelMgiven dataDis given byBayes' theorem: The key data-dependent termPr(D|M){\displaystyle \Pr(D|M)}represents the probability that some data are produced under the assumption of the modelM; evaluating it correctly is the key to Bayesian model comparison. Given amodel selectionproblem in which one wishes to choose between two models on the basis of observed dataD, the plausibility of the two different modelsM1andM2, parametrised by model parameter vectorsθ1{\displaystyle \theta _{1}}andθ2{\displaystyle \theta _{2}}, is assessed by the Bayes factorKgiven by When the two models have equal prior probability, so thatPr(M1)=Pr(M2){\displaystyle \Pr(M_{1})=\Pr(M_{2})}, the Bayes factor is equal to the ratio of the posterior probabilities ofM1andM2. If instead of the Bayes factor integral, the likelihood corresponding to themaximum likelihood estimateof the parameter for each statistical model is used, then the test becomes a classicallikelihood-ratio test. Unlike a likelihood-ratio test, this Bayesian model comparison does not depend on any single set of parameters, as it integrates over all parameters in each model (with respect to the respective priors). 
An advantage of the use of Bayes factors is that it automatically, and quite naturally, includes a penalty for including too much model structure.[12]It thus guards againstoverfitting. For models where an explicit version of the likelihood is not available or too costly to evaluate numerically,approximate Bayesian computationcan be used for model selection in a Bayesian framework,[13]with the caveat that approximate-Bayesian estimates of Bayes factors are often biased.[14] Other approaches are: A value ofK> 1 means thatM1is more strongly supported by the data under consideration thanM2. Note that classicalhypothesis testinggives one hypothesis (or model) preferred status (the 'null hypothesis'), and only considers evidenceagainstit. The fact that a Bayes factor can produce evidenceforand not just against a null hypothesis is one of the key advantages of this analysis method.[15] Harold Jeffreysgave a scale (Jeffreys' scale) for interpretation ofK{\displaystyle K}:[16] The second column gives the corresponding weights of evidence indecihartleys(also known asdecibans);bitsare added in the third column for clarity. The table continues in the other direction, so that, for example,K≤10−2{\displaystyle K\leq 10^{-2}}is decisive evidence forM2{\displaystyle M_{2}}. An alternative table, widely cited, is provided by Kass and Raftery (1995):[12] According toI. J. Good, thejust-noticeable differenceof humans in their everyday life, when it comes to a changedegree of beliefin a hypothesis, is about a factor of 1.3x, or 1 deciban, or 1/3 of a bit, or from 1:1 to 5:4 in odds ratio.[17] Suppose we have arandom variablethat produces either a success or a failure. We want to compare a modelM1where the probability of success isq=1⁄2, and another modelM2whereqis unknown and we take aprior distributionforqthat isuniformon [0,1]. We take a sample of 200, and find 115 successes and 85 failures. The likelihood can be calculated according to thebinomial distribution: Thus we have forM1 whereas forM2we have The ratio is then 1.2, which is "barely worth mentioning" even if it points very slightly towardsM1. Afrequentisthypothesis testofM1(here considered as anull hypothesis) would have produced a very different result. Such a test says thatM1should be rejected at the 5% significance level, since the probability of getting 115 or more successes from a sample of 200 ifq=1⁄2is 0.02, and as a two-tailed test of getting a figure as extreme as or more extreme than 115 is 0.04. Note that 115 is more than two standard deviations away from 100. Thus, whereas afrequentisthypothesis testwould yieldsignificant resultsat the 5% significance level, the Bayes factor hardly considers this to be an extreme result. Note, however, that a non-uniform prior (for example one that reflects the fact that you expect the number of success and failures to be of the same order of magnitude) could result in a Bayes factor that is more in agreement with the frequentist hypothesis test. A classicallikelihood-ratio testwould have found themaximum likelihoodestimate forq, namelyq^=115200=0.575{\displaystyle {\hat {q}}={\frac {115}{200}}=0.575}, whence (rather than averaging over all possibleq). That gives a likelihood ratio of 0.1 and points towardsM2. M2is a more complex model thanM1because it has a free parameter which allows it to model the data more closely. 
The ability of Bayes factors to take this into account is a reason why Bayesian inference has been put forward as a theoretical justification for and generalisation of Occam's razor, reducing Type I errors.[18] On the other hand, the modern method of relative likelihood takes into account the number of free parameters in the models, unlike the classical likelihood ratio. The relative likelihood method can be applied as follows. Model M1 has 0 parameters, and so its Akaike information criterion (AIC) value is 2·0 − 2·ln(0.005956) ≈ 10.2467. Model M2 has 1 parameter, and so its AIC value is 2·1 − 2·ln(0.056991) ≈ 7.7297. Hence M1 is about exp((7.7297 − 10.2467)/2) ≈ 0.284 times as probable as M2 in terms of minimizing the information loss. Thus M2 is slightly preferred, but M1 cannot be excluded.
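The numbers in this example can be reproduced with a short Python sketch. The marginal likelihood under M2 reduces to 1/(n+1) because the binomial likelihood integrated against a uniform prior is a Beta integral.

```python
import math

n, k = 200, 115
binom_pmf = lambda k, n, q: math.comb(n, k) * q**k * (1 - q)**(n - k)

# M1: q = 1/2 exactly.
lik_M1 = binom_pmf(k, n, 0.5)          # ~0.005956

# M2: q unknown with a uniform prior on [0, 1]; the integrated likelihood is 1/(n + 1).
lik_M2 = 1 / (n + 1)                   # ~0.004975

print(lik_M1 / lik_M2)                 # Bayes factor K ~ 1.2, "barely worth mentioning"

# One-sided frequentist tail probability of 115 or more successes under q = 1/2:
print(sum(binom_pmf(j, n, 0.5) for j in range(k, n + 1)))   # ~0.02

# AIC comparison from the maximized likelihoods (q_hat = 115/200 = 0.575 for M2):
aic_M1 = 2 * 0 - 2 * math.log(lik_M1)                  # ~10.2467
aic_M2 = 2 * 1 - 2 * math.log(binom_pmf(k, n, k / n))  # ~7.7297
print(math.exp((aic_M2 - aic_M1) / 2))                 # ~0.284, M2 slightly preferred
```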
https://en.wikipedia.org/wiki/Bayes_factor
Instatistics, thelikelihood principleis the proposition that, given astatistical model, all the evidence in asamplerelevant to model parameters is contained in thelikelihood function. A likelihood function arises from aprobability density functionconsidered as a function of its distributional parameterization argument. For example, consider a model which gives the probability density functionfX(x∣θ){\displaystyle \;f_{X}(x\mid \theta )\;}of observablerandom variableX{\displaystyle \,X\,}as a function of a parameterθ{\displaystyle \,\theta ~}. Then for a specific valuex{\displaystyle \,x\,}ofX{\displaystyle \,X~}, the functionL(θ∣x)=fX(x∣θ){\displaystyle \,{\mathcal {L}}(\theta \mid x)=f_{X}(x\mid \theta )\;}is a likelihood function ofθ{\displaystyle \,\theta ~}: it gives a measure of how "likely" any particular value ofθ{\displaystyle \,\theta \,}is, if we know thatX{\displaystyle \,X\,}has the valuex{\displaystyle \,x~}. The density function may be a density with respect to counting measure, i.e. aprobability mass function. Two likelihood functions areequivalentif one is a scalar multiple of the other.[a]Thelikelihood principleis this: All information from the data that is relevant to inferences about the value of the model parameters is in the equivalence class to which the likelihood function belongs. Thestrong likelihood principleapplies this same criterion to cases such as sequential experiments where the sample of data that is available results from applying astopping ruleto the observations earlier in the experiment.[1] Suppose Then the observation thatX=3{\displaystyle \ X=3\ }induces the likelihood function while the observation thatY=12{\displaystyle \ Y=12\ }induces the likelihood function The likelihood principle says that, as the data are the same in both cases, the inferences drawn about the value ofθ{\displaystyle \ \theta \ }should also be the same. In addition, all the inferential content in the data about the value ofθ{\displaystyle \ \theta \ }is contained in the two likelihoods, and is the same if they are proportional to one another. This is the case in the above example, reflecting the fact that the difference between observingX=3{\displaystyle \ X=3\ }and observingY=12{\displaystyle \ Y=12\ }lies not in the actual data collected, nor in the conduct of the experimenter, but in the two differentdesigns of the experiment. Specifically, in one case, the decision in advance was to try twelve times, regardless of the outcome; in the other case, the advance decision was to keep trying until three successes were observed.If you support the likelihood principlethen inference aboutθ{\displaystyle \ \theta \ }should be the same for both cases because the two likelihoods are proportional to each other: Except for a constant leading factor of220vs.55, the two likelihood functions are the same – constant multiples of each other. This equivalence is not always the case, however. The use offrequentistmethods involvingpvaluesleads to different inferences for the two cases above,[2]showing that the outcome of frequentist methods depends on the experimental procedure, and thus violates the likelihood principle. A related concept is thelaw of likelihood, the notion that the extent to which the evidence supports one parameter value or hypothesis against another is indicated by the ratio of their likelihoods, theirlikelihood ratio. That is, is the degree to which the observationxsupports parameter value or hypothesisaagainstb. 
If this ratio is 1, the evidence is indifferent; if greater than 1, the evidence supports the valueaagainstb; or if less, then vice versa. InBayesian statistics, this ratio is known as theBayes factor, andBayes' rulecan be seen as the application of the law of likelihood to inference. Infrequentist inference, the likelihood ratio is used in thelikelihood-ratio test, but other non-likelihood tests are used as well. TheNeyman–Pearson lemmastates the likelihood-ratio test is equallystatistically powerfulas the most powerful test for comparing twosimple hypothesesat a givensignificance level, which gives a frequentist justification for the law of likelihood. Combining the likelihood principle with the law of likelihood yields the consequence that the parameter value which maximizes the likelihood function is the value which is most strongly supported by the evidence. This is the basis for the widely used method ofmaximum likelihood. The likelihood principle was first identified by that name in print in 1962 (Barnardet al.,Birnbaum, and Savageet al.), but arguments for the same principle, unnamed, and the use of the principle in applications goes back to the works ofR.A. Fisherin the 1920s. The law of likelihood was identified by that name byI. Hacking(1965). More recently the likelihood principle as a general principle of inference has been championed byA.W.F. Edwards. The likelihood principle has been applied to thephilosophy of scienceby R. Royall.[3] Birnbaum(1962) initially argued that the likelihood principle follows from two more primitive and seemingly reasonable principles, theconditionality principleand thesufficiency principle: However, upon further consideration Birnbaum rejected both his conditionality principle and the likelihood principle.[4]The adequacy of Birnbaum's original argument has also been contested by others (see below for details). Some widely used methods of conventional statistics, for example manysignificance tests, are not consistent with the likelihood principle. Let us briefly consider some of the arguments for and against the likelihood principle. According to Giere (1977),[5]Birnbaum rejected[4]both his own conditionality principle and the likelihood principle because they were both incompatible with what he called the “confidence concept of statistical evidence”, which Birnbaum (1970) describes as taking “from the Neyman-Pearson approach techniques for systematically appraising and bounding the probabilities (under respective hypotheses) of seriously misleading interpretations of data” ([4]p. 1033). The confidence concept incorporates only limited aspects of the likelihood concept and only some applications of the conditionality concept. Birnbaum later notes that it was the unqualified equivalence formulation of his 1962 version of the conditionality principle that led “to the monster of the likelihood axiom” ([6]p. 263). Birnbaum's original argument for the likelihood principle has also been disputed by other statisticians includingAkaike,[7]Evans[8]and philosophers of science, includingDeborah Mayo.[9][10]Dawidpoints out fundamental differences between Mayo's and Birnbaum's definitions of the conditionality principle, arguing Birnbaum's argument cannot be so readily dismissed.[11]A new proof of the likelihood principle has been provided by Gandenberger that addresses some of the counterarguments to the original proof.[12] Unrealized events play a role in some common statistical methods. 
For example, the result of asignificance testdepends on thep-value, the probability of a result as extreme or more extreme than the observation, and that probability may depend on the design of the experiment. To the extent that the likelihood principle is accepted, such methods are therefore denied. Some classical significance tests are not based on the likelihood. The following are a simple and more complicated example of those, using a commonly cited example calledtheoptional stoppingproblem. Suppose I tell you that I tossed a coin 12 times and in the process observed 3 heads. You might make some inference about the probability of heads and whether the coin was fair. Suppose now I tell that I tossed the coinuntilI observed 3 heads, and I tossed it 12 times. Will you now make some different inference? The likelihood function is the same in both cases: It is proportional to So according to thelikelihood principle, in either case the inference should be the same. Suppose a number of scientists are assessing the probability of a certain outcome (which we shall call 'success') in experimental trials. Conventional wisdom suggests that if there is no bias towards success or failure then the success probability would be one half. Adam, a scientist, conducted 12 trials and obtains 3 successes and 9 failures.One of those successes was the 12th and last observation.Then Adam left the lab. Bill, a colleague in the same lab, continued Adam's work and published Adam's results, along with a significance test. He tested thenull hypothesisthatp, the success probability, is equal to a half, versusp< 0.5. If we ignore the information that the third success was the 12th and last observation, the probability of the observed result that out of 12 trials 3 or something fewer (i.e. more extreme) were successes, ifH0is true, is which is⁠299/4096⁠= 7.3%. Thus the null hypothesis is not rejected at the 5% significance level if we ignore the knowledge that the third success was the 12th result. However observe that this first calculation also includes 12 token long sequences that end in tails contrary to the problem statement! If we redo this calculation we realize the likelihood according to the null hypothesis must be the probability of a fair coin landing 2 or fewer heads on 11 trials multiplied with the probability of the fair coin landing a head for the 12th trial: which is⁠67/2048⁠⁠1/2⁠=⁠67/4096⁠= 1.64%. Now the resultisstatistically significant at the5%level. Charlotte, another scientist, reads Bill's paper and writes a letter, saying that it is possible that Adam kept trying until he obtained 3 successes, in which case the probability of needing to conduct 12 or more experiments is given by which is⁠134/4096⁠⁠1/2⁠= 1.64%. Now the resultisstatistically significant at the5%level. Note that there is no contradiction between the latter two correct analyses; both computations are correct, and result in the same p-value. To these scientists, whether a result is significant or not does not depend on the design of the experiment, but does on the likelihood (in the sense of the likelihood function) of the parameter value being⁠1/2⁠. Results of this kind are considered by some as arguments against the likelihood principle. For others it exemplifies the value of the likelihood principle and is an argument against significance tests. Similar themes appear when comparingFisher's exact testwithPearson's chi-squared test. An argument in favor of the likelihood principle is given by Edwards in his bookLikelihood. 
He cites the following story from J.W. Pratt, slightly condensed here. Note that the likelihood function depends only on what actually happened, and not on whatcouldhave happened. This story can be translated to Adam's stopping rule above, as follows: Adam stopped immediately after 3 successes, because his boss Bill had instructed him to do so. After the publication of the statistical analysis by Bill, Adam realizes that he has missed a later instruction from Bill to instead conduct 12 trials, and that Bill's paper is based on this second instruction. Adam is very glad that he got his 3 successes after exactly 12 trials, and explains to his friend Charlotte that by coincidence he executed the second instruction. Later, Adam is astonished to hear about Charlotte's letter, explaining thatnowthe result is significant.
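A short Python sketch of the binomial versus negative-binomial comparison running through the examples above: the two likelihood functions for 3 successes in 12 trials differ only by the constant factor 220 versus 55, while a frequentist tail probability, such as the 299/4096 ≈ 7.3% computed under the fixed-n design, depends on the design of the experiment.

```python
import math
from fractions import Fraction

# Twelve Bernoulli trials, three successes, success probability theta.
# Binomial design (12 trials fixed in advance) vs. negative binomial design
# (keep sampling until the third success, which happened on trial 12):
binom_lik = lambda theta: math.comb(12, 3) * theta**3 * (1 - theta)**9   # 220 theta^3 (1-theta)^9
negbin_lik = lambda theta: math.comb(11, 2) * theta**3 * (1 - theta)**9  #  55 theta^3 (1-theta)^9

# The two likelihood functions are constant multiples of each other, so the
# likelihood principle says they carry the same evidence about theta.
print([round(binom_lik(t) / negbin_lik(t), 6) for t in (0.1, 0.3, 0.5, 0.9)])   # always 4.0

# A frequentist p-value, by contrast, depends on the design. Under the fixed-n
# binomial design, the probability of 3 or fewer successes when theta = 1/2:
p_binom = sum(Fraction(math.comb(12, j), 2**12) for j in range(4))
print(p_binom, float(p_binom))     # 299/4096, ~7.3%
```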
https://en.wikipedia.org/wiki/Likelihood_principle
Instatistics, thelikelihood-ratio testis ahypothesis testthat involves comparing thegoodness of fitof two competingstatistical models, typically one found bymaximizationover the entireparameter spaceand another found after imposing someconstraint, based on the ratio of theirlikelihoods. If the more constrained model (i.e., thenull hypothesis) is supported by theobserved data, the two likelihoods should not differ by more thansampling error.[1]Thus the likelihood-ratio test tests whether this ratio issignificantly differentfrom one, or equivalently whether itsnatural logarithmis significantly different from zero. The likelihood-ratio test, also known asWilks test,[2]is the oldest of the three classical approaches to hypothesis testing, together with theLagrange multiplier testand theWald test.[3]In fact, the latter two can be conceptualized as approximations to the likelihood-ratio test, and are asymptotically equivalent.[4][5][6]In the case of comparing two models each of which has no unknownparameters, use of the likelihood-ratio test can be justified by theNeyman–Pearson lemma. The lemma demonstrates that the test has the highestpoweramong all competitors.[7] Suppose that we have astatistical modelwithparameter spaceΘ{\displaystyle \Theta }. Anull hypothesisis often stated by saying that the parameterθ{\displaystyle \theta }lies in a specified subsetΘ0{\displaystyle \Theta _{0}}ofΘ{\displaystyle \Theta }. Thealternative hypothesisis thus thatθ{\displaystyle \theta }lies in thecomplementofΘ0{\displaystyle \Theta _{0}}, i.e. inΘ∖Θ0{\displaystyle \Theta ~\backslash ~\Theta _{0}}, which is denoted byΘ0c{\displaystyle \Theta _{0}^{\text{c}}}. The likelihood ratio test statistic for the null hypothesisH0:θ∈Θ0{\displaystyle H_{0}\,:\,\theta \in \Theta _{0}}is given by:[8] where the quantity inside the brackets is called the likelihood ratio. Here, thesup{\displaystyle \sup }notation refers to thesupremum. As all likelihoods are positive, and as the constrained maximum cannot exceed the unconstrained maximum, the likelihood ratio isboundedbetween zero and one. Often the likelihood-ratio test statistic is expressed as a difference between thelog-likelihoods where is the logarithm of the maximized likelihood functionL{\displaystyle {\mathcal {L}}}, andℓ(θ0){\displaystyle \ell (\theta _{0})}is the maximal value in the special case that the null hypothesis is true (but not necessarily a value that maximizesL{\displaystyle {\mathcal {L}}}for the sampled data) and denote the respectivearguments of the maximaand the allowed ranges they're embedded in. Multiplying by −2 ensures mathematically that (byWilks' theorem)λLR{\displaystyle \lambda _{\text{LR}}}converges asymptotically to beingχ²-distributedif the null hypothesis happens to be true.[9]Thefinite-sample distributionsof likelihood-ratio statistics are generally unknown.[10] The likelihood-ratio test requires that the models benested– i.e. the more complex model can be transformed into the simpler model by imposing constraints on the former's parameters. Many common test statistics are tests for nested models and can be phrased as log-likelihood ratios or approximations thereof: e.g. theZ-test, theF-test, theG-test, andPearson's chi-squared test; for an illustration with theone-samplet-test, see below. If the models are not nested, then instead of the likelihood-ratio test, there is a generalization of the test that can usually be used: for details, seerelative likelihood. 
A simple-vs.-simple hypothesis test has completely specified models under both the null hypothesis and the alternative hypothesis, which for convenience are written in terms of fixed values of a notional parameterθ{\displaystyle \theta }: In this case, under either hypothesis, the distribution of the data is fully specified: there are no unknown parameters to estimate. For this case, a variant of the likelihood-ratio test is available:[11][12] Some older references may use the reciprocal of the function above as the definition.[13]Thus, the likelihood ratio is small if the alternative model is better than the null model. The likelihood-ratio test provides the decision rule as follows: The valuesc{\displaystyle c}andq{\displaystyle q}are usually chosen to obtain a specifiedsignificance levelα{\displaystyle \alpha }, via the relation TheNeyman–Pearson lemmastates that this likelihood-ratio test is themost powerfulamong all levelα{\displaystyle \alpha }tests for this case.[7][12] The likelihood ratio is a function of the datax{\displaystyle x}; therefore, it is astatistic, although unusual in that the statistic's value depends on a parameter,θ{\displaystyle \theta }. The likelihood-ratio test rejects the null hypothesis if the value of this statistic is too small. How small is too small depends on the significance level of the test, i.e. on what probability ofType I erroris considered tolerable (Type I errors consist of the rejection of a null hypothesis that is true). Thenumeratorcorresponds to the likelihood of an observed outcome under thenull hypothesis. Thedenominatorcorresponds to the maximum likelihood of an observed outcome, varying parameters over the whole parameter space. The numerator of this ratio is less than the denominator; so, the likelihood ratio is between 0 and 1. Low values of the likelihood ratio mean that the observed result was much less likely to occur under the null hypothesis as compared to the alternative. High values of the statistic mean that the observed outcome was nearly as likely to occur under the null hypothesis as the alternative, and so the null hypothesis cannot be rejected. The following example is adapted and abridged fromStuart, Ord & Arnold (1999, §22.2). Suppose that we have a random sample, of sizen, from a population that is normally-distributed. Both the mean,μ, and the standard deviation,σ, of the population are unknown. We want to test whether the mean is equal to a given value,μ0. Thus, our null hypothesis isH0:μ=μ0and our alternative hypothesis isH1:μ≠μ0. The likelihood function is With some calculation (omitted here), it can then be shown that wheretis thet-statisticwithn− 1degrees of freedom. Hence we may use the known exact distribution oftn−1to draw inferences. If the distribution of the likelihood ratio corresponding to a particular null and alternative hypothesis can be explicitly determined then it can directly be used to form decision regions (to sustain or reject the null hypothesis). In most cases, however, the exact distribution of the likelihood ratio corresponding to specific hypotheses is very difficult to determine.[citation needed] AssumingH0is true, there is a fundamental result bySamuel S. 
Wilks: As the sample size n approaches \infty, and if the null hypothesis lies strictly within the interior of the parameter space, the test statistic \lambda_{\text{LR}} defined above will be asymptotically chi-squared distributed (\chi^2) with degrees of freedom equal to the difference in dimensionality of \Theta and \Theta_0.[14] This implies that for a great variety of hypotheses, we can calculate the likelihood ratio \lambda for the data and then compare the observed \lambda_{\text{LR}} to the \chi^2 value corresponding to a desired statistical significance as an approximate statistical test. Other extensions exist.
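A minimal simulation sketch of Wilks' result, assuming i.i.d. exponential data generated under the null rate (all values here are illustrative):

```python
# Repeatedly compute the LR statistic under H0 and compare its distribution
# with the chi-squared distribution with 1 degree of freedom.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)
n, reps, rate0 = 200, 5000, 1.0

def loglik(rate, x):
    # Exponential log-likelihood: sum of log(rate * exp(-rate * x_i)).
    return len(x) * np.log(rate) - rate * x.sum()

stats = np.empty(reps)
for i in range(reps):
    x = rng.exponential(scale=1 / rate0, size=n)
    rate_hat = 1 / x.mean()                               # MLE of the rate
    stats[i] = -2 * (loglik(rate0, x) - loglik(rate_hat, x))

# The empirical 95th percentile should sit close to the chi-squared critical value.
print(np.quantile(stats, 0.95), chi2.ppf(0.95, df=1))     # both near 3.84
```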
https://en.wikipedia.org/wiki/Likelihood-ratio_test
Likelihoodist statisticsorlikelihoodismis an approach tostatisticsthat exclusively or primarily uses thelikelihood function. Likelihoodist statistics is a more minor school than the main approaches ofBayesian statisticsandfrequentist statistics, but has some adherents and applications. The central idea of likelihoodism is thelikelihood principle: data are interpreted asevidence, and the strength of the evidence is measured by the likelihood function. Beyond this, there are significant differences within likelihood approaches: "orthodox" likelihoodists consider dataonlyas evidence, and do not use it as the basis ofstatistical inference, while others make inferences based on likelihood, but without usingBayesian inferenceorfrequentist inference. Likelihoodism is thuscriticizedfor either not providing a basis for belief or action (if it fails to make inferences), or not satisfying the requirements of these other schools. The likelihood function is also used in Bayesian statistics and frequentist statistics, but they differ in how it is used. Some likelihoodists consider their use of likelihood as an alternative to other approaches, while others consider it complementary and compatible with other approaches; see§ Relation with other theories. While likelihoodism is a distinct approach to statistical inference, it can be related to or contrasted with other theories and methodologies in statistics. Here are some notable connections: While likelihood-based statistics have been widely used and have many advantages, they are not without criticism. Here are some common criticisms of likelihoodist statistics: Likelihoodism as a distinct school dates toEdwards (1972), which gives a systematic treatment of statistics, based on likelihood. This built on significant earlier work; seeDempster (1972)for a contemporary review. While comparing ratios of probabilities dates to early statistics and probability, notablyBayesian inferenceas developed byPierre-Simon Laplacefrom the late 1700s,likelihoodas a distinct concept is due to Ronald Fisher inFisher (1921). Likelihood played an important role in Fisher's statistics, but he developed and used many non-likelihood frequentist techniques as well. His late writings, notablyFisher (1955), emphasize likelihood more strongly, and can be considered a precursor to a systematic theory of likelihoodism. Thelikelihood principlewas proposed in 1962 by several authors, notablyBarnard, Jenkins & Winsten (1962),Birnbaum (1962), andSavage (1962), and followed by thelaw of likelihoodinHacking (1965); these laid the foundation for likelihoodism. SeeLikelihood principle § Historyfor early history. While Edwards's version of likelihoodism considered likelihood as only evidence, which was followed byRoyall (1997), others proposed inference based only on likelihood, notably as extensions of maximum likelihood estimation. Notable isJohn Nelder, who declared inNelder (1999, p. 264): At least once a year I hear someone at a meeting say that there are two modes of inference: frequentist and Bayesian. That this sort of nonsense should be so regularly propagated shows how much we have to do. To begin with there is a flourishing school of likelihood inference, to which I belong. Textbooks that take a likelihoodist approach include the following:Kalbfleisch (1985),Azzalini (1996),Pawitan (2001),Rohde (2014), andHeld & Sabanés Bové (2014). A collection of relevant papers is given byTaper & Lele (2004).
https://en.wikipedia.org/wiki/Likelihoodist_statistics
Theprinciple of maximum entropystates that theprobability distributionwhich best represents the current state of knowledge about a system is the one with largestentropy, in the context of precisely stated prior data (such as apropositionthat expressestestable information). Another way of stating this: Take precisely stated prior data or testable information about a probability distribution function. Consider the set of all trial probability distributions that would encode the prior data. According to this principle, the distribution with maximalinformation entropyis the best choice. The principle was first expounded byE. T. Jaynesin two papers in 1957,[1][2]where he emphasized a natural correspondence betweenstatistical mechanicsandinformation theory. In particular, Jaynes argued that the Gibbsian method of statistical mechanics is sound by also arguing that theentropyof statistical mechanics and theinformation entropyofinformation theoryare the same concept. Consequently,statistical mechanicsshould be considered a particular application of a general tool of logicalinferenceand information theory. In most practical cases, the stated prior data or testable information is given by a set ofconserved quantities(average values of some moment functions), associated with theprobability distributionin question. This is the way the maximum entropy principle is most often used instatistical thermodynamics. Another possibility is to prescribe somesymmetriesof the probability distribution. The equivalence betweenconserved quantitiesand correspondingsymmetry groupsimplies a similar equivalence for these two ways of specifying the testable information in the maximum entropy method. The maximum entropy principle is also needed to guarantee the uniqueness and consistency of probability assignments obtained by different methods,statistical mechanicsandlogical inferencein particular. The maximum entropy principle makes explicit our freedom in using different forms ofprior data. As a special case, a uniformprior probabilitydensity (Laplace'sprinciple of indifference, sometimes called the principle of insufficient reason), may be adopted. Thus, the maximum entropy principle is not merely an alternative way to view the usual methods of inference of classical statistics, but represents a significant conceptual generalization of those methods. However these statements do not imply that thermodynamical systems need not be shown to beergodicto justify treatment as astatistical ensemble. In ordinary language, the principle of maximum entropy can be said to express a claim of epistemic modesty, or of maximum ignorance. The selected distribution is the one that makes the least claim to being informed beyond the stated prior data, that is to say the one that admits the most ignorance beyond the stated prior data. The principle of maximum entropy is useful explicitly only when applied totestable information. Testable information is a statement about a probability distribution whose truth or falsity is well-defined. For example, the statements and (wherep2{\displaystyle p_{2}}andp3{\displaystyle p_{3}}are probabilities of events) are statements of testable information. Given testable information, the maximum entropy procedure consists of seeking theprobability distributionwhich maximizesinformation entropy, subject to the constraints of the information. 
This constrained optimization problem is typically solved using the method ofLagrange multipliers.[3] Entropy maximization with no testable information respects the universal "constraint" that the sum of the probabilities is one. Under this constraint, the maximum entropy discrete probability distribution is theuniform distribution, The principle of maximum entropy is commonly applied in two ways to inferential problems: The principle of maximum entropy is often used to obtainprior probability distributionsforBayesian inference. Jaynes was a strong advocate of this approach, claiming the maximum entropy distribution represented the least informative distribution.[4]A large amount of literature is now dedicated to the elicitation of maximum entropy priors and links withchannel coding.[5][6][7][8] Maximum entropy is a sufficient updating rule forradical probabilism.Richard Jeffrey'sprobability kinematicsis a special case ofmaximum entropy inference. However, maximum entropy is not a generalisation of all such sufficient updating rules.[9] Alternatively, the principle is often invoked for model specification: in this case the observed data itself is assumed to be the testable information. Such models are widely used innatural language processing. An example of such a model islogistic regression, which corresponds to themaximum entropy classifierfor independent observations. One of the main applications of the maximum entropy principle is in discrete and continuousdensity estimation.[10][11]Similar tosupport vector machineestimators, the maximum entropy principle may require the solution to aquadratic programmingproblem, and thus provide a sparse mixture model as the optimal density estimator. One important advantage of the method is its ability to incorporate prior information in the density estimation.[12] We have some testable informationIabout a quantityxtaking values in {x1,x2,...,xn}. We assume this information has the form ofmconstraints on the expectations of the functionsfk; that is, we require our probability distribution to satisfy the moment inequality/equality constraints: where theFk{\displaystyle F_{k}}are observables. We also require the probability density to sum to one, which may be viewed as a primitive constraint on the identity function and an observable equal to 1 giving the constraint The probability distribution with maximum information entropy subject to these inequality/equality constraints is of the form:[10] for someλ1,…,λm{\displaystyle \lambda _{1},\ldots ,\lambda _{m}}. It is sometimes called theGibbs distribution. The normalization constant is determined by: and is conventionally called thepartition function. (ThePitman–Koopman theoremstates that the necessary and sufficient condition for a sampling distribution to admitsufficient statisticsof bounded dimension is that it have the general form of a maximum entropy distribution.) The λkparameters are Lagrange multipliers. In the case of equality constraints their values are determined from the solution of the nonlinear equations In the case of inequality constraints, the Lagrange multipliers are determined from the solution of aconvex optimizationprogram with linear constraints.[10]In both cases, there is noclosed form solution, and the computation of the Lagrange multipliers usually requiresnumerical methods. Forcontinuous distributions, the Shannon entropy cannot be used, as it is only defined for discrete probability spaces. 
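Before turning to the continuous case, here is a minimal sketch of the discrete problem just described: a maximum-entropy distribution over die faces subject to a single (hypothetical) mean constraint, obtained by solving for the one Lagrange multiplier of the Gibbs form:

```python
# Maximum-entropy distribution on {1,...,6} with E[x] = 4.5 (illustrative constraint),
# using the exponential-family form p_k proportional to exp(-lambda * x_k).
import numpy as np
from scipy.optimize import brentq

x = np.arange(1, 7)
target_mean = 4.5

def mean_given_lambda(lam):
    w = np.exp(-lam * x)
    p = w / w.sum()            # Gibbs distribution with multiplier lam
    return p @ x

# Solve the nonlinear moment equation E[x] = 4.5 for the multiplier.
lam = brentq(lambda l: mean_given_lambda(l) - target_mean, -5, 5)
w = np.exp(-lam * x)
p = w / w.sum()

print("lambda =", round(lam, 4))
print("p =", np.round(p, 4), "mean =", round(p @ x, 4))
```

With more than one moment constraint the same idea applies, but the multipliers must be found jointly, typically by numerical optimization rather than a one-dimensional root search.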
InsteadEdwin Jaynes(1963, 1968, 2003) gave the following formula, which is closely related to therelative entropy(see alsodifferential entropy). whereq(x), which Jaynes called the "invariant measure", is proportional to thelimiting density of discrete points. For now, we shall assume thatqis known; we will discuss it further after the solution equations are given. A closely related quantity, the relative entropy, is usually defined as theKullback–Leibler divergenceofpfromq(although it is sometimes, confusingly, defined as the negative of this). The inference principle of minimizing this, due to Kullback, is known as thePrinciple of Minimum Discrimination Information. We have some testable informationIabout a quantityxwhich takes values in someintervalof thereal numbers(all integrals below are over this interval). We assume this information has the form ofmconstraints on the expectations of the functionsfk, i.e. we require our probability density function to satisfy the inequality (or purely equality) moment constraints: where theFk{\displaystyle F_{k}}are observables. We also require the probability density to integrate to one, which may be viewed as a primitive constraint on the identity function and an observable equal to 1 giving the constraint The probability density function with maximumHcsubject to these constraints is:[11] with thepartition functiondetermined by As in the discrete case, in the case where all moment constraints are equalities, the values of theλk{\displaystyle \lambda _{k}}parameters are determined by the system of nonlinear equations: In the case with inequality moment constraints the Lagrange multipliers are determined from the solution of aconvex optimizationprogram.[11] The invariant measure functionq(x) can be best understood by supposing thatxis known to take values only in thebounded interval(a,b), and that no other information is given. Then the maximum entropy probability density function is whereAis a normalization constant. The invariant measure function is actually the prior density function encoding 'lack of relevant information'. It cannot be determined by the principle of maximum entropy, and must be determined by some other logical method, such as theprinciple of transformation groupsormarginalization theory. For several examples of maximum entropy distributions, see the article onmaximum entropy probability distributions. Proponents of the principle of maximum entropy justify its use in assigning probabilities in several ways, including the following two arguments. These arguments take the use ofBayesian probabilityas given, and are thus subject to the same postulates. Consider adiscrete probability distributionamongm{\displaystyle m}mutually exclusivepropositions. The most informative distribution would occur when one of the propositions was known to be true. In that case, the information entropy would be equal to zero. The least informative distribution would occur when there is no reason to favor any one of the propositions over the others. In that case, the only reasonable probability distribution would be uniform, and then the information entropy would be equal to its maximum possible value,log⁡m{\displaystyle \log m}. The information entropy can therefore be seen as a numerical measure which describes how uninformative a particular probability distribution is, ranging from zero (completely informative) tolog⁡m{\displaystyle \log m}(completely uninformative). 
By choosing to use the distribution with the maximum entropy allowed by our information, the argument goes, we are choosing the most uninformative distribution possible. To choose a distribution with lower entropy would be to assume information we do not possess. Thus the maximum entropy distribution is the only reasonable distribution. Thedependence of the solutionon the dominating measure represented bym(x){\displaystyle m(x)}is however a source of criticisms of the approach since this dominating measure is in fact arbitrary.[13] The following argument is the result of a suggestion made byGraham Wallisto E. T. Jaynes in 1962.[14]It is essentially the same mathematical argument used for theMaxwell–Boltzmann statisticsinstatistical mechanics, although the conceptual emphasis is quite different. It has the advantage of being strictly combinatorial in nature, making no reference to information entropy as a measure of 'uncertainty', 'uninformativeness', or any other imprecisely defined concept. The information entropy function is not assumeda priori, but rather is found in the course of the argument; and the argument leads naturally to the procedure of maximizing the information entropy, rather than treating it in some other way. Suppose an individual wishes to make a probability assignment amongm{\displaystyle m}mutually exclusivepropositions. They have some testable information, but are not sure how to go about including this information in their probability assessment. They therefore conceive of the following random experiment. They will distributeN{\displaystyle N}quanta of probability (each worth1/N{\displaystyle 1/N}) at random among them{\displaystyle m}possibilities. (One might imagine that they will throwN{\displaystyle N}balls intom{\displaystyle m}buckets while blindfolded. In order to be as fair as possible, each throw is to be independent of any other, and every bucket is to be the same size.) Once the experiment is done, they will check if the probability assignment thus obtained is consistent with their information. (For this step to be successful, the information must be a constraint given by an open set in the space of probability measures). If it is inconsistent, they will reject it and try again. If it is consistent, their assessment will be wherepi{\displaystyle p_{i}}is the probability of thei{\displaystyle i}thproposition, whileniis the number of quanta that were assigned to thei{\displaystyle i}thproposition (i.e. the number of balls that ended up in bucketi{\displaystyle i}). Now, in order to reduce the 'graininess' of the probability assignment, it will be necessary to use quite a large number of quanta of probability. Rather than actually carry out, and possibly have to repeat, the rather long random experiment, the protagonist decides to simply calculate and use the most probable result. The probability of any particular result is themultinomial distribution, where is sometimes known as the multiplicity of the outcome. The most probable result is the one which maximizes the multiplicityW{\displaystyle W}. Rather than maximizingW{\displaystyle W}directly, the protagonist could equivalently maximize any monotonic increasing function ofW{\displaystyle W}. They decide to maximize At this point, in order to simplify the expression, the protagonist takes the limit asN→∞{\displaystyle N\to \infty }, i.e. as the probability levels go from grainy discrete values to smooth continuous values. 
UsingStirling's approximation, they find All that remains for the protagonist to do is to maximize entropy under the constraints of their testable information. They have found that the maximum entropy distribution is the most probable of all "fair" random distributions, in the limit as the probability levels go from discrete to continuous. Giffin and Caticha (2007) state thatBayes' theoremand the principle of maximum entropy are completely compatible and can be seen as special cases of the "method of maximum relative entropy". They state that this method reproduces every aspect of orthodox Bayesian inference methods. In addition this new method opens the door to tackling problems that could not be addressed by either the maximal entropy principle or orthodox Bayesian methods individually. Moreover, recent contributions (Lazar 2003, and Schennach 2005) show that frequentist relative-entropy-based inference approaches (such asempirical likelihoodandexponentially tilted empirical likelihood– see e.g. Owen 2001 and Kitamura 2006) can be combined with prior information to perform Bayesian posterior analysis. Jaynes stated Bayes' theorem was a way to calculate a probability, while maximum entropy was a way to assign a prior probability distribution.[15] It is however, possible in concept to solve for a posterior distribution directly from a stated prior distribution using theprinciple of minimum cross-entropy(or the Principle of Maximum Entropy being a special case of using auniform distributionas the given prior), independently of any Bayesian considerations by treating the problem formally as a constrained optimisation problem, the Entropy functional being the objective function. For the case of given average values as testable information (averaged over the sought after probability distribution), the sought after distribution is formally theGibbs (or Boltzmann) distributionthe parameters of which must be solved for in order to achieve minimum cross entropy and satisfy the given testable information. The principle of maximum entropy bears a relation to a key assumption ofkinetic theory of gasesknown asmolecular chaosorStosszahlansatz. This asserts that the distribution function characterizing particles entering a collision can be factorized. Though this statement can be understood as a strictly physical hypothesis, it can also be interpreted as a heuristic hypothesis regarding the most probable configuration of particles before colliding.[16]
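Returning to the Wallis argument above, a small numeric check (with illustrative proportions only) that the per-quantum log-multiplicity approaches the Shannon entropy as the number of probability quanta grows:

```python
# (1/N) * log W, with W the multinomial multiplicity, approaches the Shannon
# entropy of the proportions as N -> infinity.
import numpy as np
from scipy.special import gammaln

p = np.array([0.5, 0.3, 0.2])                      # assumed target proportions

def log_multiplicity_per_quantum(N):
    n = np.round(p * N).astype(int)                # quanta assigned to each bucket
    log_W = gammaln(n.sum() + 1) - gammaln(n + 1).sum()   # log(N! / prod n_i!)
    return log_W / n.sum()

for N in (10, 100, 10_000, 1_000_000):
    print(N, round(log_multiplicity_per_quantum(N), 5))

print("Shannon entropy:", round(-(p * np.log(p)).sum(), 5))   # limit of the values above
```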
https://en.wikipedia.org/wiki/Principle_of_maximum_entropy
In statistical theory, a pseudolikelihood is an approximation to the joint probability distribution of a collection of random variables. The practical use of this is that it can provide an approximation to the likelihood function of a set of observed data which may either provide a computationally simpler problem for estimation, or may provide a way of obtaining explicit estimates of model parameters. The pseudolikelihood approach was introduced by Julian Besag[1] in the context of analysing data having spatial dependence. Given a set of random variables X = X_1, X_2, \ldots, X_n, the pseudolikelihood of X = x = (x_1, x_2, \ldots, x_n) is

L(\theta) := \prod_{i=1}^{n} \Pr_\theta(X_i = x_i \mid X_{-i} = x_{-i})

in the discrete case and

L(\theta) := \prod_{i=1}^{n} p_\theta(x_i \mid x_{-i})

in the continuous one. Here X is a vector of variables, x is a vector of values, p_\theta(\cdot \mid \cdot) is the conditional density, and \theta = (\theta_1, \ldots, \theta_p) is the vector of parameters we are to estimate. The expression X = x above means that each variable X_i in the vector X has a corresponding value x_i in the vector x, and x_{-i} = (x_1, \ldots, \hat{x}_i, \ldots, x_n) means that the coordinate x_i has been omitted. The expression \Pr_\theta(X = x) is the probability that the vector of variables X has values equal to the vector x. This probability of course depends on the unknown parameter \theta. Because situations can often be described using state variables ranging over a set of possible values, the expression \Pr_\theta(X = x) can therefore represent the probability of a certain state among all possible states allowed by the state variables. The pseudo-log-likelihood is a similar measure derived from the above expression, namely (in the discrete case)

\ell(\theta) := \log L(\theta) = \sum_{i=1}^{n} \log \Pr_\theta(X_i = x_i \mid X_{-i} = x_{-i}).

One use of the pseudolikelihood measure is as an approximation for inference about a Markov or Bayesian network, as the pseudolikelihood of an assignment to X_i may often be computed more efficiently than the likelihood, particularly when the latter may require marginalization over a large number of variables. Use of the pseudolikelihood in place of the true likelihood function in a maximum likelihood analysis can lead to good estimates, but a straightforward application of the usual likelihood techniques to derive information about estimation uncertainty, or for significance testing, would in general be incorrect.[2]
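A minimal sketch of the idea on a toy one-dimensional Ising-style model (the spin data and the pairwise model are assumptions for illustration, not taken from the article): the pseudo-log-likelihood sums, over sites, the log conditional probability of each spin given its neighbours, and maximizing it gives a pseudolikelihood estimate of the coupling parameter without ever computing the intractable normalizing constant of the joint distribution.

```python
import numpy as np

x = np.array([1, 1, -1, 1, -1, -1, 1, 1, 1, -1])   # hypothetical +/-1 spins

def pseudo_log_likelihood(theta, x):
    n, total = len(x), 0.0
    for i in range(n):
        # Sum of the (up to two) neighbouring spins of site i.
        s = (x[i - 1] if i > 0 else 0) + (x[i + 1] if i < n - 1 else 0)
        # Full conditional P(x_i = +1 | neighbours) for a pairwise model
        # with coupling theta: 1 / (1 + exp(-2 * theta * s)).
        p_plus = 1.0 / (1.0 + np.exp(-2.0 * theta * s))
        p_i = p_plus if x[i] == 1 else 1.0 - p_plus
        total += np.log(p_i)
    return total

# Maximum pseudolikelihood estimate of theta by a crude grid search.
grid = np.linspace(-2, 2, 401)
theta_hat = grid[np.argmax([pseudo_log_likelihood(t, x) for t in grid])]
print("pseudolikelihood estimate of theta:", round(theta_hat, 3))
```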
https://en.wikipedia.org/wiki/Pseudolikelihood
Instatistics, thescore(orinformant[1]) is thegradientof thelog-likelihood functionwith respect to theparameter vector. Evaluated at a particular value of the parameter vector, the score indicates thesteepnessof the log-likelihood function and thereby the sensitivity toinfinitesimalchanges to the parameter values. If the log-likelihood function iscontinuousover theparameter space, the score willvanishat a localmaximum or minimum; this fact is used inmaximum likelihood estimationto find the parameter values that maximize the likelihood function. Since the score is a function of theobservations, which are subject tosampling error, it lends itself to atest statisticknown asscore testin which the parameter is held at a particular value. Further, theratio of two likelihood functionsevaluated at two distinct parameter values can be understood as adefinite integralof the score function.[2] The score is thegradient(the vector ofpartial derivatives) oflog⁡L(θ;x){\displaystyle \log {\mathcal {L}}(\theta ;x)}, thenatural logarithmof thelikelihood function, with respect to anm-dimensional parameter vectorθ{\displaystyle \theta }. This differentiation yields a(1×m){\displaystyle (1\times m)}row vector at each value ofθ{\displaystyle \theta }andx{\displaystyle x}, and indicates the sensitivity of the likelihood (its derivative normalized by its value). In older literature,[citation needed]"linear score" may refer to the score with respect to infinitesimal translation of a given density. This convention arises from a time when the primary parameter of interest was the mean or median of a distribution. In this case, the likelihood of an observation is given by a density of the form[clarification needed]L(θ;X)=f(X+θ){\displaystyle {\mathcal {L}}(\theta ;X)=f(X+\theta )}. The "linear score" is then defined as While the score is a function ofθ{\displaystyle \theta }, it also depends on the observationsx=(x1,x2,…,xT){\displaystyle \mathbf {x} =(x_{1},x_{2},\ldots ,x_{T})}at which the likelihood function is evaluated, and in view of the random character of sampling one may take itsexpected valueover thesample space. Under certain regularity conditions on the density functions of the random variables,[3][4]the expected value of the score, evaluated at the true parameter valueθ{\displaystyle \theta }, is zero. To see this, rewrite the likelihood functionL{\displaystyle {\mathcal {L}}}as aprobability density functionL(θ;x)=f(x;θ){\displaystyle {\mathcal {L}}(\theta ;x)=f(x;\theta )}, and denote thesample spaceX{\displaystyle {\mathcal {X}}}. Then: The assumed regularity conditions allow the interchange of derivative and integral (seeLeibniz integral rule), hence the above expression may be rewritten as[clarification needed] It is worth restating the above result in words: the expected value of the score, at true parameter valueθ{\displaystyle \theta }is zero. Thus, if one were to repeatedly sample from some distribution, and repeatedly calculate the score, then the mean value of the scores would tend to zeroasymptotically. Thevarianceof the score,Var⁡(s(θ))=E⁡(s(θ)s(θ)T){\displaystyle \operatorname {Var} (s(\theta ))=\operatorname {E} (s(\theta )s(\theta )^{\mathsf {T}})}, can be derived from the above expression for the expected value. Hence the variance of the score is equal to the negative expected value of theHessian matrixof the log-likelihood.[5] The latter is known as theFisher informationand is writtenI(θ){\displaystyle {\mathcal {I}}(\theta )}. 
Note that the Fisher information is not a function of any particular observation, as the random variable X has been averaged out. This concept of information is useful when comparing two methods of observation of some random process. Consider observing the first n trials of a Bernoulli process, and seeing that A of them are successes and the remaining B are failures, where the probability of success is \theta. Then the likelihood \mathcal{L} is

\mathcal{L}(\theta; A, B) = \theta^{A} (1 - \theta)^{B},

so the score s is

s = \frac{A}{\theta} - \frac{B}{1 - \theta}.

We can now verify that the expectation of the score is zero. Noting that the expectation of A is n\theta and the expectation of B is n(1 − \theta) [recall that A and B are random variables], we can see that the expectation of s is

\operatorname{E}(s) = \frac{n\theta}{\theta} - \frac{n(1 - \theta)}{1 - \theta} = n - n = 0.

We can also check the variance of s. We know that A + B = n (so B = n − A) and the variance of A is n\theta(1 − \theta), so the variance of s is

\operatorname{Var}(s) = \left(\frac{1}{\theta} + \frac{1}{1 - \theta}\right)^{2} \operatorname{Var}(A) = \frac{n}{\theta(1 - \theta)}.

For models with binary outcomes (Y = 1 or 0), the model can be scored with the logarithm of predictions

S = Y \log(p) + (1 - Y)\log(1 - p),

where p is the probability in the model to be estimated and S is the score.[6] The scoring algorithm is an iterative method for numerically determining the maximum likelihood estimator. Note that s is a function of \theta and the observation \mathbf{x} = (x_1, x_2, \ldots, x_T), so that, in general, it is not a statistic. However, in certain applications, such as the score test, the score is evaluated at a specific value of \theta (such as a null-hypothesis value), in which case the result is a statistic. Intuitively, if the restricted estimator is near the maximum of the likelihood function, the score should not differ from zero by more than sampling error. In 1948, C. R. Rao first proved that the square of the score divided by the information matrix follows an asymptotic \chi^2-distribution under the null hypothesis.[7] Further note that the likelihood-ratio test is given by

\lambda_{\text{LR}} = 2\left[\ell(\hat{\theta}) - \ell(\theta_0)\right] = 2\int_{\theta_0}^{\hat{\theta}} s(\theta)\, d\theta,

which means that the likelihood-ratio test can be understood as the area under the score function between \theta_0 and \hat{\theta}.[8] The term "score function" may initially seem unrelated to its contemporary meaning, which centers around the derivative of the log-likelihood function in statistical models. This apparent discrepancy can be traced back to the term's historical origins. The concept of the "score function" was first introduced by British statistician Ronald Fisher in his 1935 paper titled "The Detection of Linkage with 'Dominant' Abnormalities."[9] Fisher employed the term in the context of genetic analysis, specifically for families where a parent had a dominant genetic abnormality. Over time, the application and meaning of the "score function" have evolved, diverging from its original context but retaining its foundational principles.[10][11] Fisher's initial use of the term was in the context of analyzing genetic attributes in families with a parent possessing a genetic abnormality. He categorized the children of such parents into four classes based on two binary traits: whether they had inherited the abnormality or not, and their zygosity status as either homozygous or heterozygous. Fisher devised a method to assign each family a "score," calculated based on the number of children falling into each of the four categories. This score was used to estimate what he referred to as the "linkage parameter," which described the probability of the genetic abnormality being inherited.
Fisher evaluated the efficacy of his scoring rule by comparing it with an alternative rule and against what he termed the "ideal score." The ideal score was defined as the derivative of the logarithm of the sampling density, as mentioned on page 193 of his work.[9] The term "score" later evolved through subsequent research, notably expanding beyond the specific application in genetics that Fisher had initially addressed. Various authors adapted Fisher's original methodology to more generalized statistical contexts. In these broader applications, the term "score" or "efficient score" started to refer more commonly to the derivative of the log-likelihood function of the statistical model in question. This conceptual expansion was significantly influenced by a 1948 paper by C. R. Rao, which introduced "efficient score tests" that employed the derivative of the log-likelihood function.[12] Thus, what began as a specialized term in the realm of genetic statistics has evolved to become a fundamental concept in broader statistical theory, often associated with the derivative of the log-likelihood function.
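A small simulation sketch of the Bernoulli example given earlier in this article (the values of theta, n and the seed are arbitrary): the score s(\theta) = A/\theta − B/(1 − \theta) has mean zero at the true parameter, and its variance matches the Fisher information n/(\theta(1 − \theta)).

```python
import numpy as np

rng = np.random.default_rng(42)
theta, n, reps = 0.3, 50, 200_000

A = rng.binomial(n, theta, size=reps)        # successes in each replication
B = n - A                                    # failures
score = A / theta - B / (1 - theta)          # score evaluated at the true theta

print("mean of score:      ", round(score.mean(), 3))           # ~ 0
print("variance of score:  ", round(score.var(), 2))            # ~ Fisher information
print("Fisher information: ", round(n / (theta * (1 - theta)), 2))
```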
https://en.wikipedia.org/wiki/Score_(statistics)
In common usage,randomnessis the apparent or actual lack of definitepatternorpredictabilityin information.[1][2]A random sequence of events,symbolsor steps often has noorderand does not follow an intelligible pattern or combination. Individual random events are, by definition, unpredictable, but if there is a knownprobability distribution, the frequency of different outcomes over repeated events (or "trials") is predictable.[note 1]For example, when throwing twodice, the outcome of any particular roll is unpredictable, but a sum of 7 will tend to occur twice as often as 4. In this view, randomness is not haphazardness; it is a measure of uncertainty of an outcome. Randomness applies to concepts of chance,probability, andinformation entropy. The fields of mathematics, probability, and statistics use formal definitions of randomness, typically assuming that there is some 'objective' probability distribution. In statistics, arandom variableis an assignment of a numerical value to each possible outcome of anevent space. This association facilitates the identification and the calculation of probabilities of the events. Random variables can appear inrandom sequences. Arandom processis a sequence of random variables whose outcomes do not follow adeterministicpattern, but follow an evolution described byprobability distributions. These and other constructs are extremely useful inprobability theoryand the variousapplications of randomness. Randomness is most often used instatisticsto signify well-defined statistical properties.Monte Carlo methods, which rely on random input (such as fromrandom number generatorsorpseudorandom number generators), are important techniques in science, particularly in the field ofcomputational science.[3]By analogy,quasi-Monte Carlo methodsusequasi-random number generators. Random selection, when narrowly associated with asimple random sample, is a method of selecting items (often called units) from a population where the probability of choosing a specific item is the proportion of those items in the population. For example, with a bowl containing just 10 red marbles and 90 blue marbles, a random selection mechanism would choose a red marble with probability 1/10. A random selection mechanism that selected 10 marbles from this bowl would not necessarily result in 1 red and 9 blue. In situations where a population consists of items that are distinguishable, a random selection mechanism requires equal probabilities for any item to be chosen. That is, if the selection process is such that each member of a population, say research subjects, has the same probability of being chosen, then we can say the selection process is random.[2] According toRamsey theory, pure randomness (in the sense of there being no discernible pattern) is impossible, especially for large structures. MathematicianTheodore Motzkinsuggested that "while disorder is more probable in general, complete disorder is impossible".[4]Misunderstanding this can lead to numerousconspiracy theories.[5]Cristian S. Caludestated that "given the impossibility of true randomness, the effort is directed towards studying degrees of randomness".[6]It can be proven that there is infinite hierarchy (in terms of quality or strength) of forms of randomness.[6] In ancient history, the concepts of chance and randomness were intertwined with that of fate. Many ancient peoples threwdiceto determine fate, and this later evolved into games of chance. 
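A tiny simulation sketch of the two-dice claim above (the trial count is arbitrary): individual rolls are unpredictable, yet a sum of 7 turns up about twice as often as a sum of 4 (6/36 versus 3/36).

```python
import random

rolls = 1_000_000
counts = {4: 0, 7: 0}
for _ in range(rolls):
    s = random.randint(1, 6) + random.randint(1, 6)
    if s in counts:
        counts[s] += 1

print("P(sum=7) ~", counts[7] / rolls)   # about 0.167
print("P(sum=4) ~", counts[4] / rolls)   # about 0.083
```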
Most ancient cultures used various methods ofdivinationto attempt to circumvent randomness and fate.[7][8]Beyondreligionandgames of chance, randomness has been attested forsortitionsince at least ancientAthenian democracyin the form of akleroterion.[9] The formalization of odds and chance was perhaps earliest done by the Chinese of 3,000 years ago. The Greek philosophers discussed randomness at length, but only in non-quantitative forms. It was only in the 16th century that Italian mathematicians began to formalize the odds associated with various games of chance. The invention ofcalculushad a positive impact on the formal study of randomness. In the 1888 edition of his bookThe Logic of Chance,John Vennwrote a chapter onThe conception of randomnessthat included his view of the randomness of the digits ofpi(π), by using them to construct arandom walkin two dimensions.[10] The early part of the 20th century saw a rapid growth in the formal analysis of randomness, as various approaches to the mathematical foundations of probability were introduced. In the mid-to-late-20th century, ideas ofalgorithmic information theoryintroduced new dimensions to the field via the concept ofalgorithmic randomness. Although randomness had often been viewed as an obstacle and a nuisance for many centuries, in the 20th century computer scientists began to realize that thedeliberateintroduction of randomness into computations can be an effective tool for designing better algorithms. In some cases, suchrandomized algorithmseven outperform the best deterministic methods.[11] Many scientific fields are concerned with randomness: In the 19th century, scientists used the idea of random motions of molecules in the development ofstatistical mechanicsto explain phenomena inthermodynamicsandthe properties of gases. According to several standard interpretations ofquantum mechanics, microscopic phenomena are objectively random.[12]That is, in an experiment that controls all causally relevant parameters, some aspects of the outcome still vary randomly. For example, if a single unstableatomis placed in a controlled environment, it cannot be predicted how long it will take for the atom to decay—only the probability of decay in a given time.[13]Thus, quantum mechanics does not specify the outcome of individual experiments, but only the probabilities.Hidden variable theoriesreject the view that nature contains irreducible randomness: such theories posit that in the processes that appear random, properties with a certain statistical distribution are at work behind the scenes, determining the outcome in each case. Themodern evolutionary synthesisascribes the observed diversity of life to random geneticmutationsfollowed bynatural selection. The latter retains some random mutations in thegene pooldue to the systematically improved chance for survival and reproduction that those mutated genes confer on individuals who possess them. The location of the mutation is not entirely random however as e.g. biologically important regions may be more protected from mutations.[14][15][16] Several authors also claim that evolution (and sometimes development) requires a specific form of randomness, namely the introduction of qualitatively new behaviors. 
Instead of the choice of one possibility among several pre-given ones, this randomness corresponds to the formation of new possibilities.[17][18] The characteristics of an organism arise to some extent deterministically (e.g., under the influence of genes and the environment), and to some extent randomly. For example, thedensityoffrecklesthat appear on a person's skin is controlled by genes and exposure to light; whereas the exact location ofindividualfreckles seems random.[19] As far as behavior is concerned, randomness is important if an animal is to behave in a way that is unpredictable to others. For instance, insects in flight tend to move about with random changes in direction, making it difficult for pursuing predators to predict their trajectories. The mathematical theory ofprobabilityarose from attempts to formulate mathematical descriptions of chance events, originally in the context ofgambling, but later in connection with physics.Statisticsis used to infer an underlyingprobability distributionof a collection of empirical observations. For the purposes ofsimulation, it is necessary to have a large supply ofrandom numbers—or means to generate them on demand. Algorithmic information theorystudies, among other topics, what constitutes arandom sequence. The central idea is that a string ofbitsis random if and only if it is shorter than any computer program that can produce that string (Kolmogorov randomness), which means that random strings are those that cannot becompressed. Pioneers of this field includeAndrey Kolmogorovand his studentPer Martin-Löf,Ray Solomonoff, andGregory Chaitin. For the notion of infinite sequence, mathematicians generally acceptPer Martin-Löf's semi-eponymous definition: An infinite sequence is random if and only if it withstands all recursively enumerable null sets.[20]The other notions of random sequences include, among others, recursive randomness and Schnorr randomness, which are based on recursively computable martingales. It was shown byYongge Wangthat these randomness notions are generally different.[21] Randomness occurs in numbers such aslog(2)andpi. The decimal digits of pi constitute an infinite sequence and "never repeat in a cyclical fashion." Numbers like pi are also considered likely to benormal: Pi certainly seems to behave this way. In the first six billion decimal places of pi, each of the digits from 0 through 9 shows up about six hundred million times. Yet such results, conceivably accidental, do not prove normality even in base 10, much less normality in other number bases.[22] In statistics, randomness is commonly used to createsimple random samples. This allows surveys of completely random groups of people to provide realistic data that is reflective of the population. Common methods of doing this include drawing names out of a hat or using a random digit chart (a large table of random digits). In information science, irrelevant or meaningless data is considered noise. Noise consists of numerous transient disturbances, with a statistically randomized time distribution. Incommunication theory, randomness in a signal is called "noise", and is opposed to that component of its variation that is causally attributable to the source, the signal. 
In terms of the development of random networks, for communication randomness rests on the two simple assumptions ofPaul ErdősandAlfréd Rényi, who said that there were a fixed number of nodes and this number remained fixed for the life of the network, and that all nodes were equal and linked randomly to each other.[clarification needed][23] Therandom walk hypothesisconsiders that asset prices in an organizedmarketevolve at random, in the sense that the expected value of their change is zero but the actual value may turn out to be positive or negative. More generally, asset prices are influenced by a variety of unpredictable events in the general economic environment. Random selection can be an official method to resolvetiedelections in some jurisdictions.[24]Its use in politics originates long ago. Many offices inancient Athenswere chosen by lot instead of modern voting. Randomness can be seen as conflicting with thedeterministicideas of some religions, such as those where the universe is created by an omniscient deity who is aware of all past and future events. If the universe is regarded to have a purpose, then randomness can be seen as impossible. This is one of the rationales for religious opposition toevolution, which states thatnon-randomselection is applied to the results of random genetic variation. HinduandBuddhistphilosophies state that any event is the result of previous events, as is reflected in the concept ofkarma. As such, this conception is at odds with the idea of randomness, and any reconciliation between both of them would require an explanation.[25] In some religious contexts, procedures that are commonly perceived as randomizers are used for divination.Cleromancyuses the casting of bones or dice to reveal what is seen as the will of the gods. In most of its mathematical, political, social and religious uses, randomness is used for its innate "fairness" and lack of bias. Politics:Athenian democracywas based on the concept ofisonomia(equality of political rights), and used complex allotment machines to ensure that the positions on the ruling committees that ran Athens were fairly allocated.Allotmentis now restricted to selecting jurors in Anglo-Saxon legal systems, and in situations where "fairness" is approximated byrandomization, such as selectingjurorsand militarydraftlotteries. Games: Random numbers were first investigated in the context ofgambling, and many randomizing devices, such asdice,shuffling playing cards, androulettewheels, were first developed for use in gambling. The ability to produce random numbers fairly is vital to electronic gambling, and, as such, the methods used to create them are usually regulated by governmentGaming Control Boards. Random drawings are also used to determinelotterywinners. In fact, randomness has been used for games of chance throughout history, and to select out individuals for an unwanted task in a fair way (seedrawing straws). Sports: Some sports, includingAmerican football, usecoin tossesto randomly select starting conditions for games orseedtied teams forpostseason play. TheNational Basketball Associationuses a weightedlotteryto order teams in its draft. Mathematics: Random numbers are also employed where their use is mathematically important, such as sampling foropinion pollsand for statistical sampling inquality controlsystems. Computational solutions for some types of problems use random numbers extensively, such as in theMonte Carlo methodand ingenetic algorithms. 
Medicine: Random allocation of a clinical intervention is used to reduce bias in controlled trials (e.g.,randomized controlled trials). Religion: Although not intended to be random, various forms ofdivinationsuch ascleromancysee what appears to be a random event as a means for a divine being to communicate their will (see alsoFree willandDeterminismfor more). It is generally accepted that there exist three mechanisms responsible for (apparently) random behavior in systems: The manyapplications of randomnesshave led to many different methods for generating random data. These methods may vary as to how unpredictable orstatistically randomthey are, and how quickly they can generate random numbers. Before the advent of computationalrandom number generators, generating large amounts of sufficiently random numbers (which is important in statistics) required a lot of work. Results would sometimes be collected and distributed asrandom number tables. There are many practical measures of randomness for a binary sequence. These include measures based on frequency,discrete transforms,complexity, or a mixture of these, such as the tests by Kak, Phillips, Yuen, Hopkins, Beth and Dai, Mund, and Marsaglia and Zaman.[26] Quantum nonlocalityhas been used to certify the presence of genuine or strong form of randomness in a given string of numbers.[27] Popular perceptions of randomness are frequently mistaken, and are often based on fallacious reasoning or intuitions. This argument is, "In a random selection of numbers, since all numbers eventually appear, those that have not come up yet are 'due', and thus more likely to come up soon." This logic is only correct if applied to a system where numbers that come up are removed from the system, such as whenplaying cardsare drawn and not returned to the deck. In this case, once a jack is removed from the deck, the next draw is less likely to be a jack and more likely to be some other card. However, if the jack is returned to the deck, and the deck is thoroughly reshuffled, a jack is as likely to be drawn as any other card. The same applies in any other process where objects are selected independently, and none are removed after each event, such as the roll of a die, a coin toss, or mostlotterynumber selection schemes. Truly random processes such as these do not have memory, which makes it impossible for past outcomes to affect future outcomes. In fact, there is no finite number of trials that can guarantee a success. In a random sequence of numbers, a number may be said to be cursed because it has come up less often in the past, and so it is thought that it will occur less often in the future. A number may be assumed to be blessed because it has occurred more often than others in the past, and so it is thought likely to come up more often in the future. This logic is valid only if the randomisation might be biased, for example if a die is suspected to be loaded then its failure to roll enough sixes would be evidence of that loading. If the die is known to be fair, then previous rolls can give no indication of future events. In nature, events rarely occur with a frequency that is knowna priori, so observing outcomes to determine which events are more probable makes sense. However, it is fallacious to apply this logic to systems designed and known to make all outcomes equally likely, such as shuffled cards, dice, and roulette wheels. In the beginning of a scenario, one might calculate the probability of a certain event. 
However, as soon as one gains more information about the scenario, one may need to re-calculate the probability accordingly. For example, when being told that a woman has two children, one might be interested in knowing if either of them is a girl, and if yes, the probability that the other child is also a girl. Considering the two events independently, one might expect that the probability that the other child is female is ½ (50%), but by building aprobability spaceillustrating all possible outcomes, one would notice that the probability is actually only ⅓ (33%). To be sure, the probability space does illustrate four ways of having these two children: boy-boy, girl-boy, boy-girl, and girl-girl. But once it is known that at least one of the children is female, this rules out the boy-boy scenario, leaving only three ways of having the two children: boy-girl, girl-boy, girl-girl. From this, it can be seen only ⅓ of these scenarios would have the other child also be a girl[28](seeBoy or girl paradoxfor more). In general, by using a probability space, one is less likely to miss out on possible scenarios, or to neglect the importance of new information. This technique can be used to provide insights in other situations such as theMonty Hall problem, a game show scenario in which a car is hidden behind one of three doors, and two goats are hidden asbooby prizesbehind the others. Once the contestant has chosen a door, the host opens one of the remaining doors to reveal a goat, eliminating that door as an option. With only two doors left (one with the car, the other with another goat), the player must decide to either keep their decision, or to switch and select the other door. Intuitively, one might think the player is choosing between two doors with equal probability, and that the opportunity to choose another door makes no difference. However, an analysis of the probability spaces would reveal that the contestant has received new information, and that changing to the other door would increase their chances of winning.[28]
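A small Monte Carlo sketch of the Monty Hall scenario described above (a simulation of the standard rules, not part of the original text): switching wins roughly 2/3 of the time, sticking only about 1/3.

```python
import random

def play(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        # Host opens a door that is neither the contestant's pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print("stick: ", round(play(switch=False), 3))   # ~ 0.333
print("switch:", round(play(switch=True), 3))    # ~ 0.667
```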
https://en.wikipedia.org/wiki/Randomness
A numericsequenceis said to bestatistically randomwhen it contains no recognizablepatternsor regularities; sequences such as the results of an idealdice rollor the digits ofπexhibit statistical randomness.[1] Statistical randomness does not necessarily imply "true"randomness, i.e., objectiveunpredictability.Pseudorandomnessis sufficient for many uses, such as statistics, hence the namestatisticalrandomness. Global randomnessandlocal randomnessare different. Most philosophical conceptions of randomness are global—because they are based on the idea that "in the long run" a sequence looks truly random, even if certain sub-sequences wouldnotlook random. In a "truly" random sequence of numbers of sufficient length, for example, it is probable there would be long sequences of nothing but repeating numbers, though on the whole the sequence might be random.Localrandomness refers to the idea that there can be minimum sequence lengths in which random distributions are approximated. Long stretches of the same numbers, even those generated by "truly" random processes, would diminish the "local randomness" of a sample (it might only be locally random for sequences of 10,000 numbers; taking sequences of less than 1,000 might not appear random at all, for example). A sequence exhibiting a pattern is not thereby proved not statistically random. According to principles ofRamsey theory, sufficiently large objects must necessarily contain a given substructure ("complete disorder is impossible"). Legislation concerninggamblingimposes certain standards of statistical randomness toslot machines. The first tests for random numbers were published byM.G. KendallandBernard Babington Smithin theJournal of the Royal Statistical Societyin 1938.[2]They were built on statistical tools such asPearson's chi-squared testthat were developed to distinguish whether experimental phenomena matched their theoretical probabilities. Pearson developed his test originally by showing that a number of dice experiments byW.F.R. Weldondid not display "random" behavior. Kendall and Smith's original four tests werehypothesis tests, which took as theirnull hypothesisthe idea that each number in a given random sequence had an equal chance of occurring, and that various other patterns in the data should be also distributed equiprobably. If a given sequence was able to pass all of these tests within a given degree of significance (generally 5%), then it was judged to be, in their words "locally random". Kendall and Smith differentiated "local randomness" from "true randomness" in that many sequences generated with truly randommethodsmight not display "local randomness" to a given degree —verylarge sequences might contain many rows of a single digit. This might be "random" on the scale of the entire sequence, but in a smaller block it would not be "random" (it would not pass their tests), and would be useless for a number of statistical applications. As random number sets became more and more common, more tests, of increasing sophistication were used. Some modern tests plot random digits as points on a three-dimensional plane, which can then be rotated to look for hidden patterns. In 1995, the statisticianGeorge Marsagliacreated a set of tests known as thediehard tests, which he distributes with aCD-ROMof 5 billionpseudorandomnumbers. In 2015,Yongge Wangdistributed a Java software package[3]for statistically distance based randomness testing. 
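A minimal sketch of the simplest of the tests in this tradition, a frequency (equidistribution) test: a chi-squared check that each decimal digit occurs about equally often. The digit sequence here is pseudorandom and purely illustrative.

```python
import numpy as np
from scipy.stats import chisquare

rng = np.random.default_rng(7)
digits = rng.integers(0, 10, size=10_000)       # stand-in for a digit sequence

observed = np.bincount(digits, minlength=10)
stat, p_value = chisquare(observed)             # expected frequencies are uniform

print("counts:", observed)
print(f"chi-squared = {stat:.2f}, p = {p_value:.3f}")
# A very small p-value flags the sequence as failing this particular test;
# passing it is, of course, not proof of randomness.
```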
Pseudorandom number generators require such tests as the only means of verifying their "randomness," as they are decidedly not produced by "truly random" processes, but rather by deterministic algorithms. Over the history of random number generation, many sources of numbers thought to appear "random" under testing have later been discovered to be very non-random when subjected to certain types of tests. The notion of quasi-random numbers was developed to circumvent some of these problems, though pseudorandom number generators are still extensively used in many applications (even ones known to be extremely "non-random"), as they are "good enough" for most purposes.
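To make the flavour of such tests concrete, here is a minimal sketch of an equidistribution (frequency) test in the spirit of Kendall and Smith's first test, using Pearson's chi-squared statistic at the 5% significance level mentioned above. It is an illustration only, not the authors' original procedure; the critical value 16.919 is the 95th percentile of the chi-squared distribution with 9 degrees of freedom.

import random
from collections import Counter

def frequency_test(digits):
    """Pearson chi-squared test that each decimal digit 0-9 occurs equally often."""
    n = len(digits)
    expected = n / 10
    counts = Counter(digits)
    chi2 = sum((counts.get(d, 0) - expected) ** 2 / expected for d in range(10))
    critical = 16.919  # 95th percentile of chi-squared with 9 degrees of freedom
    return chi2, chi2 < critical  # True means the sample is consistent with equidistribution

digits = [random.randrange(10) for _ in range(10_000)]
print(frequency_test(digits))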
https://en.wikipedia.org/wiki/Statistical_randomness
Intuitively, analgorithmically random sequence(orrandom sequence) is asequenceof binary digits that appears random to any algorithm running on a (prefix-free or not)universal Turing machine. The notion can be applied analogously to sequences on any finite alphabet (e.g. decimal digits). Random sequences are key objects of study inalgorithmic information theory. Inmeasure-theoretic probability theory, introduced byAndrey Kolmogorovin 1933, there isno such thingas a random sequence. For example, consider flipping a fair coin infinitely many times. Any particular sequence, be it0000…{\displaystyle 0000\dots }or011010…{\displaystyle 011010\dots }, has equal probability of exactly zero. There is no way to state that one sequence is "more random" than another sequence, using the language of measure-theoretic probability. However, it is intuitively obvious that011010…{\displaystyle 011010\dots }looks more random than0000…{\displaystyle 0000\dots }. Algorithmic randomness theory formalizes this intuition. As different types of algorithms are sometimes considered, ranging from algorithms with specific bounds on their running time to algorithms which may ask questions of anoracle machine, there are different notions of randomness. The most common of these is known asMartin-Löf randomness(K-randomnessor1-randomness), but stronger and weaker forms of randomness also exist. When the term "algorithmically random" is used to refer to a particular single (finite or infinite) sequence without clarification, it is usually taken to mean "incompressible" or, in the case the sequence is infinite and prefix algorithmically random (i.e., K-incompressible), "Martin-Löf–Chaitin random". Since its inception, Martin-Löf randomness has been shown to admit many equivalent characterizations—in terms ofcompression, randomness tests, andgambling—that bear little outward resemblance to the original definition, but each of which satisfies our intuitive notion of properties that random sequences ought to have: random sequences should be incompressible, they should pass statistical tests for randomness, and it should be difficult to make moneybettingon them. The existence of these multiple definitions of Martin-Löf randomness, and the stability of these definitions under different models of computation, give evidence that Martin-Löf randomness is natural and not an accident of Martin-Löf's particular model. It is important to disambiguate between algorithmic randomness and stochastic randomness. Unlike algorithmic randomness, which is defined for computable (and thus deterministic) processes, stochastic randomness is usually said to be a property of a sequence that is a priori known to be generated by (or is the outcome of) anindependentidenticallydistributedequiprobablestochastic process. Because infinite sequences of binary digits can be identified with real numbers in the unit interval, random binary sequences are often called(algorithmically) random real numbers. Additionally, infinite binary sequences correspond to characteristic functions of sets of natural numbers; therefore those sequences might be seen as sets of natural numbers. The class of all Martin-Löf random (binary) sequences is denoted by RAND or MLR. Richard von Misesformalized the notion of atest for randomnessin order to define a random sequence as one that passed all tests for randomness. 
He defined a "collective" (kollektiv) to be an infinite binary stringx1:∞{\displaystyle x_{1:\infty }}defined such that To pick out asubsequence, first pick a binary functionϕ{\displaystyle \phi }, such that given any binary stringx1:k{\displaystyle x_{1:k}}, it outputs either 0 or 1. If it outputs 1, then we addxk+1{\displaystyle x_{k+1}}to the subsequence, else we continue. In this definition, some admissible rules might abstain forever on some sequences, and thus fail to pick out an infinite subsequence. We only consider those that do pick an infinite subsequence. Stated in another way, each infinite binary string is a coin-flip game, and an admissible rule is a way for a gambler to decide when to place bets. A collective is a coin-flip game where there is no way for one gambler to do better than another over the long run. That is, there is no gambling system that works for the game. The definition generalizes from binary alphabet to countable alphabet: Usually the admissible rules are defined to be rules computable by a Turing machine, and we requirep=1/2{\displaystyle p=1/2}. With this, we have theMises–Wald–Church random sequences. This is not a restriction, since given a sequence withp=1/2{\displaystyle p=1/2}, we can construct random sequences with any other computablep∈(0,1){\displaystyle p\in (0,1)}.[1](Here, "Church" refers toAlonzo Church, whose 1940 paper proposed using Turing-computable rules.[2]) Theorem(Abraham Wald, 1936, 1937)[3]If there are only countably many admissible rules, then almost any sequence is a collective. Proof sketch:Use measure-theoretic probability. Fix one admissible rule. Sample a random sequence from Bernoulli space. With probability 1 (use martingales), the subsequence picked by the admissible rule still haslimn1n∑i=1nxmi=p{\displaystyle \lim _{n}{\frac {1}{n}}\sum _{i=1}^{n}x_{m_{i}}=p}. Now add all the countably many rules. With probability 1, each subsequence picked by each rule still haslimn1n∑i=1nxmi=p{\displaystyle \lim _{n}{\frac {1}{n}}\sum _{i=1}^{n}x_{m_{i}}=p}. However, this definition was found not to be strong enough. Intuitively, the long-time average of a random sequence should oscillate on both sides ofp{\displaystyle p}, like how arandom walkshould cross the origin infinitely many times. However,Jean Villeshowed that, even with countably many rules, there exists a binary sequence that tends towardsp{\displaystyle p}fraction of ones, but, for every finite prefix, the fraction of ones is less thanp{\displaystyle p}.[4] Ville's Construction(Jean Ville, 1939) There exists a collective with countably many admissible rules such that, for alln{\displaystyle n},1n∑k=1nxk≤p{\displaystyle {\frac {1}{n}}\sum _{k=1}^{n}x_{k}\leq p}.[5] The Ville construction suggests that the Mises–Wald–Church sense of randomness is not good enough, because some random sequences do not satisfy some laws of randomness. For example, the Ville construction does not satisfy one of thelaws of the iterated logarithm:lim supn→∞−∑k=1n(xk−1/2)2nlog⁡log⁡n≠1{\displaystyle \limsup _{n\to \infty }{\frac {-\sum _{k=1}^{n}(x_{k}-1/2)}{\sqrt {2n\log \log n}}}\neq 1}Naively, one can fix this by requiring a sequence to satisfy all possible laws of randomness, where a "law of randomness" is a property that is satisfied by all sequences with probability 1. 
However, for each infinite sequencey1:∞∈2N{\displaystyle y_{1:\infty }\in 2^{\mathbb {N} }}, we have a law of randomness thatx1:∞≠y1:∞{\displaystyle x_{1:\infty }\neq y_{1:\infty }}, leading to the conclusion that there are no random sequences. (Per Martin-Löf, 1966)[6]defined "Martin-Löf randomness" by only allowing laws of randomness that are Turing-computable. In other words, a sequence is random iff it passes all Turing-computable tests of randomness. The thesis that the definition of Martin-Löf randomness "correctly" captures the intuitive notion of randomness has been called theMartin-Löf–Chaitin Thesis; it is somewhat similar to theChurch–Turing thesis.[7] Martin-Löf–Chaitin Thesis.The mathematical concept of "Martin-Löf randomness" captures the intuitive notion of an infinite sequence being "random". Church–Turing thesis.The mathematical concept of "computable by Turing machines" captures the intuitive notion of a function being "computable". Like how Turing-computability has many equivalent definitions, Martin-Löf randomness also has many equivalent definitions. See next section. Martin-Löf's original definition of a random sequence was in terms of constructive null covers; he defined a sequence to be random if it is not contained in any such cover.Gregory Chaitin,Leonid LevinandClaus-Peter Schnorrproved a characterization in terms ofalgorithmic complexity: a sequence is random if there is a uniform bound on the compressibility of its initial segments. Schnorr gave a third equivalent definition in terms ofmartingales. Li and Vitanyi's bookAn Introduction to Kolmogorov Complexity and Its Applicationsis the standard introduction to these ideas. The Kolmogorov complexity characterization conveys the intuition that a random sequence is incompressible: no prefix can be produced by a program much shorter than the prefix. The null cover characterization conveys the intuition that a random real number should not have any property that is "uncommon". Each measure 0 set can be thought of as an uncommon property. It is not possible for a sequence to lie in no measure 0 sets, because each one-point set has measure 0. Martin-Löf's idea was to limit the definition to measure 0 sets that are effectively describable; the definition of an effective null cover determines a countable collection of effectively describable measure 0 sets and defines a sequence to be random if it does not lie in any of these particular measure 0 sets. Since the union of a countable collection of measure 0 sets has measure 0, this definition immediately leads to the theorem that there is a measure 1 set of random sequences. Note that if we identify the Cantor space of binary sequences with the interval [0,1] of real numbers, the measure on Cantor space agrees withLebesgue measure. Aneffective measure0setcan be interpreted as a Turing machine that is able to tell, given an infinite binary string, whether the string looks random at levels of statistical significance. The set is the intersection of shrinking setsU1⊃U2⊃U3⊃⋯{\displaystyle U_{1}\supset U_{2}\supset U_{3}\supset \cdots }, and since each setUn{\displaystyle U_{n}}is specified by an enumerable sequence of prefixes, given any infinite binary string, if it is inUn{\displaystyle U_{n}}, then the Turing machine can decide in finite time that the string does fall insideUn{\displaystyle U_{n}}. Therefore, it can "reject the hypothesis that the string is random at significance level2−n{\displaystyle 2^{-n}}". 
If the Turing machine can reject the hypothesis at all significance levels, then the string is not random. A random string is one that, for each Turing-computable test of randomness, manages to remain forever un-rejected at some significance level.[8] The martingale characterization conveys the intuition that no effective procedure should be able to make money betting against a random sequence. A martingaledis a betting strategy.dreads a finite stringwand bets money on the next bit. It bets some fraction of its money that the next bit will be 0, and then remainder of its money that the next bit will be 1.ddoubles the money it placed on the bit that actually occurred, and it loses the rest.d(w) is the amount of money it has after seeing the stringw. Since the bet placed after seeing the stringwcan be calculated from the valuesd(w),d(w0), andd(w1), calculating the amount of money it has is equivalent to calculating the bet. The martingale characterization says that no betting strategy implementable by any computer (even in the weak sense of constructive strategies, which are not necessarilycomputable) can make money betting on a random sequence. There is auniversalconstructive martingaled. This martingale is universal in the sense that, given any constructive martingaled, ifdsucceeds on a sequence, thendsucceeds on that sequence as well. Thus,dsucceeds on every sequence in RANDc(but, sincedis constructive, it succeeds on no sequence in RAND). (Schnorr 1971) There is a constructive null cover of RANDc. This means that all effective tests for randomness (that is, constructive null covers) are, in a sense, subsumed by thisuniversaltest for randomness, since any sequence that passes this single test for randomness will pass all tests for randomness. (Martin-Löf 1966) Intuitively, this universal test for randomness says "If the sequence has increasingly long prefixes that can be increasingly well-compressed on this universal Turing machine", then it is not random." -- see next section. Construction sketch:Enumerate the effective null covers as((Um,n)n)m{\displaystyle ((U_{m,n})_{n})_{m}}. The enumeration is also effective (enumerated by a modified universal Turing machine). Now we have a universal effective null cover by diagonalization:(∪nUn,n+k+1)k{\displaystyle (\cup _{n}U_{n,n+k+1})_{k}}. If a sequence fails an algorithmic randomness test, then it is algorithmically compressible. Conversely, if it is algorithmically compressible, then it fails an algorithmic randomness test. Construction sketch:Suppose the sequence fails a randomness test, then it can be compressed by lexicographically enumerating all sequences that fails the test, then code for the location of the sequence in the list of all such sequences. This is called "enumerative source encoding".[9] Conversely, if the sequence is compressible, then by the pigeonhole principle, only a vanishingly small fraction of sequences are like that, so we candefinea new test for randomness by "has a compression by this universal Turing machine". Incidentally, this is theuniversaltest for randomness. For example, consider a binary sequence sampled IID from the Bernoulli distribution. After taking a large numberN{\displaystyle N}of samples, we should have aboutM≈pN{\displaystyle M\approx pN}ones. We can code for this sequence as "Generate all binary sequences with lengthN{\displaystyle N}, andM{\displaystyle M}ones. Of those, thei{\displaystyle i}-th sequence in lexicographic order.". 
ByStirling approximation,log2⁡(NpN)≈NH(p){\displaystyle \log _{2}{\binom {N}{pN}}\approx NH(p)}whereH{\displaystyle H}is thebinary entropy function. Thus, the number of bits in this description is:2(1+ϵ)log2⁡N+(1+ϵ)NH(p)+O(1){\displaystyle 2(1+\epsilon )\log _{2}N+(1+\epsilon )NH(p)+O(1)}The first term is for prefix-coding the numbersN{\displaystyle N}andM{\displaystyle M}. The second term is for prefix-coding the numberi{\displaystyle i}. (UseElias omega coding.) The third term is for prefix-coding the rest of the description. WhenN{\displaystyle N}is large, this description has just∼H(p)N{\displaystyle \sim H(p)N}bits, and so it is compressible, with compression ratio∼H(p){\displaystyle \sim H(p)}. In particular, the compression ratio is exactly one (incompressible) only whenp=1/2{\displaystyle p=1/2}. (Example 14.2.8[10]) Consider a casino offering fair odds at a roulette table. The roulette table generates a sequence of random numbers. If this sequence is algorithmically random, then there is no lower semi-computable strategy to win, which in turn implies that there is no computable strategy to win. That is, for any gambling algorithm, the long-term log-payoff is zero (neither positive nor negative). Conversely, if this sequence is not algorithmically random, then there is a lower semi-computable strategy to win. As each of the equivalent definitions of a Martin-Löf random sequence is based on what is computable by some Turing machine, one can naturally ask what is computable by a Turingoracle machine. For a fixed oracleA, a sequenceBwhich is not only random but in fact, satisfies the equivalent definitions for computability relative toA(e.g., no martingale which is constructive relative to the oracleAsucceeds onB) is said to be random relative toA. Two sequences, while themselves random, may contain very similar information, and therefore neither will be random relative to the other. Any time there is aTuring reductionfrom one sequence to another, the second sequence cannot be random relative to the first, just as computable sequences are themselves nonrandom; in particular, this means thatChaitin's Ωis not random relative to thehalting problem. An important result relating to relative randomness isvan Lambalgen's theorem, which states that ifCis the sequence composed fromAandBby interleaving the first bit ofA, the first bit ofB, the second bit ofA, the second bit ofB, and so on, thenCis algorithmically random if and only ifAis algorithmically random, andBis algorithmically random relative toA. A closely related consequence is that ifAandBare both random themselves, thenAis random relative toBif and only ifBis random relative toA. Relative randomness gives us the first notion which is stronger than Martin-Löf randomness, which is randomness relative to some fixed oracleA. For any oracle, this is at least as strong, and for most oracles, it is strictly stronger, since there will be Martin-Löf random sequences which are not random relative to the oracleA. Important oracles often considered are the halting problem,∅′{\displaystyle \emptyset '}, and thenth jump oracle,∅(n){\displaystyle \emptyset ^{(n)}}, as these oracles are able to answer specific questions which naturally arise. A sequence which is random relative to the oracle∅(n−1){\displaystyle \emptyset ^{(n-1)}}is calledn-random; a sequence is 1-random, therefore, if and only if it is Martin-Löf random. A sequence which isn-random for everynis called arithmetically random. 
The n-random sequences sometimes arise when considering more complicated properties. For example, there are only countably many $\Delta_2^0$ sets, so one might think that these should be non-random. However, the halting probability Ω is $\Delta_2^0$ and 1-random; it is only once 2-randomness is reached that it becomes impossible for a random set to be $\Delta_2^0$. Additionally, there are several notions of randomness which are weaker than Martin-Löf randomness, among them weak 1-randomness, Schnorr randomness, computable randomness, and partial computable randomness. Yongge Wang showed[11] that Schnorr randomness is different from computable randomness. Additionally, Kolmogorov–Loveland randomness is known to be no stronger than Martin-Löf randomness, but it is not known whether it is actually weaker. At the opposite end of the randomness spectrum is the notion of a K-trivial set. These sets are anti-random in that every initial segment is logarithmically compressible (i.e., $K(w) \leq K(|w|) + b$ for each initial segment w), but they are not computable.
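As a concrete illustration of the compression-based view of randomness described above (coding a Bernoulli sequence by its index among all strings with the same number of ones), the following sketch computes the resulting description length and compares it with N·H(p). The two-part prefix overhead is simplified here, so the exact constants are illustrative rather than exact.

from math import comb, log2

def description_bits(N: int, ones: int) -> float:
    """Bits to code a length-N binary string as (N, number of ones, index in
    lexicographic order among strings with that many ones); prefix-code overhead
    is approximated as 2*log2(N+1)."""
    return 2 * log2(N + 1) + log2(comb(N, ones))

def binary_entropy(p: float) -> float:
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

N = 10_000
for p in (0.5, 0.1):
    bits = description_bits(N, int(p * N))
    print(f"p={p}: {bits / N:.3f} bits/symbol vs H(p)={binary_entropy(p):.3f}")

For p = 1/2 the ratio is essentially 1 (incompressible), while for skewed p the description uses roughly H(p) bits per symbol, as in the example above.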
https://en.wikipedia.org/wiki/Algorithmically_random_sequence
Theseven states of randomnessinprobability theory,fractalsandrisk analysisare extensions of the concept ofrandomnessas modeled by thenormal distribution. These seven states were first introduced byBenoît Mandelbrotin his 1997 bookFractals and Scaling in Finance, which appliedfractal analysisto the study of risk and randomness.[1]This classification builds upon the three main states of randomness: mild, slow, and wild. The importance ofseven states of randomnessclassification formathematical financeis that methods such asMarkowitz mean variance portfolioandBlack–Scholes modelmay be invalidated as the tails of the distribution of returns arefattened: the former relies on finitestandard deviation(volatility) and stability ofcorrelation, while the latter is constructed uponBrownian motion. These seven states build on earlier work of Mandelbrot in 1963: "The variations of certain speculative prices"[2]and "New methods in statistical economics"[3]in which he argued that moststatistical modelsapproached only a first stage of dealing withindeterminismin science, and that they ignored many aspects of real worldturbulence, in particular, most cases offinancial modeling.[4][5]This was then presented by Mandelbrot in the International Congress for Logic (1964) in an address titled "The Epistemology of Chance in Certain Newer Sciences"[6] Intuitively speaking, Mandelbrot argued[6]that the traditional normal distribution does not properly capture empirical and "real world" distributions and there are other forms of randomness that can be used to model extreme changes in risk and randomness. He observed that randomness can become quite "wild" if the requirements regarding finitemeanandvarianceare abandoned. Wild randomness corresponds to situations in which a single observation, or a particular outcome can impact the total in a very disproportionate way. The classification was formally introduced in his 1997 bookFractals and Scaling in Finance,[1]as a way to bring insight into the three main states of randomness: mild, slow, and wild. GivenNaddends,portioningconcerns the relative contribution of the addends to their sum. Byevenportioning, Mandelbrot meant that the addends were of sameorder of magnitude, otherwise he considered the portioning to beconcentrated. Given themomentof orderqof arandom variable, Mandelbrot called the root of degreeqof such moment thescale factor(of orderq). The seven states are: Wild randomness has applications outside financial markets, e.g. it has been used in the analysis of turbulent situations such as wildforest fires.[7] Using elements of this distinction, in March 2006, before the2008 financial crisis, and four years before the2010 Flash Crash, during which theDow Jones Industrial Averagehad a 1,000 point intraday swing within minutes,[8]Mandelbrot andNassim Talebpublished an article in theFinancial Timesarguing that the traditional "bell curves" that have been in use for over a century are inadequate for measuring risk in financial markets, given that such curves disregard the possibility of sharp jumps or discontinuities. Contrasting this approach with the traditional approaches based onrandom walks, they stated:[9] We live in a world primarily driven by random jumps, and tools designed for random walks address the wrong problem. Mandelbrot and Taleb pointed out that although one can assume that the odds of finding a person who is several miles tall are extremely low, similar excessive observations cannot be excluded in other areas of application. 
They argued that while traditional bell curves may provide a satisfactory representation of height and weight in the population, they do not provide a suitable modeling mechanism for market risks or returns, where just ten trading days represent 63 per cent of the returns between 1956 and 2006.[dubious–discuss] If the probability density ofU=U′+U″{\displaystyle U=U'+U''}is denotedp2(u){\displaystyle p_{2}(u)}, then it can be obtained by the double convolutionp2(x)=∫p(u)p(x−u)du{\displaystyle p_{2}(x)=\int p(u)p(x-u)\,du}. Whenuis known, the conditional probability density ofu′ is given by the portioning ratio: In many important cases, the maximum ofp(u′)p(u−u′){\displaystyle p(u')p(u-u')}occurs nearu′=u/2{\displaystyle u'=u/2}, or nearu′=0{\displaystyle u'=0}andu′=u{\displaystyle u'=u}. Take the logarithm ofp(u′)p(u−u′){\displaystyle p(u')p(u-u')}and write: Splitting the doubling convolution into three parts gives: p(u) is short-run concentrated in probability if it is possible to selectu~(u){\displaystyle {\tilde {u}}(u)}so that the middle interval of (u~,u−u~{\displaystyle {\tilde {u}},u-{\tilde {u}}}) has the following two properties as u→∞: Consider the formulaE⁡[Uq]=∫0∞uqp(u)du{\displaystyle \operatorname {E} [U^{q}]=\int _{0}^{\infty }u^{q}p(u)\,du}, ifp(u) is thescaling distributionthe integrand is maximum at 0 and ∞, on other cases the integrand may have a sharp global maximum for some valueu~q{\displaystyle {\tilde {u}}_{q}}defined by the following equation: One must also knowuqp(u){\displaystyle u^{q}p(u)}in the neighborhood ofu~q{\displaystyle {\tilde {u}}_{q}}. The functionuqp(u){\displaystyle u^{q}p(u)}often admits a "Gaussian" approximation given by: Whenuqp(u){\displaystyle u^{q}p(u)}is well-approximated by a Gaussian density, the bulk ofE⁡[Uq]{\displaystyle \operatorname {E} [U^{q}]}originates in the "q-interval" defined as[u~q−σ~q,u~q+σ~q]{\displaystyle [{\tilde {u}}_{q}-{\tilde {\sigma }}_{q},{\tilde {u}}_{q}+{\tilde {\sigma }}_{q}]}. The Gaussianq-intervals greatly overlap for all values ofσ{\displaystyle \sigma }. The Gaussian moments are calleddelocalized. The lognormal'sq-intervals are uniformly spaced and their width is independent ofq; therefore, if the log-normal is sufficiently skew, theq-interval and (q+ 1)-interval do not overlap. The lognormal moments are calleduniformly localized. In other cases, neighboringq-intervals cease to overlap for sufficiently highq, such moments are calledasymptotically localized.
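As a rough numerical illustration of the contrast between mild and wild randomness (not taken from Mandelbrot's book), the sketch below compares how large a share of the total sum the single largest addend contributes for a light-tailed Gaussian sample and for a heavy-tailed Pareto sample with infinite variance; the distributions and parameters are arbitrary stand-ins chosen only to show concentrated versus even portioning.

import random

def max_share(samples):
    """Fraction of the total sum contributed by the single largest addend."""
    return max(samples) / sum(samples)

n = 100_000
gaussian = [abs(random.gauss(0, 1)) for _ in range(n)]    # mild: finite variance
pareto = [random.paretovariate(1.2) for _ in range(n)]    # wild-like: infinite variance (alpha < 2)

print(f"largest/total, Gaussian:    {max_share(gaussian):.5%}")
print(f"largest/total, Pareto(1.2): {max_share(pareto):.5%}")

In the Gaussian case the largest observation is a negligible fraction of the total, whereas in the heavy-tailed case a single observation can account for a sizeable share of the sum, the disproportionate impact described above.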
https://en.wikipedia.org/wiki/Seven_states_of_randomness
TheWald–Wolfowitz runs test(or simplyruns test), named after statisticiansAbraham WaldandJacob Wolfowitzis anon-parametricstatistical test that checks a randomness hypothesis for a two-valueddata sequence. More precisely, it can be used totest the hypothesisthat the elements of the sequence are mutuallyindependent. Arunof a sequence is a maximal non-empty segment of the sequence consisting of adjacent equal elements. For example, the 21-element-long sequence consists of 6 runs, with lengths 4, 3, 3, 1, 6, and 4. The run test is based on thenull hypothesisthat each element in the sequence is independently drawn from the same distribution. Under the null hypothesis, the number of runs in a sequence ofNelements[note 1]is arandom variablewhoseconditional distributiongiven the observation ofN+positive values[note 2]andN−negative values (N=N++N−) is approximately normal, with:[1][2] Equivalently, the number of runs isR=12(N++N−+1−∑i=1N−1xixi+1){\displaystyle R={\frac {1}{2}}(N_{+}+N_{-}+1-\sum _{i=1}^{N-1}x_{i}x_{i+1})}. These parameters do not assume that the positive and negative elements have equal probabilities of occurring, but only assume that the elements areindependent and identically distributed. If the number of runs issignificantlyhigher or lower than expected, the hypothesis of statistical independence of the elements may be rejected. The number of runs isR=12(N++N−+1−∑i=1N−1xixi+1){\displaystyle R={\frac {1}{2}}(N_{+}+N_{-}+1-\sum _{i=1}^{N-1}x_{i}x_{i+1})}. By independence, the expectation isE[R]=12(N+1−(N−1)E[x1x2]){\displaystyle E[R]={\frac {1}{2}}(N+1-(N-1)E[x_{1}x_{2}])}Writing out all possibilities, we findx1x2={+1with probabilityN+(N+−1)+N−(N−−1)N(N−1)−1with probability2N+N−N(N−1){\displaystyle x_{1}x_{2}={\begin{cases}+1\quad &{\text{ with probability }}{\frac {N_{+}(N_{+}-1)+N_{-}(N_{-}-1)}{N(N-1)}}\\-1\quad &{\text{ with probability }}{\frac {2N_{+}N_{-}}{N(N-1)}}\\\end{cases}}}Thus,E[x1x2]=(N+−N−)2−NN(N−1){\displaystyle E[x_{1}x_{2}]={\frac {(N_{+}-N_{-})^{2}-N}{N(N-1)}}}. Now simplify the expression to getE[R]=2N+N−N+1{\displaystyle E[R]={\frac {2\ N_{+}\ N_{-}}{N}}+1}. Similarly, the variance of the number of runs isVar[R]=14Var[∑i=1N−1xixi+1]=14((N−1)E[x1x2x1x2]+2(N−2)E[x1x2x2x3]+(N−2)(N−3)E[x1x2x3x4]−(N−1)2E[x1x2]2){\displaystyle Var[R]={\frac {1}{4}}Var[\sum _{i=1}^{N-1}x_{i}x_{i+1}]={\frac {1}{4}}((N-1)E[x_{1}x_{2}x_{1}x_{2}]+2(N-2)E[x_{1}x_{2}x_{2}x_{3}]+(N-2)(N-3)E[x_{1}x_{2}x_{3}x_{4}]-(N-1)^{2}E[x_{1}x_{2}]^{2})}and simplifying, we obtain the variance. Similarly we can calculate all moments ofR{\displaystyle R}, but the algebra becomes uglier and uglier. Theorem.If we sample longer and longer sequences, withlimN+/N=p{\displaystyle \lim N_{+}/N=p}for some fixedp∈(0,1){\displaystyle p\in (0,1)}, thenR−μσ∼N(R/μ−1){\displaystyle {\frac {R-\mu }{\sigma }}\sim {\sqrt {N}}(R/\mu -1)}converges in distribution to the normal distribution with mean 0 and variance 1. Proof sketch.It suffices to prove the asymptotic normality of the sequence∑i=1N−1xixi+1{\displaystyle \sum _{i=1}^{N-1}x_{i}x_{i+1}}, which can be proven by amartingale central limit theorem. Runs tests can be used to test: TheKolmogorov–Smirnov testhas been shown to be more powerful than the Wald–Wolfowitz test for detecting differences between distributions that differ solely in their location. 
However, the reverse is true if the distributions differ in variance and have at most only a small difference in location.[citation needed] The Wald–Wolfowitz runs test has been extended for use with several samples.[3][4][5][6]
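A minimal sketch of the runs test as described above: it counts the runs in a two-valued sequence, computes the conditional mean and variance given N+ and N−, and returns a normal-approximation z-score. The variance expression is the widely used closed form obtained from the derivation above; no continuity correction is applied.

from math import sqrt

def runs_test(xs):
    """Wald-Wolfowitz runs test for a two-valued (+/-) sequence; returns (runs, z-score)."""
    signs = [1 if x > 0 else -1 for x in xs]
    n_pos = signs.count(1)
    n_neg = signs.count(-1)
    n = n_pos + n_neg
    runs = 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)
    mu = 2 * n_pos * n_neg / n + 1
    var = 2 * n_pos * n_neg * (2 * n_pos * n_neg - n) / (n ** 2 * (n - 1))
    return runs, (runs - mu) / sqrt(var)

# The 21-element example from the article: 6 runs of lengths 4, 3, 3, 1, 6, 4.
seq = [1] * 4 + [-1] * 3 + [1] * 3 + [-1] * 1 + [1] * 6 + [-1] * 4
print(runs_test(seq))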
https://en.wikipedia.org/wiki/Wald%E2%80%93Wolfowitz_runs_test
In cryptography, a product cipher combines two or more transformations so that the resulting cipher is more secure than its individual components and thus more resistant to cryptanalysis.[1] The product cipher combines a sequence of simple transformations such as substitution (S-box), permutation (P-box), and modular arithmetic. The concept of product ciphers is due to Claude Shannon, who presented the idea in his foundational paper, Communication Theory of Secrecy Systems. A product cipher design in which all the constituent transformation functions have the same structure is called an iterative cipher, with the term "rounds" applied to the functions themselves.[2] For transformations involving a reasonable number n of message symbols, both of the foregoing cipher systems (the S-box and P-box) are by themselves inadequate. Shannon suggested using a combination of S-box and P-box transformations: a product cipher. The combination could yield a cipher system more powerful than either one alone. This approach of alternately applying substitution and permutation transformations was used by IBM in the Lucifer cipher system, and has become the basis for national data encryption standards such as the Data Encryption Standard and the Advanced Encryption Standard. A product cipher that uses only substitutions and permutations is called an SP-network. Feistel ciphers are an important class of product ciphers.
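To illustrate the substitution-permutation idea concretely, here is a toy one-round sketch on a 16-bit block. The S-box, bit permutation, and key are arbitrary examples, not taken from any standardized cipher, and a real product cipher would iterate many such rounds under a key schedule.

# Toy one-round substitution-permutation illustration on a 16-bit block.
SBOX = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
        0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]                 # 4-bit substitution (arbitrary example)
PERM = [0, 4, 8, 12, 1, 5, 9, 13, 2, 6, 10, 14, 3, 7, 11, 15]   # bit transposition (arbitrary example)

def sp_round(block: int, subkey: int) -> int:
    block ^= subkey                                              # key mixing
    nibbles = [(block >> (4 * i)) & 0xF for i in range(4)]
    block = sum(SBOX[n] << (4 * i) for i, n in enumerate(nibbles))   # substitution (S-box)
    bits = [(block >> i) & 1 for i in range(16)]
    return sum(bits[PERM[i]] << i for i in range(16))            # permutation (P-box)

print(hex(sp_round(0x1234, 0xBEEF)))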
https://en.wikipedia.org/wiki/Product_cipher
Incryptography, aFeistel cipher(also known asLuby–Rackoff block cipher) is asymmetric structureused in the construction ofblock ciphers, named after theGerman-bornphysicistand cryptographerHorst Feistel, who did pioneering research while working forIBM; it is also commonly known as aFeistel network. A large number ofblock ciphersuse the scheme, including the USData Encryption Standard, the Soviet/RussianGOSTand the more recentBlowfishandTwofishciphers. In a Feistel cipher, encryption and decryption are very similar operations, and both consist of iteratively running a function called a "round function" a fixed number of times. Many modern symmetric block ciphers are based on Feistel networks. Feistel networks were first seen commercially in IBM'sLucifercipher, designed byHorst FeistelandDon Coppersmithin 1973. Feistel networks gained respectability when the U.S. Federal Government adopted theDES(a cipher based on Lucifer, with changes made by theNSA) in 1976. Like other components of the DES, the iterative nature of the Feistel construction makes implementing the cryptosystem in hardware easier (particularly on the hardware available at the time of DES's design). A Feistel network uses around function, a function which takes two inputs – a data block and a subkey – and returns one output of the same size as the data block.[1]In each round, the round function is run on half of the data to be encrypted, and its output is XORed with the other half of the data. This is repeated a fixed number of times, and the final output is the encrypted data. An important advantage of Feistel networks compared to other cipher designs such assubstitution–permutation networksis that the entire operation is guaranteed to be invertible (that is, encrypted data can be decrypted), even if the round function is not itself invertible. The round function can be made arbitrarily complicated, since it does not need to be designed to be invertible.[2]: 465[3]: 347Furthermore, theencryptionanddecryptionoperations are very similar, even identical in some cases, requiring only a reversal of thekey schedule. Therefore, the size of the code or circuitry required to implement such a cipher is nearly halved. Unlike substitution-permutation networks, Feistel networks also do not depend on a substitution box that could cause timing side-channels in software implementations. The structure and properties of Feistel ciphers have been extensively analyzed bycryptographers. Michael LubyandCharles Rackoffanalyzed the Feistel cipher construction and proved that if the round function is a cryptographically securepseudorandom function, withKiused as the seed, then 3 rounds are sufficient to make the block cipher apseudorandom permutation, while 4 rounds are sufficient to make it a "strong" pseudorandom permutation (which means that it remains pseudorandom even to an adversary who getsoracleaccess to its inverse permutation).[4]Because of this very important result of Luby and Rackoff, Feistel ciphers are sometimes called Luby–Rackoff block ciphers. Further theoretical work has generalized the construction somewhat and given more precise bounds for security.[5][6] LetF{\displaystyle \mathrm {F} }be the round function and letK0,K1,…,Kn{\displaystyle K_{0},K_{1},\ldots ,K_{n}}be the sub-keys for the rounds0,1,…,n{\displaystyle 0,1,\ldots ,n}respectively. Then the basic operation is as follows: Split the plaintext block into two equal pieces: (L0{\displaystyle L_{0}},R0{\displaystyle R_{0}}). 
For each round $i = 0, 1, \ldots, n$, compute

$L_{i+1} = R_i$
$R_{i+1} = L_i \oplus \mathrm{F}(R_i, K_i)$

where $\oplus$ means XOR. Then the ciphertext is $(R_{n+1}, L_{n+1})$. Decryption of a ciphertext $(R_{n+1}, L_{n+1})$ is accomplished by computing, for $i = n, n-1, \ldots, 0$,

$R_i = L_{i+1}$
$L_i = R_{i+1} \oplus \mathrm{F}(L_{i+1}, K_i)$

Then $(L_0, R_0)$ is the plaintext again. The diagram illustrates both encryption and decryption. Note the reversal of the subkey order for decryption; this is the only difference between encryption and decryption. Unbalanced Feistel ciphers use a modified structure where $L_0$ and $R_0$ are not of equal lengths.[7] The Skipjack cipher is an example of such a cipher. The Texas Instruments digital signature transponder uses a proprietary unbalanced Feistel cipher to perform challenge–response authentication.[8] The Thorp shuffle is an extreme case of an unbalanced Feistel cipher in which one side is a single bit. This has better provable security than a balanced Feistel cipher but requires more rounds.[9] The Feistel construction is also used in cryptographic algorithms other than block ciphers. For example, the optimal asymmetric encryption padding (OAEP) scheme uses a simple Feistel network to randomize ciphertexts in certain asymmetric-key encryption schemes. A generalized Feistel algorithm can be used to create strong permutations on small domains of size not a power of two (see format-preserving encryption).[9] Whether the entire cipher is a Feistel cipher or not, Feistel-like networks can be used as a component of a cipher's design. For example, MISTY1 is a Feistel cipher using a three-round Feistel network in its round function, Skipjack is a modified Feistel cipher using a Feistel network in its G permutation, and Threefish (part of Skein) is a non-Feistel block cipher that uses a Feistel-like MIX function. Feistel or modified Feistel: Generalised Feistel:
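A minimal sketch of the balanced Feistel recurrence above, with a placeholder round function and subkeys (purely illustrative, not a secure cipher). It also demonstrates the point made above that decryption is the same network run with the subkey order reversed.

def feistel_encrypt(left: int, right: int, keys, f):
    """Balanced Feistel network: L_{i+1} = R_i, R_{i+1} = L_i XOR F(R_i, K_i).
    The ciphertext is the swapped final pair (R_{n+1}, L_{n+1})."""
    for k in keys:
        left, right = right, left ^ f(right, k)
    return right, left

def feistel_decrypt(ct_hi: int, ct_lo: int, keys, f):
    # Decryption is the same network run with the subkeys in reverse order.
    return feistel_encrypt(ct_hi, ct_lo, list(reversed(keys)), f)

# Placeholder round function and subkeys -- illustrative only, not a real cipher.
def f(half: int, key: int) -> int:
    return ((half * 0x9E3779B1) ^ key) & 0xFFFFFFFF

keys = [0xA5A5A5A5, 0x3C3C3C3C, 0x0F0F0F0F, 0xF0F0F0F0]
ct = feistel_encrypt(0x01234567, 0x89ABCDEF, keys, f)
pt = feistel_decrypt(*ct, keys, f)
print([hex(x) for x in pt])   # ['0x1234567', '0x89abcdef'] -- the original plaintext halves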
https://en.wikipedia.org/wiki/Feistel_scheme
In computer science, an online algorithm measures its competitiveness against different adversary models. For deterministic algorithms, all adversaries are equivalent to the adaptive offline adversary. For randomized online algorithms, competitiveness can depend upon the adversary model used. The three common adversaries are the oblivious adversary, the adaptive online adversary, and the adaptive offline adversary. The oblivious adversary is sometimes referred to as the weak adversary. This adversary knows the algorithm's code, but does not get to know the randomized results of the algorithm. The adaptive online adversary is sometimes called the medium adversary. This adversary must make its own decision before it is allowed to know the decision of the algorithm. The adaptive offline adversary is sometimes called the strong adversary. This adversary knows everything, even the random number generator. This adversary is so strong that randomization does not help against it. From S. Ben-David, A. Borodin, R. Karp, G. Tardos, and A. Wigderson we have:
https://en.wikipedia.org/wiki/Adversary_(online_algorithm)
Incomputer science,amortized analysisis a method foranalyzinga given algorithm'scomplexity, or how much of a resource, especially time or memory, it takes toexecute. The motivation for amortized analysis is that looking at the worst-case run time can be too pessimistic. Instead, amortized analysis averages the running times of operations in a sequence over that sequence.[1]: 306As a conclusion: "Amortized analysis is a useful tool that complements other techniques such asworst-caseandaverage-caseanalysis."[2]: 14[3] For a given operation of an algorithm, certain situations (e.g., input parametrizations or data structure contents) may imply a significant cost in resources, whereas other situations may not be as costly. The amortized analysis considers both the costly and less costly operations together over the whole sequence of operations. This may include accounting for different types of input, length of the input, and other factors that affect its performance.[2] Amortized analysis initially emerged from a method called aggregate analysis, which is now subsumed by amortized analysis. The technique was first formally introduced byRobert Tarjanin his 1985 paperAmortized Computational Complexity,[1]which addressed the need for a more useful form of analysis than the common probabilistic methods used. Amortization was initially used for very specific types of algorithms, particularly those involvingbinary treesandunionoperations. However, it is now ubiquitous and comes into play when analyzing many other algorithms as well.[2] Amortized analysis requires knowledge of which series of operations are possible. This is most commonly the case withdata structures, which havestatethat persists between operations. The basic idea is that a worst-case operation can alter the state in such a way that the worst case cannot occur again for a long time, thus "amortizing" its cost. There are generally three methods for performing amortized analysis: the aggregate method, theaccounting method, and thepotential method. All of these give correct answers; the choice of which to use depends on which is most convenient for a particular situation.[4] Consider adynamic arraythat grows in size as more elements are added to it, such asArrayListin Java orstd::vectorin C++. If we started out with a dynamic array of size 4, we could push 4 elements onto it, and each operation would takeconstant time. Yet pushing a fifth element onto that array would take longer as the array would have to create a new array of double the current size (8), copy the old elements onto the new array, and then add the new element. The next three push operations would similarly take constant time, and then the subsequent addition would require another slow doubling of the array size. In general, for an arbitrary numbern{\displaystyle n}of pushes to an array of any initial size, the times for steps that double the array add in ageometric seriestoO(n){\displaystyle O(n)}, while the constant times for each remaining push also add toO(n){\displaystyle O(n)}. Therefore the average time per push operation isO(n)/n=O(1){\displaystyle O(n)/n=O(1)}. This reasoning can be formalized and generalized to more complicated data structures using amortized analysis.[4] Shown is a Python3 implementation of aqueue, aFIFO data structure: The enqueue operation just pushes an element onto the input array; this operation does not depend on the lengths of either input or output and therefore runs in constant time. However the dequeue operation is more complicated. 
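A minimal sketch of the two-list queue described here, assuming an input list that receives enqueued elements and an output list that serves dequeues (the analysis in the following paragraph refers to these two arrays):

class Queue:
    # A FIFO queue built from two lists: enqueue pushes onto `input`;
    # dequeue pops from `output`, refilling it from `input` (reversed)
    # only when `output` is empty.
    def __init__(self):
        self.input = []
        self.output = []

    def enqueue(self, element):
        self.input.append(element)          # O(1)

    def dequeue(self):
        if not self.output:
            # Move everything across; this costs O(n) but happens rarely.
            while self.input:
                self.output.append(self.input.pop())
        return self.output.pop()            # amortized O(1)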
If the output array already has some elements in it, then dequeue runs in constant time; otherwise, dequeue takes O(n) time to add all the elements onto the output array from the input array, where n is the current length of the input array. After copying n elements from input, we can perform n dequeue operations, each taking constant time, before the output array is empty again. Thus, we can perform a sequence of n dequeue operations in only O(n) time, which implies that the amortized time of each dequeue operation is O(1).[5] Alternatively, we can charge the cost of copying any item from the input array to the output array to the earlier enqueue operation for that item. This charging scheme doubles the amortized time for enqueue but reduces the amortized time for dequeue to O(1).
https://en.wikipedia.org/wiki/Amortized_analysis
TheList Updateor theList Accessproblem is a simple model used in the study ofcompetitive analysisofonline algorithms. Given a set of items in a list where the cost of accessing an item is proportional to its distance from the head of the list, e.g. alinked List, and a request sequence of accesses, the problem is to come up with a strategy of reordering the list so that the total cost of accesses is minimized. The reordering can be done at any time but incurs a cost. The standard model includes two reordering actions: Performance of algorithms depend on the construction of request sequences by adversaries under variousadversary models An online algorithm for this problem has to reorder the elements and serve requests based only on the knowledge of previously requested items and hence its strategy may not have the optimum cost as compared to an offline algorithm that gets to see the entire request sequence and devise a complete strategy before serving the first request. Along with its original uses, this problem has been suggested to have a strong similarity to problems of improving global context and compressibility following aBurrows–Wheeler transform. Following this transform, files tend to have large regions with locally high frequencies, and compression efficiency is greatly improved by techniques that tend to move frequently-occurring characters toward zero, or the front of the "list". Due to this, methods and variants of Move-to-Front and frequency counts often follow the BWT algorithm to improve compressibility. An adversary is an entity that gets to choose the request sequenceσ{\displaystyle \sigma }for an algorithmALG. Depending on whetherσ{\displaystyle \sigma }can be changed based on the strategy ofALG, adversaries are given various powers, and the performance ofALGis measured against these adversaries. Anoblivious adversaryhas to construct the entire request sequenceσ{\displaystyle \sigma }before runningALG, and pays the optimal offline price,OPT(σ){\displaystyle OPT(\sigma )}which is compared againstALG(σ){\displaystyle ALG(\sigma )} Anadaptive online adversarygets to make the next request based on the previous results of the online algorithm, but pays for the request optimally and online. Anadaptive offline adversarygets to make the next request based on the previous results of the online algorithm, but pays the optimal offline cost. Competitive analysis for many list update problems were carried out without any specific knowledge of the exact nature of the optimum offline algorithm (OPT). There exist algorithm runs in O(n2l(l-1)!) time and O(l!) space wherenis the length of the request sequence andlis the length of the list.[1]The best known optimal offline algorithm dependent on request sequence length runs in O(l^2(l−1)!n) time published by Dr Srikrishnan Divakaran in 2014.[2] Paid transpositions are in general necessary for optimum algorithms. Consider a list (a,b,c) whereais at the head of the list, and a request sequencec,b,c,b. An optimal offline algorithm using only free exchanges would cost 9 (3+3+2+1), whereas an optimal offline algorithm using only paid exchanges would cost 8. So, we cannot get away with just using free transpositions for the optimum offline algorithm. The optimum list update problem was proven to beNP-hardby (Ambühl 2000). An online algorithmALGhas a competitive ratiocif for any input it performs at least as good asctimes worse than OPT. i.e. 
if there exists anα≥0{\displaystyle \alpha \geq 0}such that for all finite length request sequencesσ{\displaystyle \sigma },ALG(σ)−c.OPT(σ)≤α{\displaystyle ALG(\sigma )-c.OPT(\sigma )\leq \alpha }. Online algorithms can either be deterministic or randomized and it turns out that randomization in this case can truly help against oblivious adversaries. Most deterministic algorithms are variants of these three algorithms : Observe that all these use just free transpositions. It turns out that both TRANS and FC are not competitive. In a classic result usingPotential methodanalysis (Sleator & Tarjan 1985) proved that MTF is 2-competitive. The proof does not require the explicit knowledge of OPT but instead counts the number of inversions i.e. elements occurring in opposite order in the lists of MTF and OPT. Any deterministic algorithm has a lower bound of2−2l+1{\displaystyle 2-{\frac {2}{l+1}}}for a list of lengthl, and MTF is actually the optimum deterministic list update algorithm. The type of adversary doesn't matter in the case of deterministic algorithms, because the adversary can run a copy of the deterministic algorithm on their own to precompute the most disastrous sequence. Consider the following simple randomized algorithm : This algorithm is barely random - it makes all its random choices in the beginning and not during the run. It turns out that BIT breaks the deterministic bound - it isbetterthan MTF against oblivious adversaries. It is 7/4-competitive. There are other randomized algorithms, like COMB, that perform better than BIT. Boris Teia proved a lower bound of 1.5 for any randomized list update algorithm.[3] The list update problem where elements maybe inserted and deleted is called the dynamic list update problem, as opposed to the static list update problem where only accessing list elements are allowed. The upper bound of2−2l+1{\displaystyle 2-{\frac {2}{l+1}}}holds for the dynamic model as well. There are different cost models as well. In the usual full cost model, an access to an element located at a positionicostsi, but the last comparison is inevitable for any algorithm, i.e. there arei-1elements standing in the way ofi. In the partial cost model, these final comparison costs totaling to the number of elements in the request sequence are ignored. For the costs of paid transpositions other than unity,Pdmodels are used.
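A small sketch of the Move to Front (MTF) rule under the full cost model described above: accessing the item at position i costs i, and the accessed item is then moved to the front using only free exchanges. The example request sequence is the one used above for the list (a, b, c).

def mtf_cost(initial, requests):
    """Serve `requests` with Move-To-Front, charging i for an access at position i
    (full cost model); only free exchanges are used. Returns the total access cost."""
    lst = list(initial)
    total = 0
    for item in requests:
        pos = lst.index(item)        # 0-based position
        total += pos + 1             # access cost
        lst.insert(0, lst.pop(pos))  # move the requested item to the front
    return total

print(mtf_cost("abc", "cbcb"))   # 10, versus the optimal offline costs of 9 (free) and 8 (paid) above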
https://en.wikipedia.org/wiki/List_update_problem
Dynamic problems in computational complexity theory are problems stated in terms of changing input data. In its most general form, a problem in this category is usually stated as follows: Problems in this class have the following measures of complexity: The overall set of computations for a dynamic problem is called a dynamic algorithm. Many algorithmic problems stated in terms of fixed input data (called static problems in this context and solved by static algorithms) have meaningful dynamic versions. Incremental algorithms, or online algorithms, are algorithms in which only additions of elements are allowed, possibly starting from empty/trivial input data. Decremental algorithms are algorithms in which only deletions of elements are allowed, starting with the initialization of a full data structure. If both additions and deletions are allowed, the algorithm is sometimes called fully dynamic. The problem may be solved in O(N) time. A well-known solution for this problem is using a self-balancing binary search tree. It takes space O(N), may be initially constructed in time O(N log N), and provides insertion, deletion, and query times in O(log N). Given a graph, maintain its parameters, such as connectivity, maximal degree, shortest paths, etc., when insertion and deletion of its edges are allowed.[1]
https://en.wikipedia.org/wiki/Dynamic_algorithm
In the theory ofonline algorithmsandoptimal stopping, aprophet inequalityis a bound on theexpected valueof a decision-making process that handles a sequence of random inputs from knownprobability distributions, relative to the expected value that could be achieved by a "prophet" who knows all the inputs (and not just their distributions) ahead of time.[1][2]These inequalities have applications in the theory ofalgorithmic mechanism designandmathematical finance.[3] The classical single-item prophet inequality was published byKrengel & Sucheston (1978), crediting its tight form to D. J. H. (Ben) Garling. It concerns a process in which a sequence of random variablesXi{\displaystyle X_{i}}arrive from known distributionsDi{\displaystyle {\mathcal {D}}_{i}}. When eachXi{\displaystyle X_{i}}arrives, the decision-making process must decide whether to accept it and stop the process, or whether to reject it and go on to the next variable in the sequence. The value of the process is the single accepted variable, if there is one, or zero otherwise. It may be assumed that all variables are non-negative; otherwise, replacing negative values by zero does not change the outcome. This can model, for instance, financial situations in which the variables are offers to buy some indivisible good at a certain price, and the seller must decide which (if any) offer to accept. A prophet, knowing the whole sequence of variables, can obviously select the largest of them, achieving valuemaxiXi{\textstyle \max _{i}X_{i}}for any specific instance of this process, and expected valueE[maxiXi]{\textstyle \mathbb {E} [\max _{i}X_{i}]}.The prophet inequality states the existence of an online algorithm for this process whose expected value is at least half that of the prophet:12E[maxiXi]{\textstyle {\tfrac {1}{2}}\mathbb {E} [\max _{i}X_{i}]}.No algorithm can achieve a greater expected value for all distributions ofinputs.[3][4] One method for proving the single-item prophet inequality is to use a "threshold algorithm" that sets a parameterτ{\displaystyle \tau }and then accepts the first random variable that is at least as largeasτ{\displaystyle \tau }.If the probability that this process accepts an item isp{\displaystyle p}, then its expected value ispτ{\displaystyle p\tau }plus the expected excess overτ{\displaystyle \tau }that the selected variable (if there is one) has. 
Each variableXi{\displaystyle X_{i}}will be considered by the threshold algorithm with probability at least1−p{\displaystyle 1-p},and if it is considered will contributemax(Xi−τ,0){\textstyle \max(X_{i}-\tau ,0)}to the excess, so bylinearity of expectationthe expected excess is at leastE[∑i(1−p)max(Xi−τ,0)]≥(1−p)(E[maxiXi]−τ).{\displaystyle \mathbb {E} {\Bigl [}\sum _{i}(1-p)\max(X_{i}-\tau ,0){\Bigr ]}\geq (1-p){\bigl (}\mathbb {E} [\max _{i}X_{i}]-\tau ).}Settingτ{\displaystyle \tau }to the median of the distribution ofmaxiXi{\textstyle \max _{i}X_{i}},so thatp=12{\displaystyle p={\tfrac {1}{2}}},and addingpτ{\displaystyle p\tau }to this bound on expected excess, causes thepτ{\displaystyle p\tau }and(1−p)(−τ){\displaystyle (1-p)(-\tau )}terms to cancel each other, showing that for this setting ofτ{\displaystyle \tau }the threshold algorithm achieves an expected value of at least12E[maxiXi]{\textstyle {\tfrac {1}{2}}\mathbb {E} [\max _{i}X_{i}]}.[3][5]A different threshold,τ=12E[maxiXi]{\textstyle \tau ={\tfrac {1}{2}}\mathbb {E} [\max _{i}X_{i}]},also achieves at least this same expected value.[3][6] Various generalizations of the single-item prophet inequality to other online scenarios are known, and are also called prophet inequalities.[3] Prophet inequalities are related to thecompetitive analysis of online algorithms, but differ in two ways. First, much of competitive analysis assumesworst caseinputs, chosen to maximize the ratio between the computed value and the optimal value that could have been achieved with knowledge of the future, whereas for prophet inequalities some knowledge of the input, its distribution, is assumed to be known. And second, in order to achieve a certaincompetitive ratio, an online algorithm must perform within that ratio of the optimal performance on all inputs. Instead, a prophet inequality only bounds the performance in expectation, allowing some input sequences to produce worse performance as long as the average is good.[3]
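A Monte Carlo sketch of the single-item threshold rule with τ = ½·E[max_i X_i], the second threshold choice mentioned above. The input distributions are arbitrary illustrative choices; the printed ratio should come out at least roughly one half, in line with the prophet inequality.

import random

def simulate(distributions, trials=100_000):
    """Compare the prophet's expected value E[max] with the threshold rule tau = E[max]/2."""
    # Estimate E[max_i X_i] from independent samples.
    e_max = sum(max(d() for d in distributions) for _ in range(trials)) / trials
    tau = e_max / 2
    online = 0.0
    for _ in range(trials):
        xs = [d() for d in distributions]
        online += next((x for x in xs if x >= tau), 0.0)  # accept the first value at least tau, else 0
    return e_max, online / trials

# Illustrative input distributions (uniform and exponential "offers").
dists = [lambda: random.uniform(0, 1),
         lambda: random.uniform(0, 2),
         lambda: random.expovariate(1.0)]
prophet, online_value = simulate(dists)
print(f"prophet: {prophet:.3f}  threshold rule: {online_value:.3f}  ratio: {online_value / prophet:.2f}")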
https://en.wikipedia.org/wiki/Prophet_inequality
Incomputer science,streaming algorithmsare algorithms for processingdata streamsin which the input is presented as asequenceof items and can be examined in only a few passes, typicallyjust one. These algorithms are designed to operate with limited memory, generallylogarithmicin the size of the stream and/or in the maximum value in the stream, and may also have limited processing time per item. As a result of these constraints, streaming algorithms often produce approximate answers based on a summary or "sketch" of the data stream. Though streaming algorithms had already been studied by Munro and Paterson[1]as early as 1978, as well asPhilippe Flajoletand G. Nigel Martin in 1982/83,[2]the field of streaming algorithms was first formalized and popularized in a 1996 paper byNoga Alon,Yossi Matias, andMario Szegedy.[3]For this paper, the authors later won theGödel Prizein 2005 "for their foundational contribution to streaming algorithms." There has since been a large body of work centered around data streaming algorithms that spans a diverse spectrum of computer science fields such as theory, databases, networking, and natural language processing. Semi-streaming algorithmswere introduced in 2005 as a relaxation of streaming algorithms for graphs,[4]in which the space allowed is linear in the number of verticesn, but only logarithmic in the number of edgesm. This relaxation is still meaningful for dense graphs, and can solve interesting problems (such as connectivity) that are insoluble ino(n){\displaystyle o(n)}space. In the data stream model, some or all of the input is represented as a finite sequence of integers (from some finite domain) which is generally not available forrandom access, but instead arrives one at a time in a "stream".[5]If the stream has lengthnand the domain has sizem, algorithms are generally constrained to use space that islogarithmicinmandn. They can generally make only some small constant number of passes over the stream, sometimes justone.[6] Much of the streaming literature is concerned with computing statistics on frequency distributions that are too large to be stored. For this class of problems, there is a vectora=(a1,…,an){\displaystyle \mathbf {a} =(a_{1},\dots ,a_{n})}(initialized to the zero vector0{\displaystyle \mathbf {0} }) that has updates presented to it in a stream. The goal of these algorithms is to compute functions ofa{\displaystyle \mathbf {a} }using considerably less space than it would take to representa{\displaystyle \mathbf {a} }precisely. There are two common models for updating such streams, called the "cash register" and "turnstile" models.[7] In the cash register model, each update is of the form⟨i,c⟩{\displaystyle \langle i,c\rangle }, so thatai{\displaystyle a_{i}}is incremented by some positive integerc{\displaystyle c}. A notable special case is whenc=1{\displaystyle c=1}(only unit insertions are permitted). In the turnstile model, each update is of the form⟨i,c⟩{\displaystyle \langle i,c\rangle }, so thatai{\displaystyle a_{i}}is incremented by some (possibly negative) integerc{\displaystyle c}. In the "strict turnstile" model, noai{\displaystyle a_{i}}at any time may be less than zero. Several papers also consider the "sliding window" model.[citation needed]In this model, the function of interest is computing over a fixed-size window in the stream. As the stream progresses, items from the end of the window are removed from consideration while new items from the stream take their place. 
Besides the above frequency-based problems, some other types of problems have also been studied. Many graph problems are solved in the setting where theadjacency matrixor theadjacency listof the graph is streamed in some unknown order. There are also some problems that are very dependent on the order of the stream (i.e., asymmetric functions), such as counting the number of inversions in a stream and finding the longest increasing subsequence.[citation needed] The performance of an algorithm that operates on data streams is measured by three basic factors: These algorithms have many similarities withonline algorithmssince they both require decisions to be made before all data are available, but they are not identical. Data stream algorithms only have limited memory available but they may be able to defer action until a group of points arrive, while online algorithms are required to take action as soon as each point arrives. If the algorithm is an approximation algorithm then the accuracy of the answer is another key factor. The accuracy is often stated as an(ϵ,δ){\displaystyle (\epsilon ,\delta )}approximation meaning that the algorithm achieves an error of less thanϵ{\displaystyle \epsilon }with probability1−δ{\displaystyle 1-\delta }. Streaming algorithms have several applications innetworkingsuch as monitoring network links forelephant flows, counting the number of distinct flows, estimating the distribution of flow sizes, and so on.[8]They also have applications in databases, such as estimating the size of ajoin[citation needed]. Thekth frequency moment of a set of frequenciesa{\displaystyle \mathbf {a} }is defined asFk(a)=∑i=1naik{\displaystyle F_{k}(\mathbf {a} )=\sum _{i=1}^{n}a_{i}^{k}}. The first momentF1{\displaystyle F_{1}}is simply the sum of the frequencies (i.e., the total count). The second momentF2{\displaystyle F_{2}}is useful for computing statistical properties of the data, such as theGini coefficientof variation.F∞{\displaystyle F_{\infty }}is defined as the frequency of the most frequent items. The seminal paper of Alon, Matias, and Szegedy dealt with the problem of estimating the frequency moments.[citation needed] A direct approach to find the frequency moments requires to maintain a registermifor all distinct elementsai∈ (1,2,3,4,...,N)which requires at least memory of orderΩ(N){\displaystyle \Omega (N)}.[3]But we have space limitations and require an algorithm that computes in much lower memory. This can be achieved by using approximations instead of exact values. An algorithm that computes an (ε,δ)approximation ofFk, whereF'kis the (ε,δ)- approximated value ofFk.[9]Whereεis the approximation parameter andδis the confidence parameter.[10] Flajolet et al. in[2]introduced probabilistic method of counting which was inspired from a paper byRobert Morris.[11]Morris in his paper says that if the requirement of accuracy is dropped, a counterncan be replaced by a counterlognwhich can be stored inlog lognbits.[12]Flajolet et al. in[2]improved this method by using a hash functionhwhich is assumed to uniformly distribute the element in the hash space (a binary string of lengthL). Letbit(y,k)represent the kth bit in binary representation ofy Letρ(y){\displaystyle \rho (y)}represents the position of least significant 1-bit in the binary representation ofyiwith a suitable convention forρ(0){\displaystyle \rho (0)}. LetAbe the sequence of data stream of lengthMwhose cardinality need to be determined. LetBITMAP[0...L− 1] be the hash space where theρ(hashedvalues) are recorded. 
The algorithm then estimates the cardinality of A as follows: BITMAP is initialized to all zeros, and for each element x in the stream the bit BITMAP[ρ(h(x))] is set to 1. If there are N distinct elements in the data stream, then BITMAP[i] is almost certainly 1 for i much smaller than log_2 N and almost certainly 0 for i much larger than log_2 N, so the position R of the leftmost 0 in BITMAP estimates log_2(φN), where φ ≈ 0.77351 is a correction factor; the cardinality is then estimated as 2^R / φ. This algorithm, by Flajolet and Martin, was the first attempt to approximate F_0 in the data stream setting. It picks a random hash function which is assumed to distribute the hash values uniformly in the hash space. (A short illustrative sketch of this procedure is given below, after the discussion of frequency moments.)

Bar-Yossef et al.[10] introduced the k-minimum value (KMV) algorithm for determining the number of distinct elements in a data stream. They used a similar hash function h, which can be normalized to [0, 1] as h: [m] → [0, 1], but fixed a limit t on the number of values kept from the hash space. The value of t is assumed to be of the order O(1/ε²) (i.e., a smaller approximation error ε requires a larger t). The KMV algorithm keeps only the t smallest hash values seen. After all m values of the stream have arrived, υ, the largest of the kept hash values (that is, the t-th smallest overall), is used to calculate F'_0 = t/υ. The intuition is that in a close-to-uniform hash space, about t of the F_0 distinct hash values are expected to fall below t/F_0.

The KMV algorithm can be implemented in O((1/ε²) · log(m)) bits of memory: each hash value requires space of order O(log(m)) bits, and there are O(1/ε²) of them. The access time can be reduced by storing the t hash values in a binary tree, which reduces the time complexity per update to O(log(1/ε) · log(m)).

Alon et al. estimate F_k by defining random variables that can be computed within the given space and time;[3] the expected value of such a random variable gives the approximate value of F_k. Assume the length of the sequence, m, is known in advance. Then construct a random variable X as follows: choose a position p in the stream uniformly at random, let l = a_p be the element at that position, and let r = |{q : q ≥ p and a_q = l}| be the number of occurrences of l from position p onward; define X = m(r^k − (r − 1)^k).

Assume S_1 is of the order O(n^{1−1/k}/λ²) and S_2 is of the order O(log(1/ε)). The algorithm takes S_2 random variables Y_1, Y_2, ..., Y_{S_2} and outputs their median Y, where Y_i is the average of X_{ij} for 1 ≤ j ≤ S_1. Calculating the expectation of the random variable shows that E(X) = F_k; averaging the S_1 copies reduces the variance and taking the median of the S_2 averages boosts the confidence.

From the algorithm to calculate F_k discussed above, we can see that each random variable X stores the value of a_p and of r. So, to compute X we need to maintain only log(n) bits for storing a_p and log(m) bits for storing r. The total number of random variables X is S_1 · S_2. Hence the total space complexity of the algorithm is of the order O((k log(1/ε) / λ²) · n^{1−1/k} · (log n + log m)).

The previous algorithm calculates F_2 in O(√n (log m + log n)) bits of memory. Alon et al.[3] simplified this algorithm using a four-wise independent random variable with values mapped to {−1, 1}, which further reduces the space needed to calculate F_2 to O((log(1/ε)/λ²) (log n + log m)).
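The Flajolet–Martin bitmap procedure described above can be sketched in a few lines of Python. This is only a minimal, single-sketch illustration: the hash function here is derived from SHA-1 rather than the explicitly analyzed hash family of the paper, the constants are the standard ones, and in practice many independent sketches are averaged to reduce the variance.

```python
import hashlib

L = 32  # length of the hash bit strings / size of BITMAP

def h(x):
    """Hash an item to an L-bit integer (stand-in for the paper's uniform hash)."""
    digest = hashlib.sha1(str(x).encode()).digest()
    return int.from_bytes(digest[:4], "big")

def rho(y):
    """Position of the least significant 1-bit of y (rho(0) = L by convention)."""
    if y == 0:
        return L
    pos = 0
    while y & 1 == 0:
        y >>= 1
        pos += 1
    return pos

def fm_estimate(stream):
    """Single Flajolet-Martin sketch: estimate the number of distinct elements."""
    bitmap = [0] * L
    for x in stream:
        bitmap[rho(h(x))] = 1
    # R = position of the leftmost 0 in BITMAP
    R = next((i for i, b in enumerate(bitmap) if b == 0), L)
    return (2 ** R) / 0.77351   # phi ~ 0.77351 is the FM correction factor

# Example: a stream with 1000 distinct values, each appearing 3 times.
stream = [i % 1000 for i in range(3000)]
print(fm_estimate(stream))      # a rough estimate of 1000; averaging several
                                # independent sketches reduces the variance
```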
In the data stream model, the frequent elements problem is to output a set of elements that constitute more than some fixed fraction of the stream. A special case is the majority problem, which is to determine whether or not any value constitutes a majority of the stream. More formally, fix some positive constant c > 1, let the length of the stream be m, and let f_i denote the frequency of value i in the stream. The frequent elements problem is to output the set {i | f_i > m/c}.[13] Some notable algorithms are the Boyer–Moore majority vote algorithm, the Misra–Gries summary, lossy counting, the count–min sketch, and the count sketch. (A small sketch of one such counter-based summary appears below.)

Detecting events in data streams is often done using a heavy hitters algorithm as listed above: the most frequent items and their frequency are determined using one of these algorithms, then the largest increase over the previous time point is reported as a trend. This approach can be refined by using exponentially weighted moving averages and variance for normalization.[14]

Counting the number of distinct elements in a stream (sometimes called the F_0 moment) is another problem that has been well studied. The first algorithm for it was proposed by Flajolet and Martin. In 2010, Daniel Kane, Jelani Nelson and David Woodruff found an asymptotically optimal algorithm for this problem.[15] It uses O(1/ε² + log d) space, with O(1) worst-case update and reporting times, as well as universal hash functions and an r-wise independent hash family where r = Ω(log(1/ε) / log log(1/ε)).

The (empirical) entropy of a set of frequencies a is defined as H(a) = −Σ_{i=1}^{n} (a_i/m) log(a_i/m), where m = Σ_{i=1}^{n} a_i. Estimating it in small space is another well-studied streaming problem.

Streaming techniques can also be used to learn a model (e.g. a classifier) by a single pass over a training set.

Lower bounds have been computed for many of the data streaming problems that have been studied. By far, the most common technique for computing these lower bounds has been using communication complexity.[citation needed]
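One counter-based solution to the frequent elements problem is the Misra–Gries summary mentioned above. The following Python sketch is illustrative only; the parameter k plays the role of the constant c in the definition above, and the guarantee is that every item whose true frequency exceeds m/k survives among the k − 1 counters.

```python
def misra_gries(stream, k):
    """One-pass Misra-Gries summary with k-1 counters.

    Returns a dict of candidate items; every item whose true frequency exceeds
    len(stream)/k is guaranteed to appear among the candidates. The reported
    counts are underestimates, so a second pass is usually used to verify the
    exact frequencies of the candidates.
    """
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < k - 1:
            counters[x] = 1
        else:
            # Decrement every counter; drop the ones that reach zero.
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters

# Example: "a" makes up more than 1/3 of the stream, so it must survive with k = 3.
stream = ["a", "b", "a", "c", "a", "b", "a", "d", "a"]
print(misra_gries(stream, k=3))   # {'a': 3}
```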
https://en.wikipedia.org/wiki/Streaming_algorithm
In computer science, a sequential algorithm or serial algorithm is an algorithm that is executed sequentially – once through, from start to finish, without other processing executing – as opposed to concurrently or in parallel. The term is primarily used to contrast with concurrent algorithm or parallel algorithm; most standard computer algorithms are sequential algorithms, and not specifically identified as such, as sequentialness is a background assumption. Concurrency and parallelism are in general distinct concepts, but they often overlap – many distributed algorithms are both concurrent and parallel – and thus "sequential" is used to contrast with both, without distinguishing which one. If these need to be distinguished, the opposing pairs sequential/concurrent and serial/parallel may be used. "Sequential algorithm" may also refer specifically to an algorithm for decoding a convolutional code.[1]
https://en.wikipedia.org/wiki/Sequential_algorithm
Offline learning is a machine learning training approach in which a model is trained on a fixed dataset that is not updated during the learning process.[1] This dataset is collected beforehand, and the learning typically occurs in batch mode (i.e., the model is updated using batches of data, rather than a single input-output pair at a time). Once the model is trained, it can make predictions on new, unseen data. In online learning, only the set of possible elements is known, whereas in offline learning, the learner also knows the order in which they are presented.[2]
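The contrast between offline (batch) training and online (incremental) training can be illustrated with a deliberately tiny sketch. The function names are illustrative, and the "model" here is just a running mean, chosen only to keep the example self-contained; the point is that the offline learner sees the whole fixed dataset up front, while the online learner updates after every example.

```python
def train_offline(dataset):
    """Offline (batch) learning: the full, fixed dataset is available up front."""
    model = sum(dataset) / len(dataset)      # here the "model" is just a mean
    return model

def train_online(stream):
    """Online learning: the model is updated one example at a time."""
    model, n = 0.0, 0
    for x in stream:
        n += 1
        model += (x - model) / n             # incremental mean update
        # after each example the current model could already be used for prediction
    return model

data = [2.0, 4.0, 6.0, 8.0]
assert abs(train_offline(data) - train_online(iter(data))) < 1e-9
```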
https://en.wikipedia.org/wiki/Offline_learning
Incomputer security,heap sprayingis a technique used inexploitsto facilitatearbitrary code execution. The part of thesource codeof an exploit that implements this technique is called aheap spray.[1]In general, code thatsprays the heapattempts to put a certain sequence of bytes at a predetermined location in thememoryof a targetprocessby having it allocate (large) blocks on the process'sheapand fill the bytes in these blocks with the right values. A heap spray does not actually exploit any security issues but it can be used to make a vulnerability easier to exploit. A heap spray by itself cannot be used to break any security boundaries: a separate security issue is needed. Exploiting security issues is often hard because various factors can influence this process. Chance alignments of memory and timing introduce a lot of randomness (from the attacker's point of view). A heap spray can be used to introduce a large amount of order to compensate for this and increase the chances of successful exploitation. Heap sprays take advantage of the fact that on most architectures and operating systems, the start location of large heap allocations is predictable and consecutive allocations are roughly sequential. This means that the sprayed heap will roughly be in the same location each and every time the heap spray is run. Exploits often use specific bytes to spray the heap, as the data stored on the heap serves multiple roles. During exploitation of a security issue, the application code can often be made to read an address from an arbitrary location in memory. This address is then used by the code as the address of a function to execute. If the exploit can force the application to read this address from the sprayed heap, it can control the flow of execution when the code uses that address as a function pointer and redirects it to the sprayed heap. If the exploit succeeds in redirecting control flow to the sprayed heap, the bytes there will be executed, allowing the exploit to perform whatever actions the attacker wants. Therefore, the bytes on the heap are restricted to represent valid addresses within the heap spray itself, holding valid instructions for the target architecture, so the application will not crash. It is therefore common to spray with a single byte that translates to both a valid address and aNOPor NOP-like instruction on the target architecture. This allows the heap spray to function as a very largeNOP sled(for example,0x0c0c0c0cis often used as non-canonical NOP onx86[2]) Heap sprays have been used occasionally in exploits since at least 2001,[3][4]but the technique started to see widespread use in exploits forweb browsersin the summer of 2005 after the release of several such exploits which used the technique against a wide range of bugs inInternet Explorer.[5][6][7][8][9]The heap sprays used in all these exploits were very similar, which showed the versatility of the technique and its ease of use, without need for major modifications between exploits. It proved simple enough to understand and use to allow novicehackersto quickly write reliable exploits for many types ofvulnerabilitiesin web browsers and web browserplug-ins. Many web browser exploits that use heap spraying consist only of a heap spray that iscopy-pastedfrom a previous exploit combined with a small piece of script orHTMLthat triggers the vulnerability. Heap sprays for web browsers are commonly implemented inJavaScriptand spray the heap by creating largestrings. 
The most common technique used is to start with a string of one character andconcatenateit with itself over and over. This way, the length of the string cangrow exponentiallyup to the maximum length allowed by thescripting engine. Depending on how the browser implements strings, eitherASCIIorUnicodecharacters can be used in the string. The heap spraying code makes copies of the long string withshellcodeand stores these in an array, up to the point where enough memory has been sprayed to ensure the exploit works. Occasionally,VBScriptis used in Internet Explorer to create strings by using theStringfunction. In July 2009, exploits were found to be usingActionScriptto spray the heap inAdobe Flash.[10][11] Though it has been proven that heap-spraying can be done through other means, for instance by loading image files into the process,[12]this has not seen widespread use (as of August 2008).[needs update] In September 2012, a new technique was presented on EuSecWest 2012.[13]Two CORE researchers, Federico Muttis andAnibal Sacco, showed that the heap can be sprayed with a very high allocation granularity through the use of technologies introduced withHTML5. Specifically, they used the low-level bitmap interface offered by thecanvas API, andweb workersto do it more quickly.
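The exponential string-growth pattern described above can be shown with a few lines of Python. This only demonstrates the doubling pattern itself; the filler byte and target size are arbitrary placeholders, and nothing here constitutes an exploit.

```python
def build_block(filler="\x0c", target_size=1 << 20):
    """Grow a string by repeated self-concatenation until it reaches target_size.

    Starting from a single character, the length doubles on every step, so a
    block of about a megabyte is reached in roughly 20 concatenations rather
    than a million single-character appends.
    """
    block = filler
    steps = 0
    while len(block) < target_size:
        block += block          # length doubles each iteration
        steps += 1
    return block[:target_size], steps

block, steps = build_block()
print(len(block), steps)        # 1048576 bytes after 20 doublings
```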
https://en.wikipedia.org/wiki/Heap_spraying
Memory protectionis a way to control memory access rights on a computer, and is a part of most moderninstruction set architecturesandoperating systems. The main purpose of memory protection is to prevent aprocessfrom accessing memory that has not been allocated to it. This prevents a bug ormalwarewithin a process from affecting other processes, or the operating system itself. Protection may encompass all accesses to a specified area of memory, write accesses, or attempts to execute the contents of the area. An attempt to access unauthorized[a]memory results in a hardwarefault, e.g., asegmentation fault,storage violationexception, generally causingabnormal terminationof the offending process. Memory protection forcomputer securityincludes additional techniques such asaddress space layout randomizationandexecutable-space protection. Segmentationrefers to dividing a computer's memory into segments. A reference to a memory location includes a value that identifies a segment and an offset within that segment. A segment descriptor may limit access rights, e.g., read only, only from certainrings. Thex86architecture has multiple segmentation features, which are helpful for using protected memory on this architecture.[1]On the x86 architecture, theGlobal Descriptor TableandLocal Descriptor Tablescan be used to reference segments in the computer's memory. Pointers to memory segments on x86 processors can also be stored in the processor's segment registers. Initially x86 processors had 4 segment registers, CS (code segment), SS (stack segment), DS (data segment) and ES (extra segment); later another two segment registers were added – FS and GS.[1] In paging the memory address space or segment is divided into equal-sized blocks[b]calledpages. Usingvirtual memoryhardware, each page can reside in any location at a suitable boundary of the computer's physical memory, or be flagged as being protected. Virtual memory makes it possible to have a linearvirtual memory address spaceand to use it to access blocks fragmented overphysical memoryaddress space. Mostcomputer architectureswhich support paging also use pages as the basis for memory protection. Apage tablemaps virtual memory to physical memory. There may be a single page table, a page table for each process, a page table for each segment, or a hierarchy of page tables, depending on the architecture and the OS. The page tables are usually invisible to the process. Page tables make it easier to allocate additional memory, as each new page can be allocated from anywhere in physical memory. On some systems a page table entry can also designate a page as read-only. Some operating systems set up a different address space for each process, which provides hard memory protection boundaries.[2]It is impossible for an unprivileged[c]application to access a page that has not been explicitly allocated to it, because every memory address either points to a page allocated to that application, or generates aninterruptcalled apage fault. Unallocated pages, and pages allocated to any other application, do not have any addresses from the application point of view. A page fault may not necessarily indicate an error. Page faults are not only used for memory protection. The operating system may manage the page table in such a way that a reference to a page that has been previouslypaged outto secondary storage[d]causes a page fault. The operating system intercepts the page fault, loads the required memory page, and the application continues as if no fault had occurred. 
This scheme, a type ofvirtual memory, allows in-memory data not currently in use to be moved to secondary storage and back in a way which is transparent to applications, to increase overall memory capacity. On some systems, a request for virtual storage may allocate a block of virtual addresses for which no page frames have been assigned, and the system will only assign and initialize page frames when page faults occur. On some systems aguard pagemay be used, either for error detection or to automatically grow data structures. On some systems, the page fault mechanism is also used forexecutable space protectionsuch asW^X. A memory protection key (MPK)[3]mechanism divides physical memory into blocks of a particular size (e.g., 4 KiB), each of which has an associated numerical value called a protection key. Each process also has a protection key value associated with it. On a memory access the hardware checks that the current process's protection key matches the value associated with the memory block being accessed; if not, an exception occurs. This mechanism was introduced in theSystem/360architecture. It is available on today'sSystem zmainframes and heavily used bySystem zoperating systems and their subsystems. The System/360 protection keys described above are associated with physical addresses. This is different from the protection key mechanism used by architectures such as theHewlett-Packard/IntelIA-64and Hewlett-PackardPA-RISC, which are associated with virtual addresses, and which allow multiple keys per process. In the Itanium and PA-RISC architectures, translations (TLBentries) havekeys(Itanium) oraccess ids(PA-RISC) associated with them. A running process has several protection key registers (16 for Itanium,[4]4 for PA-RISC[5]). A translation selected by the virtual address has its key compared to each of the protection key registers. If any of them match (plus other possible checks), the access is permitted. If none match, a fault or exception is generated. The software fault handler can, if desired, check the missing key against a larger list of keys maintained by software; thus, the protection key registers inside the processor may be treated as a software-managed cache of a larger list of keys associated with a process. PA-RISC has 15–18 bits of key; Itanium mandates at least 18. Keys are usually associated withprotection domains, such as libraries, modules, etc. In the x86, the protection keys[6]architecture allows tagging virtual addresses for user pages with any of 16 protection keys. All the pages tagged with the same protection key constitute a protection domain. A new register contains the permissions associated with each of the protection domain. Load and store operations are checked against both the page table permissions and the protection key permissions associated with the protection domain of the virtual address, and only allowed if both permissions allow the access. The protection key permissions can be set from user space, allowing applications to directly restrict access to the application data without OS intervention. Since the protection keys are associated with a virtual address, the protection domains are per address space, so processes running in different address spaces can each use all 16 domains. InMulticsand systems derived from it, each segment has aprotection ringfor reading, writing and execution; an attempt by a process with a higher ring number than the ring number for the segment causes a fault. 
There is a mechanism for safely calling procedures that run in a lower ring and returning to the higher ring. There are mechanisms for a routine running with a low ring number to access a parameter with the larger of its own ring and the caller's ring. Simulationis the use of amonitoringprogramto interpret the machine code instructions of some computer architectures. Such aninstruction set simulatorcan provide memory protection by using a segmentation-like scheme and validating the target address and length of each instruction in real time before actually executing them. The simulator must calculate the target address and length and compare this against a list of valid address ranges that it holds concerning thethread'senvironment, such as any dynamicmemoryblocks acquired since the thread's inception, plus any valid shared static memory slots. The meaning of "valid" may change throughout the thread's life depending upon context. It may sometimes be allowed to alter a static block of storage, and sometimes not, depending upon the current mode of execution, which may or may not depend on a storage key or supervisor state.[citation needed] It is generally not advisable to use this method of memory protection where adequate facilities exist on a CPU, as this takes valuable processing power from the computer. However, it is generally used for debugging and testing purposes to provide an extra fine level of granularity to otherwise genericstorage violationsand can indicate precisely which instruction is attempting to overwrite the particular section of storage which may have the same storage key as unprotected storage. Capability-based addressingis a method of memory protection that is unused in modern commercial computers. In this method,pointersare replaced by protected objects (calledcapabilities) that can only be created usingprivilegedinstructions which may only be executed by the kernel, or some other process authorized to do so.[citation needed]This effectively lets the kernel control which processes may access which objects in memory, with no need to use separate address spaces orcontext switches. Only a few commercial products used capability based security:Plessey System 250,IBM System/38,Intel iAPX 432architectureandKeyKOS. Capability approaches are widely used in research systems such asEROSand Combex DARPA browser. They are used conceptually as the basis for somevirtual machines, most notablySmalltalkandJava. Currently, the DARPA-fundedCHERIproject at University of Cambridge is working to create a modern capability machine that also supports legacy software. Dynamic tainting is a technique for protecting programs from illegal memory accesses. When memory is allocated, at runtime, this technique taints both the memory and the corresponding pointer using the same taint mark. Taint marks are then suitably propagated while the program executes and are checked every time a memory addressmis accessed through a pointerp; if the taint marks associated withmandpdiffer, the execution is stopped and the illegal access is reported.[7][8] SPARC M7processors (and higher) implement dynamic tainting in hardware. 
Oracle markets this feature as Silicon Secured Memory (SSM) (previously branded as Application Data Integrity (ADI)).[9] The lowRISC CPU design includes dynamic tainting under the name Tagged Memory.[10]

The protection level of a particular implementation may be measured by how closely it adheres to the principle of minimum privilege.[11]

Different operating systems use different forms of memory protection or separation. Although memory protection was common on most mainframes and many minicomputer systems from the 1960s, true memory separation was not used in home computer operating systems until OS/2 (and in RISC OS) was released in 1987. On prior systems, such lack of protection was even used as a form of interprocess communication, by sending a pointer between processes. It is possible for processes to access system memory in the Windows 9x family of operating systems.[12] Operating systems that do implement memory protection include Unix-like systems (such as Linux, macOS, and the BSDs) and the Windows NT family. On Unix-like systems, the mprotect system call is used to control memory protection.[14]
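A minimal sketch of mprotect in action, assuming a Unix-like system where Python's ctypes can locate the C library: one anonymous page is mapped and then made read-only, so that any later write to it triggers the hardware fault described above. The flag constants come from Python's mmap module; error handling is kept to a minimum.

```python
import ctypes, ctypes.util, mmap

# Locate libc and declare the mprotect(2) prototype.
libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
libc.mprotect.argtypes = (ctypes.c_void_p, ctypes.c_size_t, ctypes.c_int)
libc.mprotect.restype = ctypes.c_int

page_size = mmap.PAGESIZE
buf = mmap.mmap(-1, page_size)            # anonymous, page-aligned, read/write mapping
buf[:5] = b"hello"                        # writing is allowed at this point

# Address of the mapping; mmap(2) returns page-aligned memory, as mprotect requires.
addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))

# Make the page read-only. From now on, any write to it triggers a hardware
# fault that the kernel delivers to the process (SIGSEGV on most systems).
if libc.mprotect(addr, page_size, mmap.PROT_READ) != 0:
    raise OSError(ctypes.get_errno(), "mprotect failed")

print(buf[:5])                            # reading still works: b'hello'
# buf[0:1] = b"X"                         # un-commenting this would fault the process
```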
https://en.wikipedia.org/wiki/Memory_protection_keys
Intel Software Guard Extensions(SGX) is a set ofinstruction codesimplementingtrusted execution environmentthat are built into someIntelcentral processing units(CPUs). They allowuser-levelandoperating systemcode to define protected private regions of memory, calledenclaves.[1][2]SGX is designed to be useful for implementing secureremote computation, secureweb browsing, anddigital rights management(DRM).[3]Other applications include concealment ofproprietary algorithmsand ofencryption keys.[4] SGX involvesencryptionby the CPU of a portion of memory (theenclave). Data and code originating in the enclave aredecryptedon the flywithinthe CPU,[4]protecting them from being examined or read by other code,[4]including code running at higherprivilege levelssuch as theoperating systemand any underlyinghypervisors.[1][4][2]While this can mitigate many kinds of attacks, it does not protect againstside-channel attacks.[5] A pivot by Intel in 2021 resulted in the deprecation of SGX from the 11th and 12th generationIntel Coreprocessors, but development continues on Intel Xeon for cloud and enterprise use.[6][7] SGX was first introduced in 2015 with the sixth generationIntel Coremicroprocessors based on theSkylakemicroarchitecture. Support for SGX in the CPU is indicated inCPUID"Structured Extended feature Leaf", EBX bit 02,[8]but its availability to applications requiresBIOS/UEFIsupport and opt-in enabling which is not reflected in CPUID bits. This complicates the feature detection logic for applications.[9] Emulation of SGX was added to an experimental version of theQEMUsystem emulator in 2014.[10]In 2015, researchers at theGeorgia Institute of Technologyreleased an open-source simulator named "OpenSGX".[11] One example of SGX used in security was a demo application fromwolfSSL[12]using it for cryptography algorithms. IntelGoldmont Plus(Gemini Lake) microarchitecture also contains support for Intel SGX.[13] Both in the11thand12thgenerations of Intel Core processors, SGX is listed as "Deprecated" and thereby not supported on "client platform" processors.[6][14][15]This removed support of playingUltra HD Blu-raydiscs on officially licensed software, such asPowerDVD.[16][17][18] On 27 March 2017 researchers at Austria'sGraz University of Technologydeveloped a proof-of-concept that can grabRSAkeys from SGX enclaves running on the same system within five minutes by using certain CPU instructions in lieu of a fine-grained timer to exploitcacheDRAMside-channels.[19][20]One countermeasure for this type of attack was presented and published by Daniel Gruss et al. at theUSENIXSecurity Symposium in 2017.[21]Among other published countermeasures, one countermeasure to this type of attack was published on September 28, 2017, a compiler-based tool, DR.SGX,[22]that claims to have superior performance with the elimination of the implementation complexity of other proposed solutions. 
The LSDS group at Imperial College London showed a proof of concept that theSpectrespeculative execution security vulnerability can be adapted to attack the secure enclave.[23]TheForeshadowattack, disclosed in August 2018, combines speculative execution and buffer overflow to bypass the SGX.[24]A security advisory and mitigation for this attack, also called an L1 Terminal Fault, was originally issued on August 14, 2018 and updated May 11, 2021.[25] On 8 February 2019, researchers at Austria'sGraz University of Technologypublished findings which showed that in some cases it is possible to run malicious code from within the enclave itself.[26]The exploit involves scanning through process memory in order to reconstruct a payload, which can then run code on the system. The paper claims that due to the confidential and protected nature of the enclave, it is impossible forantivirus softwareto detect and remove malware residing within it. Intel issued a statement, stating that this attack was outside the threat model of SGX, that they cannot guarantee that code run by the user comes from trusted sources, and urged consumers to only run trusted code.[27] There is a proliferation ofside-channel attacksplaguing modern computer architectures. Many of these attacks measure slight, nondeterministic variations in the execution of code, so the attacker needs many measurements (possibly tens of thousands) to learn secrets. However, the MicroScope attack allows a malicious OS to replay code an arbitrary number of times regardless of the program's actual structure, enabling dozens of side-channel attacks.[28]In July 2022, Intel submitted a Linux patch called AEX-Notify to allow the SGX enclave programmer to write a handler for these types of events.[29] Security researchers were able to inject timing specific faults into execution within the enclave, resulting in leakage of information. The attack can be executed remotely, but requires access to the privileged control of the processor's voltage and frequency.[30]A security advisory and mitigation for this attack was originally issued on August 14, 2018 and updated on March 20, 2020.[31] Load Value Injection[32][33]injects data into a program aiming to replace the value loaded from memory which is then used for a short time before the mistake is spotted and rolled back, during which LVI controls data and control flow. A security advisory and mitigation for this attack was originally issued on March 10, 2020 and updated on May 11, 2021.[34] SGAxe,[35]an SGX vulnerability published in 2020, extends aspeculative execution attackon cache,[36]leaking content of the enclave. This allows an attacker to access private CPU keys used for remote attestation.[37]In other words, a threat actor can bypass Intel's countermeasures to breach SGX enclaves' confidentiality. TheSGAxe attackis carried out by extracting attestation keys from SGX's private quoting enclave that are signed by Intel. 
The attacker can then masquerade as legitimate Intel machines by signing arbitrary SGX attestation quotes.[38] A security advisory and mitigation for this attack, also called a Processor Data Leakage or Cache Eviction, was originally issued January 27, 2020 and updated May 11, 2021.[39]

In 2022, security researchers discovered a vulnerability in the Advanced Programmable Interrupt Controller (APIC) that allows an attacker with root/admin privileges to gain access to encryption keys via the APIC by inspecting data transfers from the L1 and L2 cache.[40] This vulnerability is the first architectural attack discovered on x86 CPUs; it differs from Spectre and Meltdown, which rely on a noisy side channel. This exploit currently affects Intel Core 10th, 11th and 12th generations, and Xeon Ice Lake microprocessors.[41][42]

The code signature is generated with a private key that exists only inside the enclave. The private key is encoded via "fuse" elements on the chip; in the process, bits are burnt through, giving them the binary value 0. This private key cannot be extracted because it is encoded in the hardware. Mark Ermolov, Maxim Goryachy and Dmitry Sklyarov refuted the claim to trustworthiness of the SGX concept at https://github.com/chip-red-pill/glm-ucode#.

There has been a long debate on whether SGX enables the creation of superior malware. Oxford University researchers published an article in October 2022[43] considering attackers' potential advantages and disadvantages in abusing SGX for malware development. The researchers conclude that while there may be temporary zero-day vulnerabilities in the SGX ecosystem to abuse, the core principles and design features of trusted execution environments (TEEs) make such malware weaker than malware in the wild, and that TEEs otherwise make no major contribution to malware.
https://en.wikipedia.org/wiki/Software_Guard_Extensions
Microsoft Developer Network(MSDN) was the division ofMicrosoftresponsible for managing the firm's relationship with developers and testers, such as hardware developers interested in theoperating system(OS), and software developers developing on the various OS platforms or using the API or scripting languages of Microsoft's applications. The relationship management was situated in assorted media:web sites,newsletters,developer conferences, trade media,blogsandDVDdistribution.[citation needed] Starting in January 2020, the website was fully integrated withMicrosoft Docs(itself integrated intoMicrosoft Learnin 2022).[1] MSDN's primary web presence atmsdn.microsoft.comwas a collection of sites for the developer community that provided information, documentation, and discussion that was authored both by Microsoft and by the community at large. Microsoft later began placing emphasis on incorporation of forums, blogs, library annotations and social bookmarking to make MSDN an open dialog with the developer community rather than a one-way service.[2]The main website, and most of its constituent applications below were available in 56[3]or more languages. MSDN Librarywas a library of official technical documentation intended for independentdevelopersof software forMicrosoft Windows. MSDN Library documented theAPIsthat ship with Microsoft products and also included sample code, technical articles, and other programming information. The library was freely available on the web, withCDsandDVDsof the most recent materials initially issued quarterly as part of an MSDN subscription. However, beginning in 2006, they were available to be freely downloaded from Microsoft Download Center in the form ofISO images.[4][5][6] Visual Studio Expressedition integrated only with MSDN Express Library, which was a subset of the full MSDN Library, although either edition of the MSDN Library could be freely downloaded and installed standalone. InVisual Studio 2010MSDN Library was replaced with the new Help System, which was installed as a part of Visual Studio 2010 installation. Help Library Manager was used to install Help Content books covering selected topics. In 2016, Microsoft introduced the new technical documentation platform, Microsoft Docs, intended as a replacement of the TechNet and MSDN libraries.[7][8]Over the next two years, the content of the MSDN Library was gradually migrated into Microsoft Docs.[9]In 2022, Microsoft Docs was itself incorporated intoMicrosoft Learn. MSDN Library pages now redirect to the corresponding Microsoft Learn pages.[citation needed] Each edition of MSDN Library could only be accessed with one help viewer (Microsoft Document Explorer or other help viewer), which was integrated with thethen currentsingle version or sometimes two versions of Visual Studio. In addition, each new version of Visual Studio did not integrate with an earlier version of MSDN. A compatible MSDN Library was released with each new version of Visual Studio and included on the Visual Studio DVD. As newer versions of Visual Studio were released, newer editions of MSDN Library did not integrate with older Visual Studio versions and did not even include old/obsolete documentation for deprecated or discontinued products. MSDN Library versions could be installed side-by-side, that is, both the older as well as the newer versions of MSDN Library could co-exist.[citation needed] MSDN Forums were theweb-based forumsused by the community to discuss a wide variety of software development topics. 
MSDN Forums were migrated to an all-new platform during 2008 that provided new features designed to improve efficiency such as inline preview of threads,AJAXfiltering, and a slide-up post editor. MSDN blogs was a series ofblogsthat were hosted under Microsoft'sdomainblogs.msdn.com. Some blogs were dedicated to a product – e.g.Visual Studio,[10]Internet Explorer,[11]PowerShell[12]– or a version of a product – e.g.Windows 7,[13]Windows 8[14]– while others belonged to a Microsoft employee, e.g.Michael Howard[15]or Raymond Chen.[16]In May 2020, the MSDN and TechNet blogs were closed and the content was archived at Microsoft Docs.[17] Social bookmarkingon MSDN Social was first launched in 2008, built on a new web platform that haduser-taggingandfeedsat its core. The goal of the social bookmarking application was to provide a method whereby members of the developer community could: The initial release of the application provided standard features for the genre, including abookmarkletand import capabilities. The MSDN web site was also starting to incorporate feeds of social bookmarks from experts and the community, displayed alongside feeds from relevant bloggers.[18] The social bookmarking feature was discontinued on October 1, 2009. MSDN Gallery was a repository of community-authored code samples and projects. Launched in 2008, the purpose of the site evolved to complementCodeplex, theopen-sourceproject hosting site fromMicrosoft. MSDN Gallery was retired in 2002 and all MSDN pages now redirect to the new code samples experience on Microsoft Learn.[19] MSDN had historically offered a subscription package whereby developers had access and licenses to use nearly all Microsoft software that had ever been released to the public. Subscriptions were sold on an annual basis, and cost anywhere from US$1,000 to US$6,000 per year per subscription, as it was offered in several tiers. Although in most cases the software itself functioned exactly like the full product, the MSDN end-user license agreement[20]prohibited use of the software in a business production environment. This was a legal restriction, not a technical one. An exception was made[20]forMicrosoft Office, allowing personal useeven for business purposeswithout a separate license—but only with the "MSDN Premium Subscription" and even so only "directly related to the design, development and test and/or documentation of software projects;" thisdoes not terminate[21] Microsoft provided editorial content forMSDN Magazine, a monthly publication. The magazine was created as a merger betweenMicrosoft Systems Journal(MSJ) andMicrosoft Internet Developer(MIND) magazines in March 2000.[22][23] MSJ back issues were available online.[24]MSDN Magazinewas available as a print magazine in the United States, and online in 11 languages. The last issue of the magazine was released in November 2019.[25] Microsoft Systems Journal[26]was[27]a 1986-founded[28]bi-monthlyMicrosoftmagazine. MSDN was launched in September 1992[29]as a quarterly, CD-ROM-based compilation of technical articles, sample code, and software development kits. The first two MSDN CD releases (September 1992 and January 1993) were marked as pre-release discs (P1 and P2, respectively).[30]Disc 3, released in April 1993, was the first full release. 
In addition to CDs, there was a 16-page tabloid newspaper,Microsoft Developer Network News, edited byAndrew Himes, who had previously been the founding editor ofMacTech, the premiere Macintosh technology journal.[31]A Level II subscription was added in 1993, that included the MAPI, ODBC, TAPI and VFW SDKs.[32] MSDN2 was opened in November 2004 as a source forVisual Studio 2005APIinformation, with noteworthy differences being updated web site code, conforming better toweb standardsand thus giving a long-awaited improved support for alternativeweb browserstoInternet Explorerin the API browser. In 2008, the original MSDN cluster was retired and MSDN2 became msdn.microsoft.com.[33] In 1996,Bob Gundersonbegan writing a column inMicrosoft Developer Network News, edited byAndrew Himes, using the pseudonym "Dr.GUI". The column provided answers to questions submitted by MSDN subscribers. The caricature of Dr. GUI was based on a photo of Gunderson. When he left the MSDN team,Dennis Craintook over the Dr. GUI role and added medical humor to the column. Upon his departure, Dr. GUI became the composite identity of the original group (most notably Paul Johns) of Developer Technology Engineers that provided in-depth technical articles to the Library. The early members included: Bob Gunderson,Dale Rogerson,Rüdiger R. Asche,Ken Lassesen, Nigel Thompson (a.k.a. Herman Rodent),Nancy Cluts, Paul Johns, Dennis Crain, andKen Bergmann. Nigel Thompson was the development manager for Windows Multimedia Extensions that originally added multimedia capabilities to Windows. Renan Jeffreis produced the original system (Panda) to publish MSDN on the Internet and in HTML instead of the earlier multimedia viewer engine. Dale Rogerson, Nigel Thompson and Nancy Cluts all published MS Press books while on the MSDN team. As of August 2010, only Dennis Crain and Dale Rogerson remain employed by Microsoft.
https://en.wikipedia.org/wiki/MSDN_Library
The Windows Driver Kit (WDK) is a software toolset from Microsoft that enables the development of device drivers for the Microsoft Windows platform.[2] It includes documentation, samples, build environments, and tools for driver developers.[3] A complete toolset for driver development also needs a compiler (Visual Studio), the Windows SDK, and the Windows HLK.

Previously, the WDK was known as the Device Development Kit (DDK)[4] for Windows 3.x and Windows 9x, and it supported the development of VxD drivers. Later versions for Windows NT, Windows 98SE and ME were called the Driver Development Kit (DDK)[5] and supported Windows Driver Model (WDM) development. It got its current name when Microsoft released Windows Vista and added the following previously separate tools to the kit: the Installable File System Kit (IFS Kit) and the Driver Test Manager (DTM), though DTM was later renamed and removed from the WDK again.

The DDK for Windows 2000 and earlier versions did not include a compiler; instead one had to install Visual C++ separately to compile drivers. From the Windows XP version onward, the DDK and later the WDK included a command-line compiler for building drivers. One of the reasons Microsoft gave for including a compiler was that driver quality would improve if drivers were compiled with the same compiler version that was used to compile Windows itself, whereas Visual C++ is targeted at application development and has a different product cycle with more frequent changes. The WDK 8.x and later series again requires installing a matched version of Visual Studio separately, but this time the integration is more complete, in that the driver can be edited, built, and debugged from within Visual Studio directly.

Note: the Windows NT DDK, Windows 98 DDK and Windows 2000 DDK are no longer made available by Microsoft because of Java-related settlements made by Microsoft with Sun Microsystems.[6]
https://en.wikipedia.org/wiki/Windows_Driver_Kit
Windows App SDK(formerly known asProject Reunion)[3]is asoftware development kit(SDK) fromMicrosoftthat provides a unified set of APIs and components that can be used to developdesktop applicationsfor bothWindows 11andWindows 10version 1809 and later. The purpose of this project is to offer a decoupled implementation of capabilities which were previously tightly-coupled to the UWP app model.[4]Windows App SDK allows nativeWin32(USER32/GDI32) or.NET(WPF/WinForms) developers alike a path forward to enhance their apps with modern features.[4] It follows that Windows App SDK is not intended to replace theWindows SDK.[4]By exposing a commonapplication programming interface(API) primarily using theWindows Runtime(WinRT) through generatedWinMDmetadata, the tradeoffs which once characterized either app model are largely eliminated.NuGetpackages for version 1.4 were released in August 2023 after approximately four months of development.[5] While Microsoft has developed a number of new features, some of the features listed below are abstractions of functionality provided by existing APIs.[4] Most of the investment[6]into the decoupled UI stack[7]has gone towards bug fixes, improvements to the debugging experience, and simplifying the window management capabilities made possible by switching from CoreWindow. An API abstracting USER32/GDI32 primitives known asAppWindowwas introduced to expose a unified set of windowing capabilities[8]and enable support for custom window controls. A replacement for the UWP WebView control was announced early on.[9]This is because it was based on anunsupported browser engine.[10]A newChromium-based control, namedWebView2, was developed and can be used from WinUI as well as other supported app types. WhileMSIXis included in the Windows App SDK and considered to be the recommended application packaging format,[11][12]a design goal was to allow for unpackaged apps. These apps can be deployed as self-contained or framework-dependent. Support for dynamic loading of app dependencies is included for both packaged and unpackaged apps.[13] DWriteCoreis being developed as a decoupled and device-independent solution for high-quality text rendering.[14]Win2Dhas also been made available to WinUI 3 apps.[15] MRT Coreallows for management of appresourcesfor purposes such as localization. It is a decoupled version of the resource management system from UWP.[16] With the stable releases delivered after its initial launch, Windows App SDK now supports several app lifecycle features which previously required a considerable amount of effort for developers to implement in Win32 applications. These features includepower managementnotifications, rich activation, multiple instances, and programmatic app restart.[17] Support forpush notificationswas initially implemented as a limited-access, preview feature.[18]However, the APIs for it have since been stabilized and push notifications can be delivered to app users. Official documentation states that access to the feature can be revoked by Microsoft at their discretion.[18][19]Additionally, apps can now easily display local app notifications without the need to create anXMLpayload.[20] Third-party integration with the Windows Widgets system in Windows 11 has been included as part of the stable release channel.[21]Developers can design custom widgets for their app using adaptive cards[22]and surface them on the widgets board.[23]
https://en.wikipedia.org/wiki/Windows_App_SDK
Windows 10is a major release of theWindows NToperating systemdeveloped byMicrosoft. Microsoft described Windows 10 as an "operating system as a service" that would receive ongoing updates to its features and functionality, augmented with the ability for enterprise environments to receive non-critical updates at a slower pace or use long-term support milestones that will only receive critical updates, such as security patches, over their five-year lifespan of mainstream support. It was released in July 2015. Windows 10 InsiderPreview builds are delivered to Insiders in three different channels (previously "rings").[1]Insiders in the Dev Channel (previouslyFast ring) receive updates prior to those in the Beta Channel (previouslySlow ring), but might experience more bugs and other issues.[2][3]Insiders in the Release Preview Channel (previouslyRelease Preview ring) do not receive updates until the version is almost available to the public, but are comparatively more stable.[4] Mainstream builds of Windows 10 are labeled "YYMM", with YY representing the two-digit year and MM representing the month of planned release (for example, version 1507 refers to builds which initially released in July 2015). Starting with version 20H2, Windows 10 release nomenclature changed from the year and month pattern to a year and half-year pattern (YYH1, YYH2).[5] The second stable build of Windows10 isversion 1511(build number 10586), known as theNovember Update. It was codenamed "Threshold 2" (TH2) during development. This version was distributed via Windows Update on November 12, 2015. It contains various improvements to the operating system, its user interface, bundled services, as well as the introduction of Skype-based universal messaging apps, and the Windows Store for Business and Windows Update for Business features.[6][7][8][9] On November 21, 2015, the November Update was temporarily pulled from public distribution.[10][11]The upgrade was re-instated on November 24, 2015, with Microsoft stating that the removal was due to a bug that caused privacy and data collection settings to be reset to defaults when installing the upgrade.[12] The third stable build of Windows 10 is calledversion 1607, known as theAnniversary Update. It was codenamed "Redstone 1" (RS1) during development. This version was released on August 2, 2016, a little over one year after the first stable release of Windows 10.[13][14][15][16]The Anniversary Update was originally thought to have been set aside for two feature updates. 
While both were originally to be released in 2016, the second was moved into 2017 so that it would be released in concert with that year's wave of Microsoft first-party devices.[17][18][14] The Anniversary Update introduces new features such as the Windows Ink platform, which eases the ability to add stylus input support to Universal Windows Platform apps and provides a new "Ink Workspace" area with links to pen-oriented apps and features,[19][14]enhancements to Cortana's proactive functionality,[20]a dark user interface theme mode, a new version ofSkypedesigned to work with the Universal Windows Platform, improvements to Universal Windows Platform intended for video games,[13]and offline scanning usingWindows Defender.[21]The Anniversary Update also supportsWindows Subsystem for Linux, a new component that provides an environment for runningLinux-compatible binary software in anUbuntu-based user mode environment.[22] On new installations of Windows 10 on systems withSecure Bootenabled, all kernel-mode drivers issued after July 29, 2015, must be digitally signed with anExtended Validation Certificateissued by Microsoft.[23] This version is the basis for "LTSB 2016", the first upgrade to the LTSB since Windows 10's release. The first LTSB release, based on RTM (version 1507), has been retroactively named "LTSB 2015". The fourth stable build of Windows 10 is calledversion 1703, known as theCreators Update. It was codenamed "Redstone 2" (RS2) during development. This version was announced on October 26, 2016,[24][25]and was released forgeneral availabilityon April 11, 2017,[26][27]and for manual installation via Windows 10 Upgrade Assistant and Media Creation Tool tools on April 5, 2017.[28]This update primarily focuses on content creation, productivity, and gaming features—with a particular focus onvirtualandaugmented reality(includingHoloLensandvirtual reality headsets) and on aiding the generation of three-dimensional content. It supports a new virtual reality workspace designed for use with headsets; Microsoft announced that several OEMs planned to release VR headsets designed for use with the Creators Update.[27][26][29] Controls for the Game Bar and Game DVR feature have moved to the Settings app, while a new "Game Mode" option allows resources to be prioritized towards games.[30]Integration with Microsoft acquisitionMixer(formerly Beam)[31]was added for live streaming.[30]The themes manager moved to Settings app, and custom accent colors are now possible.[30]The new appPaint 3Dallows users to produce artwork using 3D models; the app is designed to make 3D creation more accessible to mainstream users.[32] Windows 10's privacy settings have more detailed explanations of data that the operating system may collect. 
Additionally, the "enhanced" level of telemetry collection was removed.[30]Windows Update notifications may now be "snoozed" for a period of time, the "active hours" during which Windows will not try to install updates may now extend up to 18 hours in length, and updates may be paused for up to seven days.[30]Windows Defender has been replaced by the universal appWindows Defender Security Center.[30]Devices may optionally be configured to prevent use of software from outside of Microsoft Store, or warn before installation of apps from outside of Microsoft Store.[33]"Dynamic Lock" allows a device to automatically lock if it is outside of the proximity of a designatedBluetoothdevice, such as a smartphone.[34]A "Night Light" feature was added, which allows the user to change thecolor temperatureof the display to the red part of the spectrum at specific times of day (similarly to the third-party softwaref.lux).[35] The fifth stable build of Windows 10 is calledversion 1709, known as theFall Creators Update. It was codenamed "Redstone 3" (RS3) during development. This version was released on October 17, 2017.[36][37][38]Version 1709 introduces a new feature known as "My People", where shortcuts to "important" contacts can be displayed on the taskbar. Notifications involving these contacts appear above their respective pictures, and users can communicate with the contact via eitherSkype, e-mail, or text messaging (integrating withAndroidandWindows 10 Mobiledevices). Support for additional services, including Xbox,Skype for Business, and third-party integration, are to be added in the future. Files can also be dragged directly to the contact's picture to share them.[39]My People was originally announced for Creators Update, but was ultimately held over to the next release,[40][41]and made its first public appearance in Build 16184 in late April 2017.[37]A new "Files-on-Demand" feature for OneDrive serves as a partial replacement for the previous "placeholders" function.[42] It also introduces a new security feature known as "controlled folder access", which can restrict the applications allowed to access specific folders. This feature is designed mainly to defend against file-encryptingransomware.[43]This is also the first release that introduces DCH drivers.[citation needed] The sixth stable build of Windows 10 is calledversion 1803, known as theApril 2018 Update. It was codenamed "Redstone 4" (RS4) during development. This version was released as a manual download on April 30, 2018, with a broad rollout on May 8, 2018.[44][45]This update was originally meant to be released on April 10, but was delayed because of a bug which could increase chances of a "Blue Screen of Death" (Stop error).[46] The most significant feature of this build is Timeline, which is displayed within Task View. It allows users to view a list of recently used documents and websites from supported applications ("activities"). When users consent to Microsoft data collection viaMicrosoft Graph, activities can also be synchronized from supportedAndroidandiOSdevices.[47][48][49][42] The seventh stable build of Windows 10 is calledversion 1809, known as theOctober 2018 Update. It was codenamed "Redstone 5" (RS5) during development. 
This version was released on October 2, 2018.[50]Highlighted features on this build include updates to the clipboard function (including support for clipboard history and syncing with other devices),SwiftKeyvirtual keyboard, Snip & Sketch, and File Explorer supporting the dark color scheme mode.[51] On October 6, 2018, the build was pulled by Microsoft following isolated reports of the update process deleting files from user directories.[52]It was re-released to Windows Insider channel on October 9, with Microsoft citing a bug in OneDrive's Known Folder Redirection function as the culprit.[53][54] On November 13, 2018, Microsoft resumed the rollout of 1809 for a small percentage of users.[55][56] The long term servicing release, Windows 10 Enterprise 2019 LTSC, is based on this version and is equivalent in terms of features.[57] The eighth stable build of Windows 10,version 1903, codenamed "19H1", was released for general availability on May 21, 2019, after being on the Insider Release Preview branch since April 8, 2019.[58]Because of new practices introduced after the problems affecting the 1809 update, Microsoft used an intentionally slower Windows Update rollout process.[59][60][61] New features in the update include a redesigned search tool—separated from Cortana and oriented towards textual queries, a new "Light" theme (set as default on Windows 10Home) using a white-colored taskbar with dark icons, the addition of symbols andkaomojito the emoji input menu, the ability to "pause" system updates, automated "Recommended troubleshooting", integration withGoogle Chromeon Timeline via an extension, support for SMS-based authentication on accounts linked to Microsoft accounts, and the ability to run Windows desktop applications within the Windows Mixed Reality environment (previously restricted to universal apps andSteamVRonly). 
A new feature onPro,Education, andEnterpriseknown as Windows Sandbox allows users to run applications within a securedHyper-Venvironment.[62][63] A revamped version of Game Bar was released alongside 1903, which redesigns it into a larger overlay with a performance display, Xbox friends list and social functionality, and audio and streaming settings.[64] The ninth stable build of Windows 10,version 1909, codenamed "19H2", was released to the public on November 12, 2019, after being on the Insider Release Preview branch since August 26, 2019.[65]Unlike previous updates, this one was released as a minor service update without major new features.[66] The tenth stable build of Windows 10,version 2004, codenamed "20H1", was released to the public on May 27, 2020, after being on the Insider Release Preview branch since April 16, 2020.[67]New features included faster and easier access to Bluetooth settings and pairing, improvedKaomojis, renamable virtual desktops,DirectX 12 Ultimate, a chat-based UI for Cortana, greater integration with Android phones on the Your Phone app,Windows Subsystem for Linux 2(WSL 2; WSL 2 includes a customLinux kernel, unlike its predecessor), the ability to use Windows Hello without the need for a password, improved Windows Search with integration with File Explorer, a cloud download option to reset Windows, accessibility improvements, and the ability to view disk drive type and discrete graphics card temperatures in Task Manager.[68][69] The eleventh stable build of Windows 10,version 20H2, was released to the public on October 20, 2020, after being on the Beta Channel since June 16, 2020.[70]New features include new theme-aware tiles in the Start Menu, new features and improvements toMicrosoft Edge(such as a price comparison tool,Alt+Tab ↹integration for tab switching, and easy access to pinned tabs), a new out-of-box experience with more personalization for the taskbar, notifications improvements, improvements to tablet mode, improvements to Modern Device Management, and the move of the System tab in Control Panel to the About page in Settings. This is the first version of Windows 10 to include the new Chromium-based Edge browser by default.[71][72][73] The twelfth stable build of Windows 10,version 21H1, was released to the public on May 18, 2021, after being on the Beta Channel since February 17, 2021.[74]This update included multi-camera support for Windows Hello, a "News and Interests" feature on the taskbar, and performance improvements toWindows Defender Application GuardandWMIGroup Policy Service.[75] The thirteenth stable build of Windows 10,version 21H2, was released to the public on November 16, 2021, after being on the Beta Channel since July 15, 2021.[76][77]This update includedGPU computesupport in theWindows Subsystem for Linux(WSL) and Azure IoT Edge for Linux on Windows (EFLOW) deployments, a new simplifiedpasswordlessdeployment models for Windows Hello for Business, support forWPA3Hash-to-Element (H2E) standards and a new highlights feature for Search on the taskbar. 
The fourteenth and final stable build of Windows 10,version 22H2, was released to the public on October 18, 2022, after being on the Release Preview Channel since July 28, 2022.[78][79][80]This update re-introduced the search box on the taskbar and includedCopilotin Windows, richer weather experience on the lock screen, additional quick status (such as sports, traffic and finance) on lock screen and a newWindows Spotlightdesktop theme and new account manager experience on the Start menu. On December 16, 2019, Microsoft announced that Windows Insiders in the Fast ring will receive builds directly from thers_prereleasebranch, which are not matched to a specific Windows 10 release. The first build released under the new strategy, build 19536, was made available to Insiders on the same day.[81] Themn_releasebranch was available from May 13, 2020, to June 17, 2020.[82][83]The branch was mandatory for Insiders in the Fast ring.[83] As of June 15, 2020, Microsoft has introduced the "channels" model to its Windows Insider Program, succeeding its "ring" model.[106]All future builds starting from build 10.0.20150, therefore, would be released to Windows Insiders in the Dev Channel.[82] Thefe_releasebranch was available from October 29, 2020, to January 6, 2021.[107][108]The branch was mandatory for Insiders until December 10. Afterward, Insiders could choose to move back to thers_prereleasebranch.[109] Theco_releasebranch was available from April 5 to June 14, 2021.[110]The branch was mandatory for Insiders. As of June 28, 2021, the Dev Channel has transitioned toWindows 11.[111]
https://en.wikipedia.org/wiki/Windows_10_version_history
Intelecommunicationsandcomputing,backward compatibility(orbackwards compatibility) is a property of anoperating system, software, real-world product, ortechnologythat allows forinteroperabilitywith an olderlegacy system, or withinputdesigned for such a system. Modifying a system in a way that does not allow backward compatibility is sometimes called "breaking" backward compatibility.[1]Such breaking usually incurs various types of costs, such asswitching cost. A complementary concept isforward compatibility; a design that is forward-compatible usually has aroadmapfor compatibility with future standards and products.[2] A simple example of both backward and forward compatibility is the introduction ofFM radioinstereo. FM radio was initiallymono, with only one audio channel represented by onesignal. With the introduction of two-channel stereo FM radio, many listeners had only mono FM receivers. Forward compatibility for mono receivers with stereo signals was achieved by sending the sum of both left and right audio channels in one signal and the difference in another signal. That allows mono FM receivers to receive and decode the sum signal while ignoring the difference signal, which is necessary only for separating the audio channels. Stereo FM receivers can receive a mono signal and decode it without the need for a second signal, and they can separate a sum signal to left and right channels if both sum and difference signals are received. Without the requirement for backward compatibility, a simpler method could have been chosen.[3] Full backward compatibility is particularly important in computerinstruction set architectures, two of the most successful being theIBM360/370/390/Zseriesfamilies of mainframes, and theIntelx86family ofmicroprocessors. IBM announced the first 360 models in 1964 and has continued to update the series ever since, with migration over the decades from 32-bit register/24-bit addresses to 64-bit registers and addresses. Intel announced the firstIntel 8086/8088processors in 1978, again with migrations over the decades from 16-bit to 64-bit. (The 8086/8088, in turn, were designed with easymachine-translatabilityof programs written for its predecessor in mind, although they were not instruction-set compatible with the 8-bitIntel 8080processor of 1974. TheZilog Z80, however, was fully backward compatible with the Intel 8080.) 
Fully backward compatible processors can process the samebinary executable software instructionsas their predecessors, allowing the use of a newer processor without having to acquire newapplicationsoroperating systems.[4]Similarly, the success of theWi-Fidigital communication standard is attributed to its broad forward and backward compatibility; it became more popular than other standards that were not backward compatible.[5] In software development, backward compatibility is a general notion of interoperation between software pieces that will not produce any errors when its functionality is invoked viaAPI.[6]The software is considered stable when itsAPIthat is used to invoke functions is stable across different versions.[6] In operating systems, upgrades to newer versions are said to be backward compatible if executables and other files from the previous versions will work as usual.[7] Incompilers, backward compatibility may refer to the ability of a compiler for a newer version of the language to accept source code of programs or data that worked under the previous version.[8] A data format is said to be backward compatible when a newer version of the program can open it without errors just like its predecessor.[9] There are several incentives for a company to implement backward compatibility. One is that it can be used to preserve older software that would have otherwise been lost when a manufacturer decides to stop supporting older hardware. A great example of this approach would be that ofvideo games, since it is a common example used when discussing the value of supporting older software. The cultural impact of video games is a large part of their continued success, and some believe ignoring backward compatibility would cause these titles to disappear.[10]Backward compatibility also acts as a selling point for new hardware, as an existing player base can more affordablyupgradeto subsequent generations of a console. This also helps to make up for the lack of titles at the launch of new systems, as users can pull from the previous console's library of games while developers transition to the new hardware.[11]Backward compatibility with the originalPlayStation(PS) software discs and peripherals is considered to have been a key selling point for thePlayStation 2(PS2) during its early months on the market.[12][13]Moreover, studies in the mid-1990s found that even consumers who never play older games after purchasing a new system consider backward compatibility a highly desirable feature, valuing the mere ability to continue to play an existing collection of games even if they choose never to do so.[13] Despite not being included at launch, Microsoft slowly incorporated backward compatibility for select titles on theXbox Oneseveral years into its product life cycle.[14]Players have racked up over a billion hours with backward-compatible games on Xbox. A large part of the success and implementation of this feature is that the hardware within newer generation consoles is both powerful and similar enough to legacy systems that older titles can be broken down and re-configured to run on the Xbox One.[15]This program has proven incredibly popular with Xbox players and goes against the recent trend of studio-made remasters of classic titles, creating what some believe to be an important shift in console makers' strategies.[14]The current generation of consoles such as thePlayStation 5(PS5)[16]andXbox Series X/Salso support this feature as well. 
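At the level of program code, the API notion of backward compatibility described earlier in this section can be illustrated with a minimal, hypothetical Python sketch; the function and parameter names below are invented for the example and do not come from any particular library. A newer version of a function keeps accepting the calls that worked against the older version while mapping them onto the new behaviour:

import warnings

# Version 1 of a hypothetical library exposed: load_config(path, verbose=False)
# Version 2 renames the option but keeps the old keyword working, so code
# written against version 1 continues to run without errors.
def load_config(path, log_level=None, verbose=None):
    if verbose is not None:
        # Old-style call: accept it, warn, and translate it to the new option.
        warnings.warn("'verbose' is deprecated; use log_level='debug'",
                      DeprecationWarning, stacklevel=2)
        log_level = "debug" if verbose else "info"
    if log_level is None:
        log_level = "info"
    with open(path) as handle:
        return {"text": handle.read(), "log_level": log_level}

# A call written against version 1 still works unchanged:
#     load_config("app.cfg", verbose=True)

Breaking backward compatibility would correspond to removing the old keyword outright, forcing every existing caller to be rewritten.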
The monetary costs of supporting old software are considered to be a large drawback to the use of backward compatibility.[11][13] The associated costs of backward compatibility are a larger bill of materials if hardware is required to support the legacy systems; increased complexity of the product that may lead to longer time to market, technological hindrances, and slowed innovation; and increased expectations from users in terms of compatibility.[1] Furthermore, it also introduces the risk that developers will favor developing games that are compatible with both the old and new systems, since this gives them a larger base of potential buyers, resulting in a dearth of software which uses the advanced features of the new system.[13] Because of this, several console manufacturers phased out backward compatibility towards the end of the console generation in order to reduce cost and briefly reinvigorate sales before the arrival of newer hardware.[17] One such example of this approach was the PlayStation 3 (PS3), which removed backward compatibility with PlayStation 2 (PS2) games on later revisions (eliminating the onboard Emotion Engine and Graphics Synthesizer hardware chips that were used on earlier revisions) to reduce hardware costs and improve console sales. Despite this, it is still possible to bypass some of these hardware costs. For instance, earlier PS2 systems had the core of the original PlayStation (PS1) CPU integrated into the I/O processor for dual-purpose use; it could act either as the main CPU in PS1 mode or up-clock itself to offload I/O in PS2 mode. In later systems the original I/O core was replaced with a PowerPC-based core that emulated the functions of the PS1 CPU core. Such an approach can backfire, however, as was the case with the Super Nintendo Entertainment System (Super NES). It opted for the more peculiar 65C816 CPU over the more popular 16-bit microprocessors on the basis that it would allow for easier backwards compatibility with the original Nintendo Entertainment System (NES), due to the 65C816's software compatibility with the 6502 CPU in emulation mode, but this ultimately did not prove to be workable once the rest of the Super NES's architecture was designed.[18]
https://en.wikipedia.org/wiki/Backward_compatibility
Retrofitting is the addition of new technology or features to older systems. Retrofits can happen for a number of reasons; for example, with big capital expenditures like naval vessels, military equipment or manufacturing plants, businesses or governments may retrofit in order to reduce the need to replace a system entirely. Other retrofits may be due to changing codes or requirements, such as seismic retrofits, which strengthen older buildings in order to make them earthquake resistant. Retrofitting is also an important part of climate change mitigation and climate change adaptation, because much of society's built infrastructure, housing and other systems was put in place before the magnitude of the changes anticipated by climate change was understood. Retrofits to increase building efficiency, for example, both help reduce the overall negative impacts of climate change by reducing building emissions and environmental impacts, while also allowing the building to be healthier during extreme weather events. Retrofitting is also part of a circular economy, reducing the amount of newly manufactured goods and thus reducing lifecycle emissions and environmental impacts. Sustainable refurbishment describes working on existing buildings to improve their environmental performance using sustainable methods and materials. A refurbishment or retrofit is defined as "any work to a building over and above maintenance to change its capacity, function or performance; in other words, any intervention to adjust, reuse, or upgrade a building to suit new conditions or requirements".[1] Refurbishment can be done to a part of a building, an entire building, or a campus.[2] Sustainable refurbishment takes this a step further, modifying the existing building to perform better in terms of its environmental impact and its occupants' environment. Most sustainable refurbishments are also green retrofits: any refurbishment of an existing building that aims to reduce the carbon emissions and environmental impact of the building.[3] This can include improving the energy efficiency of the HVAC and other mechanical systems, increasing the quality of insulation in the building envelope, implementing sustainable energy generation, and aiming to improve occupant comfort and health.[4] In the manufacturing industry, retrofitting principally describes the measures taken to allow new or updated parts to be fitted to old or outdated assemblies (like new blades to wind turbines).[9] Retrofit parts are necessary for manufacture when the design of a large assembly is changed or revised. If, after the changes have been implemented, a customer (with an old version of the product) wishes to purchase a replacement part, then retrofit parts and assembling techniques will have to be used so that the revised parts will fit suitably onto the older assembly. Retrofitting is an important process used for valves and actuators to ensure optimal operation of an industrial plant. One example is retrofitting a 3-way valve into a 2-way valve by closing one of the three openings so that the valve can continue to be used in certain industrial systems.[10] Retrofitting can improve a machine or system's overall functionality by using advanced and updated equipment and technology, such as integrating Human Machine Interfaces into older factories.[11] Car customizing is a form of retrofitting, where older vehicles are fitted with new technologies: power windows, cruise control, remote keyless systems, electric fuel pumps, driverless systems,[14][15] etc.
Trucks[16]andagricultural machinescan also be given retrofits to make them driverless. Many naval vessels have undergone retrofitting and refitting, sometimes entire classes at once. For instance, theNew Threat Upgradeprogram of the US Navy saw many vessels retrofitted for improved anti-air capability. Naval vessels are often retrofit for one of three reasons: to incorporate new technology, to compensate for performance gaps or weaknesses in design, or to change the ship'sclassification. Militaries of the world are often ardent adopters of the latest technology, and many technological advances have been spurred by warfare, especially in fields such as radar and radio communications. Because of this, and the significant investment that a ship hull represents, it is common for retrofitting to be performed whenever new systems are developed. This may be as small as replacing one type of radio with another, or replacing out-dated cryptography equipment with more secure methods of communication, or as major as replacing entire guns and turrets, adding armor plate, or new propulsion systems. Other ships are retrofit to compensate for weaknesses perceived in their operational capabilities. This was the secondary purpose of the US Navy's New Threat Upgrade program, for instance. Major changes in doctrine or the art of warfare also necessitate changes, such as the anti-aircraft upgrades performed on many World War Two-era vessels as air power became a dominant part of naval strategy and tactics. Additionally, because of the investment a hull represents, few navies scrap front-line warships. Many times smaller ships are retrofitted for patrol, coast guard, or specialized roles when they are no longer fit for duty as part of a warfleet. The JapaneseMomi classfrom the interwar period, for example, was converted from destroyers to patrol boats in 1939, as they were no longer capable enough to serve in the role of destroyer. Other times classes are retrofit because they are no longer needed in warfare, due to changes in tactics. For instance, theUSSLangleywas an aircraft carrier converted from a collier (coal-carrying ship to supply coal-fired steamships with fuel) of the Jupiter-class. Because of the heavy use of retrofitting and refitting, fictional navies also include the concept. As an example, in the Star TrekMMORPGStar Trek Online players can purchase retrofitted ships of famous Star Trek ship classes, such as those crewed by the protagonists of the Star Trek TV series. This is done to allow players to pilot iconic ships from old series of the show, that wouldn't naturally be latest-and-greatest ships due to their obsolescence or size, but are retrofitted to be suitable for a maximum-level player-character admiral. The term is also used in the field ofenvironmental engineering, particularly to describeconstructionor renovation projects on previously built sites, to improvewater qualityin nearbystreams,riversorlakes. The concept has also been applied to changing the output mix ofenergyfrompower plantstocogenerationinurban areaswith a potential fordistrict heating. Sites with extensiveimpervious surfaces(such as parking lots and rooftops) can generate high levels ofstormwaterrunoffduring rainstorms, and this can damage nearby water bodies. These problems can often be addressed by installing new stormwater management features on the site, a process that practitioners refer to as stormwater retrofitting. 
Stormwater management practices used in retrofit projects includerain gardens,permeable pavingandgreen roofs.[17](See alsostream restoration.)
https://en.wikipedia.org/wiki/Retrofitting
Apatchisdatathat is intended to be used to modify an existing software resource such as aprogramor afile, often to fixbugsandsecurity vulnerabilities.[1][2]A patch may be created to improve functionality,usability, orperformance. A patch is typically provided by a vendor for updating the software that they provide. A patch may be created manually, but commonly it is created via a tool that compares two versions of the resource and generates data that can be used to transform one to the other. Typically, a patch needs to be applied to the specific version of the resource it is intended to modify, although there are exceptions. Some patching tools can detect the version of the existing resource and apply the appropriate patch, even if it supports multiple versions. As more patches are released, their cumulative size can grow significantly, sometimes exceeding the size of the resource itself. To manage this, the number of supported versions may be limited, or a complete copy of the resource might be provided instead. Patching allows for modifying acompiled(machine language) program when thesource codeis unavailable. This demands a thorough understanding of the inner workings of the compiled code, which is challenging without access to the source code. Patching also allows for making changes to a program without rebuilding it from source. For small changes, it can be more economical to distribute a patch than to distribute the complete resource. Although often intended to fix problems, a poorly designed patch can introduce new problems (seesoftware regressions). In some cases updates may knowingly break the functionality or disable a device, for instance, by removing components for which the update provider is no longer licensed.Patch managementis a part oflifecycle management, and is the process of using a strategy and plan of what patches should be applied to which systems at a specified time. Typically, a patch is applied viaprogrammed controltocomputer storageso that it is permanent. In some cases a patch is applied by aprogrammervia a tool such as adebuggertocomputer memoryin which case the change is lost when the resource is reloaded from storage. Patches forproprietary softwareare typically distributed asexecutable filesinstead ofsource code. When executed these files load a program into memory which manages the installation of the patch code into the target program(s) on disk. Patches for other software are typically distributed as data files containing the patch code. These are read by a patchutility programwhich performs the installation. This utility modifies the target program's executable file—the program'smachine code—typically by overwriting its bytes with bytes representing the new patch code. If the new code will fit in the space (number of bytes) occupied by the old code, it may be put in place by overwriting directly over the old code. This is called an inline patch. If the new code is bigger than the old code, the patch utility will append load record(s) containing the new code to the object file of the target program being patched. When the patched program is run, execution is directed to the new code with branch instructions (jumps or calls) patched over the place in the old code where the new code is needed. On early 8-bit microcomputers, for example the Radio ShackTRS-80, the operating system includes a PATCH/CMD utility which accepts patch data from a text file and applies the fixes to the target program's executable binary file(s). 
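The byte-for-byte patching of a compiled program's executable file described above can be sketched in a few lines of Python. This is only an illustration: the file name, offset, and instruction bytes are invented, and the sketch assumes the new code is no longer than the code it replaces.

def apply_inline_patch(path, offset, new_bytes, expected_old_bytes):
    # Overwrite bytes in place, refusing to patch a file that does not match
    # the version this patch was prepared for.
    with open(path, "r+b") as target:
        target.seek(offset)
        if target.read(len(expected_old_bytes)) != expected_old_bytes:
            raise ValueError("target does not match the expected version")
        target.seek(offset)
        target.write(new_bytes)   # inline patch: new code overwrites the old code

# Hypothetical usage: replace two instruction bytes at offset 0x1A4 of a copy of a program.
#     apply_inline_patch("program.bin", 0x1A4, b"\x90\x90", b"\x75\x06")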
The patch code must have place(s) in memory to be executed at runtime. Inline patches are no difficulty, but when additional memory space is needed the programmer must improvise. Naturally if the patch programmer is the one who first created the code to be patched, this is easier. Savvy programmers plan in advance for this need by reserving memory for later expansion, left unused when producing their final iteration. Other programmers not involved with the original implementation, seeking to incorporate changes at a later time, must find or make space for any additional bytes needed. The most fortunate possible circumstance for this is when the routine to be patched is a distinct module. In this case the patch programmer need merely adjust the pointers or length indicators that signal to other system components the space occupied by the module; he is then free to populate this memory space with his expanded patch code. If the routine to be patched does not exist as a distinct memory module, the programmer must find ways to shrink the routine to make enough room for the expanded patch code. Typical tactics include shortening code by finding more efficient sequences of instructions (or by redesigning with more efficient algorithms), compacting message strings and other data areas, externalizing program functions to mass storage (such as disk overlays), or removal of program features deemed less important than the changes to be installed with the patch. Small in-memory machine code patches can be manually applied with the system debug utility, such asCP/M's DDT orMS-DOS's DEBUG debuggers. Programmers working in interpretedBASICoften used the POKE command to alter the functionality of a system service routine or the interpreter itself. Patches can also circulate in the form of source code modifications. In this case, the patches usually consist of textual differences between two source code files, called "diffs". These types of patches commonly come out ofopen-source software projects. In these cases, developers expect users to compile the new or changed files themselves. Because the word "patch" carries the connotation of a small fix, large fixes may use different nomenclature. Bulky patches or patches that significantly change a program may circulate as "service packs" or as "software updates".Microsoft Windows NTand its successors (includingWindows 2000,Windows XP,Windows VistaandWindows 7) use the "service pack" terminology.[3]Historically,IBMused the terms "FixPaks" and "Corrective Service Diskette" to refer to these updates.[4] Historically, software suppliers distributed patches onpaper tapeor onpunched cards, expecting the recipient to cut out the indicated part of the original tape (or deck), and patch in (hence the name) the replacement segment. Later patch distributions used magnetic tape. Then, after the invention of removable disk drives, patches came from the software developer via adiskor, later,CD-ROMviamail. With widely availableInternetaccess,downloadingpatches from the developer'sweb siteor through automated software updates became often available to the end-users. Starting with Apple'sMac OS 9and Microsoft'sWindows ME, PC operating systems gained the ability to get automatic software updates via the Internet. Computer programs can often coordinate patches to update a target program. Automation simplifies the end-user's task – they need only to execute an update program, whereupon that program makes sure that updating the target takes place completely and correctly. 
Service packs forMicrosoft Windows NTand its successors and for many commercial software products adopt such automated strategies. Some programs can update themselves via theInternetwith very little or no intervention on the part of users. The maintenance ofserversoftware and ofoperating systemsoften takes place in this manner. In situations where system administrators control a number of computers, this sort of automation helps to maintain consistency. The application of security patches commonly occurs in this manner. With the advent of larger storage media and higher Internet bandwidth, it became common to replace entire files (or even all of a program's files) rather than modifying existing files, especially for smaller programs. The size of patches may vary from a fewbytesto hundreds ofmegabytes; thus, more significant changes imply a larger size, though this also depends on whether the patch includes entire files or only the changed portion(s) of files. In particular, patches can become quite large when the changes add or replace non-program data, such as graphics and sounds files. Such situations commonly occur in the patching ofcomputer games. Compared with the initial installation of software, patches usually do not take long to apply. In the case ofoperating systemsandcomputer serversoftware, patches have the particularly important role of fixing security holes. Some critical patches involve issues with drivers.[5]Patches may require prior application of other patches, or may require prior or concurrent updates of several independent software components. To facilitate updates, operating systems often provide automatic or semi-automatic updating facilities. Completely automatic updates have not succeeded in gaining widespread popularity in corporate computing environments, partly because of the aforementioned glitches, but also because administrators fear that software companies may gain unlimited control over their computers.[citation needed]Package management systemscan offer various degrees of patch automation. Usage of completely automatic updates has become far more widespread in the consumer market, due largely[citation needed]to the fact thatMicrosoft Windowsadded support for them[when?], andService Pack 2 of Windows XP(available in 2004) enabled them by default. Cautious users, particularly system administrators, tend to put off applying patches until they can verify the stability of the fixes. Microsoft(W)SUSsupports this. In the cases of large patches or of significant changes, distributors often limit availability of patches to qualified developers as abeta test. Applying patches tofirmwareposes special challenges, as it often involves the provisioning of totally new firmware images, rather than applying only the differences from the previous version. The patch usually consists of a firmware image in form of binary data, together with a supplier-provided special program that replaces the previous version with the new version; amotherboardBIOSupdate is an example of a common firmware patch. Any unexpected error or interruption during the update, such as a power outage, may render the motherboard unusable. It is possible for motherboard manufacturers to put safeguards in place to prevent serious damage; for example, the update procedure could make and keep a backup of the firmware to use in case it determines that the primary copy is corrupt (usually through the use of achecksum, such as aCRC). 
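The checksum safeguard mentioned above for firmware updates can be sketched as follows. This is an illustrative Python fragment using a CRC-32 from the standard library, not the procedure of any particular motherboard vendor; the firmware bytes are invented.

import zlib

def firmware_is_intact(image: bytes, recorded_crc: int) -> bool:
    # The image is trusted only if it still matches the checksum recorded for it.
    return zlib.crc32(image) == recorded_crc

primary = b"FIRMWARE-IMAGE-v2.01" * 100      # stand-in for the primary firmware copy
backup = bytes(primary)                      # known-good copy kept before the update
recorded_crc = zlib.crc32(primary)

# Simulate corruption of the primary copy by an interrupted update.
primary = primary[:50] + b"\x00" + primary[51:]

if not firmware_is_intact(primary, recorded_crc):
    primary = backup                         # fall back to the backup copy
assert firmware_is_intact(primary, recorded_crc)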
Video games receive patches to fix compatibility problems after their initial release just like any other software, but patches can also be applied to change game rules or algorithms. These patches may be prompted by the discovery of exploits in the multiplayer game experience that can be used to gain unfair advantages over other players. Extra features and gameplay tweaks can often be added. These kinds of patches are common in first-person shooters with multiplayer capability and in MMORPGs. MMORPGs, which are typically very complex with large amounts of content, almost always rely heavily on patches following the initial release, and these patches sometimes add new content and abilities available to players. Because the balance and fairness for all players of an MMORPG can be severely corrupted within a short amount of time by an exploit, servers of an MMORPG are sometimes taken down on short notice in order to apply a critical patch with a fix. Companies sometimes release games knowing that they have bugs. Computer Gaming World's Scorpia in 1994 denounced "companies—too numerous to mention—who release shoddy product knowing they can get by with patches and upgrades, and who make 'pay-testers' of their customers".[6] Patches sometimes become mandatory to fix problems with libraries or with portions of source code for programs in frequent use or in maintenance. This commonly occurs on very large-scale software projects, but rarely in small-scale development. In open-source projects, the authors commonly receive patches, and many people publish patches that fix particular problems or add certain functionality, like support for local languages outside the project's locale. In an example from the early development of the Linux kernel (noted for publishing its complete source code), Linus Torvalds, the original author, received hundreds of thousands of patches from many programmers to apply against his original version. The Apache HTTP Server originally evolved as a number of patches that Brian Behlendorf collated to improve NCSA HTTPd, hence a name that implies that it is a collection of patches ("a patchy server"). The FAQ on the project's official site states that the name 'Apache' was chosen out of respect for the Native American Apache tribe. However, the 'a patchy server' explanation was initially given on the project's website.[7] A hotfix or Quick Fix Engineering update (QFE update) is a single, cumulative package that includes information (often in the form of one or more files) that is used to address a problem in a software product (i.e., a software bug). Typically, hotfixes are made to address a specific customer situation. Microsoft once used this term but has stopped in favor of new terminology: General Distribution Release (GDR) and Limited Distribution Release (LDR). Blizzard Entertainment, however, defines a hotfix as "a change made to the game deemed critical enough that it cannot be held off until a regular content patch". A point release is a minor release of a software project, especially one intended to fix bugs or do small cleanups rather than add significant features. Often, there are too many bugs to be fixed in a single major or minor release, creating a need for a point release. Program temporary fix or Product temporary fix (PTF), depending on date, is the standard IBM terminology for a single bug fix, or group of fixes, distributed in a form ready to install for customers.
A PTF was sometimes referred to as a “ZAP”.[8]Customers sometime explain the acronym in a tongue-in-cheek manner aspermanent temporary fixor more practicallyprobably this fixes, because they have the option to make the PTF a permanent part of the operating system if the patch fixes the problem. Asecurity patchis a change applied to an asset to correct the weakness described by a vulnerability. This corrective action will prevent successful exploitation and remove or mitigate a threat's capability to exploit a specific vulnerability in an asset. Patch management is a part ofvulnerability management– the cyclical practice of identifying, classifying, remediating, and mitigating vulnerabilities. Security patches are the primary method of fixing security vulnerabilities in software. Currently Microsoft releases its security patches once a month ("patch Tuesday"), and other operating systems and software projects have security teams dedicated to releasing the most reliable software patches as soon after a vulnerability announcement as possible. Security patches are closely tied toresponsible disclosure. These security patches are critical to ensure that business process does not get affected. In 2017, companies were struck by a ransomware calledWannaCrywhich encrypts files in certain versions ofMicrosoft Windowsand demands a ransom via BitCoin. In response to this, Microsoft released a patch which stops the ransomware from running. A service pack or SP or a feature pack (FP) comprises a collection of updates, fixes, or enhancements to a software program delivered in the form of a single installable package. Companies often release a service pack when the number of individual patches to a given program reaches a certain (arbitrary) limit, or the software release has shown to be stabilized with a limited number of remaining issues based on users' feedback and bug tracking such asBugzilla. In large software applications such as office suites, operating systems, database software, or network management, it is not uncommon to have a service pack issued within the first year or two of a product's release. Installing a service pack is easier and less error-prone than installing many individual patches, even more so when updating multiple computers over a network, where service packs are common. An unofficial patch is a patch for a program written by a third party instead of the originaldeveloper. Similar to an ordinary patch, it alleviatesbugsor shortcomings. Examples are security fixes by security specialists when an official patch by the software producers itself takes too long.[9][10]Other examples are unofficial patches created by thegame communityof avideo gamewhich became unsupported.[11][12] Monkey patchingmeans extending or modifying a program locally (affecting only the running instance of the program). Hot patching, also known aslive patchingordynamic software updating, is the application of patches without shutting down and restarting the system or the program concerned. This addresses problems related to unavailability of service provided by the system or the program.[13]Method can be used to updateLinux kernelwithout stopping the system.[14][15]A patch that can be applied in this way is called ahot patchor alive patch. 
This is becoming a common practice in the mobile app space.[16] Companies like Rollout.io use method swizzling to deliver hot patches to the iOS ecosystem.[17] Another method for hot-patching iOS apps is JSPatch.[18] Cloud providers often use hot patching to avoid downtime for customers when updating underlying infrastructure.[19] In computing, slipstreaming is the act of integrating patches (including service packs) into the installation files of their original app, so that the result allows a direct installation of the updated app.[20][21] The nature of slipstreaming means that it involves an initial outlay of time and work, but can save a lot of time (and, by extension, money) in the long term. This is especially significant for administrators who are tasked with managing a large number of computers, where typical practice for installing an operating system on each computer would be to use the original media and then update each computer after the installation was complete. This would take a lot more time than starting with a more up-to-date (slipstreamed) source and then downloading and installing the few updates not included in the slipstreamed source. However, not all patches can be applied in this fashion, and one disadvantage is that if a certain patch is later found to be responsible for problems, it cannot be removed without using an original, non-slipstreamed installation source. Software update systems allow for updates to be managed by users and software developers. In the 2017 Petya cyberpandemic, the financial software "MeDoc"'s update system is said to have been compromised to spread malware via its updates.[22][23] On the Tor Blog, cybersecurity expert Mike Perry states that deterministic, distributed builds are likely the only way to defend against malware that attacks the software development and build processes to infect millions of machines in a single, officially signed, instantaneous update.[24] Update managers also allow for security updates to be applied quickly and widely. Update managers of Linux such as Synaptic allow users to update all software installed on their machine. Applications like Synaptic use cryptographic checksums to verify source/local files before they are applied, to ensure fidelity against malware.[25][26] An attacker may compromise a legitimate software update channel and inject malicious code.[27]
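The checksum verification performed by update managers such as Synaptic can be illustrated with a short Python sketch; the package bytes and digest below are invented, and a real system would obtain the expected digest from signed repository metadata rather than compute it locally.

import hashlib

def verify_update(data: bytes, published_sha256: str) -> bool:
    # The update is installed only if its digest matches the one published
    # through a trusted channel.
    return hashlib.sha256(data).hexdigest() == published_sha256

package = b"...bytes of a downloaded update package..."
trusted_digest = hashlib.sha256(package).hexdigest()   # stands in for signed metadata

assert verify_update(package, trusted_digest)
assert not verify_update(package + b"injected code", trusted_digest)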
https://en.wikipedia.org/wiki/Patch_(computing)
Quiltis a software utility for managing a series of changes to thesource codeof anycomputer program. Such changes are often referred to as "patches" or "patch sets". Quilt can take an arbitrary number of patches as input and condense them into a single patch. In doing so, Quilt makes it easier for many programmers to test and evaluate the different changes amongst patches before they are permanently applied to the source code. Tools of this type are very important for distributedsoftware development, in which many programmers collaborate to test and build a single large codebase. For example, quilt is heavily used by the maintainers of theLinux kernel.[2] Quilt evolved from a set of patch-management scripts originally written byLinux kerneldeveloperAndrew Morton,[3]and was developed by Andreas Grünbacher for maintaining Linux kernel customizations forSuSE Linux.[4]It is now being developed as a community effort, hosted at theGNU Savannahproject repository and distributed asfree software(its license is theGNU General Public Licensev2, or later). Quilt's name originated frompatchwork quilts. Quilt has been incorporated intodpkg,[5]Debian's package manager, and is one of the standard source formats supported from the Debian "squeeze" release onwards. This source format is identified as "3.0 (quilt)" by dpkg. Quilt is integrated into theBuildroot, which is notably used byOpenWrt.[6]Quilt is also integrated into and supported by the similarYocto Projectbuild system supported by theLinux Foundation.[7] Mercurial queues (mq), an extension of theMercurialrevision control system, provides similar functionality;[8]and StGit provides an equivalent functionality on top ofGit.[9]Git itself has similar functionality since 2.38 with--update-refsoption togit rebase.[10]
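The kind of source-level patch that Quilt stacks and reorders is ordinarily produced by a diff tool. As a minimal illustration (not part of Quilt itself), the following Python fragment uses the standard difflib module to record the difference between two versions of a file as a unified diff and to rebuild the newer version from a recorded difference; the file contents and names are invented.

import difflib

old = ["weight = 80\n", "height = 1.75\n", "print(weight / height ** 2)\n"]
new = ["weight = 80\n", "height = 1.75\n", "bmi = weight / height ** 2\n", "print(round(bmi, 1))\n"]

# The patch is simply the recorded difference between the two versions.
patch = "".join(difflib.unified_diff(old, new, fromfile="bmi.py (v1)", tofile="bmi.py (v2)"))
print(patch)

# difflib can also replay a recorded difference to rebuild the new version from
# the old one, which is what a patch utility does when it applies a patch.
rebuilt = list(difflib.restore(difflib.ndiff(old, new), 2))
assert rebuilt == new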
https://en.wikipedia.org/wiki/Quilt_(software)
rsync (remote sync) is a utility for transferring and synchronizing files between a computer and a storage drive and across networked computers by comparing the modification times and sizes of files.[8] It is commonly found on Unix-like operating systems and is under the GPL-3.0-or-later license.[4][5][9][10][11][12] rsync is written in C as a single-threaded application.[13] The rsync algorithm is a type of delta encoding, and is used for minimizing network usage. Zstandard, LZ4, or Zlib may be used for additional data compression,[8] and SSH or stunnel can be used for security. rsync is typically used for synchronizing files and directories between two different systems. For example, if the command rsync local-file user@remote-host:remote-file is run, rsync will use SSH to connect as user to remote-host.[14] Once connected, it will invoke the remote host's rsync and then the two programs will determine what parts of the local file need to be transferred so that the remote file matches the local one. One application of rsync is the synchronization of software repositories on mirror sites used by package management systems.[15][16] rsync can also operate in a daemon mode (rsyncd), serving and receiving files in the native rsync protocol (using the rsync:// syntax). Andrew Tridgell and Paul Mackerras wrote the original rsync, which was first announced on 19 June 1996.[1] It is similar in function and invocation to rdist (rdist -c), created by Ralph Campbell in 1983 and released as part of 4.3BSD.[17] Tridgell discusses the design, implementation, and performance of rsync in chapters 3 through 5 of his 1999 Ph.D. thesis.[18] As of 2023, it is maintained by Wayne Davison.[2] Because of its flexibility, speed, and scriptability, rsync has become a standard Linux utility, included in all popular Linux distributions.[citation needed] It has been ported to Windows (via Cygwin, Grsync, or SFU[19]), FreeBSD,[20] NetBSD,[21] OpenBSD,[22] and macOS. Similar to cp, rcp and scp, rsync requires the specification of a source and a destination, of which at least one must be local.[23] The generic syntax is rsync [OPTION]... SRC... [USER@]HOST:DEST (or, for a purely local transfer, rsync [OPTION]... SRC... [DEST]), where SRC is the file or directory (or a list of multiple files and directories) to copy from, DEST is the file or directory to copy to, and square brackets indicate optional parameters. rsync can synchronize Unix clients to a central Unix server using rsync/ssh and standard Unix accounts. It can be used in desktop environments, for example to efficiently synchronize files with a backup copy on an external hard drive. A scheduling utility such as cron can carry out tasks such as automated encrypted rsync-based mirroring between multiple hosts and a central server. rsync can likewise be used to mirror FreeBSD,[24] and the Apache HTTP Server supports rsync only for updating mirrors.[25] The preferred (and simplest) way to mirror the PuTTY website to the current directory is to use rsync.[26] rsync can also be used to mimic the capabilities of Time Machine (macOS),[27] to make a full backup of the system root directory,[28] or to delete all files and directories within a directory extremely fast. An rsync process operates by communicating with another rsync process, a sender and a receiver. At startup, an rsync client connects to a peer process. If the transfer is local (that is, between file systems mounted on the same host) the peer can be created with fork, after setting up suitable pipes for the connection. If a remote host is involved, rsync starts a process to handle the connection, typically Secure Shell.
Upon connection, a command is issued to start an rsync process on the remote host, which uses the connection thus established. As an alternative, if the remote host runs an rsync daemon, rsync clients can connect by opening a socket on TCP port 873, possibly using a proxy.[29] Rsync has numerous command line options and configuration files to specify alternative shells, options, commands, possibly with full path, and port numbers. Besides using remote shells, tunnelling can be used to have remote ports appear as local on the server where an rsync daemon runs. Those possibilities allow adjusting security levels to the state of the art, while a naive rsync daemon can be enough for a local network. One solution is the--dry-runoption, which allows users to validate theircommand-line argumentsand to simulate what would happen when copying the data without actually making any changes or transferring any data. By default, rsync determines which files differ between the sending and receiving systems by checking the modification time and size of each file. If time or size is different between the systems, it transfers the file from the sending to the receiving system. As this only requires reading file directory information, it is quick, but it will miss unusual modifications which change neither.[8] Rsync performs a slower but comprehensive check if invoked with--checksum. This forces a full checksum comparison on every file present on both systems. Barring rarechecksum collisions, this avoids the risk of missing changed files at the cost of reading every file present on both systems. The rsync utility uses analgorithminvented by Australian computer programmerAndrew Tridgellfor efficiently transmitting a structure (such as a file) across a communications link when the receiving computer already has a similar, but not identical, version of the same structure.[30] The recipient splits its copy of the file into chunks and computes twochecksumsfor each chunk: theMD5hash, and a weaker but easier to compute 'rolling checksum'.[31]It sends these checksums to the sender. The sender computes the checksum for each rolling section in its version of the file having the same size as the chunks used by the recipient's. While the recipient calculates the checksum only for chunks starting at full multiples of the chunk size, the sender calculates the checksum for all sections starting at any address. If any such rolling checksum calculated by the sender matches a checksum calculated by the recipient, then this section is a candidate for not transmitting the content of the section, but only the location in the recipient's file instead. In this case, the sender uses the more computationally expensive MD5 hash to verify that the sender's section and recipient's chunk are equal. Note that the section in the sender may not be at the same start address as the chunk at the recipient. This allows efficient transmission of files which differ by insertions and deletions.[32]The sender then sends the recipient those parts of its file that did not match, along with information on where to merge existing blocks into the recipient's version. This makes the copies identical. Therolling checksumused in rsync is based on Mark Adler'sadler-32checksum, which is used inzlib, and is itself based onFletcher's checksum. If the sender's and recipient's versions of the file have many sections in common, the utility needs to transfer relatively little data to synchronize the files. 
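The rolling property of the weak checksum described above can be sketched in Python roughly as follows. This is a simplified illustration of an Adler-32-style rolling sum, not the exact checksum of any particular rsync release; the window size and sample data are arbitrary.

M = 1 << 16   # both halves of the weak checksum are kept modulo 2**16

def weak_checksum(block):
    # Full computation over a whole block: a sums the bytes, b weights them by position.
    a = sum(block) % M
    b = sum((len(block) - i) * byte for i, byte in enumerate(block)) % M
    return a, b

def roll(a, b, old_byte, new_byte, block_len):
    # Slide the window one byte to the right without re-reading the whole block.
    a = (a - old_byte + new_byte) % M
    b = (b - block_len * old_byte + a) % M
    return a, b

data = b"the quick brown fox jumps over the lazy dog"
n = 16
a, b = weak_checksum(data[:n])
for k in range(1, len(data) - n + 1):
    a, b = roll(a, b, data[k - 1], data[k + n - 1], n)
    assert (a, b) == weak_checksum(data[k:k + n])   # rolling update matches a full recompute

Because the checksum of the next window is derived from the previous one in constant time, the sender can cheaply test every byte offset against the block checksums received from the recipient.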
If typicaldata compressionalgorithms are used, files that are similar when uncompressed may be very different when compressed, and thus the entire file will need to be transferred. Some compression programs, such asgzip, provide a special "rsyncable" mode which allows these files to be efficiently rsynced, by ensuring that local changes in the uncompressed file yield only local changes in the compressed file. Rsync supports other key features that aid significantly in data transfers or backup. They include compression and decompression of data block by block usingZstandard,LZ4, orzlib, and support for protocols such assshandstunnel. Therdiffutility uses the rsync algorithm to generatedelta fileswith the difference from file A to file B (like the utilitydiff, but in a different delta format). The delta file can then be applied to file A, turning it into file B (similar to thepatchutility). rdiff works well withbinary files. Therdiff-backupscript maintains abackupmirror of a file or directory either locally or remotely over the network on another server. rdiff-backup stores incremental rdiff deltas with the backup, with which it is possible to recreate any backup point.[33] Thelibrsynclibrary used by rdiff is an independent implementation of the rsync algorithm. It does not use the rsync network protocol and does not share any code with the rsync application.[34]It is used byDropbox, rdiff-backup,duplicity, and other utilities.[34] Theacrosynclibrary is an independent, cross-platform implementation of the rsync network protocol.[35]Unlike librsync, it is wire-compatible with rsync (protocol version 29 or 30). It is released under theReciprocal Public Licenseand used by the commercial rsync softwareAcrosync.[36] Theduplicitybackup software written inpythonallows for incremental backups with simple storage backend services like local file system,sftp,Amazon S3and many others. It utilizes librsync to generate delta data against signatures of the previous file versions, encrypting them usinggpg, and storing them on the backend. For performance reasons a local archive-dir is used to cache backup chain signatures, but can be re-downloaded from the backend if needed. As of macOS 10.5 and later, there is a special-Eor--extended-attributesswitch which allows retaining much of theHFS+file metadata when syncing between two machines supporting this feature. This is achieved by transmitting theResource Forkalong with the Data Fork.[37] zsyncis an rsync-like tool optimized for many downloads per file version. zsync is used by Linux distributions such asUbuntu[38]for distributing fast changing betaISO imagefiles. zsync uses the HTTP protocol and .zsync files with pre-calculated rolling hash to minimize server load yet permit diff transfer for network optimization.[39] Rcloneis an open-source tool inspired by rsync that focuses on cloud and other high latency storage. It supports more than 50 different providers and provides an rsync-like interface for cloud storage.[40]However, Rclone does not support rolling checksums for partial file syncing (binary diffs) because cloud storage providers do not usually offer the feature and Rclone avoids storing additional metadata.[41]
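The earlier point in this section about compressed files can be demonstrated with a small, hypothetical Python experiment: two inputs that differ in a single byte compress, under plain DEFLATE, into streams that agree at only a small fraction of byte positions, which is why a block-matching transfer would end up re-sending nearly the whole compressed file.

import random
import zlib

random.seed(0)
words = [random.choice([b"alpha", b"beta", b"gamma", b"delta"]) for _ in range(5000)]
original = b" ".join(words)
modified = b"!" + original[1:]            # the uncompressed files differ by one byte

comp_a = zlib.compress(original)
comp_b = zlib.compress(modified)

matching = sum(x == y for x, y in zip(comp_a, comp_b))
print(matching, "of", min(len(comp_a), len(comp_b)), "compressed bytes match at the same offset")
# The compressed streams diverge shortly after the changed byte and, unlike an
# "rsyncable" gzip stream, generally do not realign afterwards.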
https://en.wikipedia.org/wiki/Rsync
Xdelta[3] is a command line tool for delta encoding, which stores or transmits the differences (deltas) between sequential data, instead of entire files. This is similar to diff and patch, except that diff computes and displays the differences between two complete files and both tools are primarily designed for human-readable text files, whereas Xdelta is designed for binary files and does not generate human-readable output. Xdelta was first released sometime before October 12, 1997,[4] by Joshua MacDonald, who currently maintains the program. The algorithm of xdelta1 was based on the algorithm of rsync, developed by Andrew Tridgell, though it uses a smaller block size.[citation needed] Xdelta version 3 is primarily designed to work with streams following the standardized VCDIFF format, which makes it compatible with other delta encoding software that supports the VCDIFF format.[citation needed] It runs on Unix-like operating systems and Microsoft Windows. Xdelta can handle files of up to 2⁶⁴ bytes,[5][failed verification] and it is suitable for large backups.
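As a rough sketch of what a delta encoder does (though not of Xdelta's actual VCDIFF output format), the following Python fragment describes a new version of some data as a list of copies from the old version plus literal insertions, and then rebuilds the new version from that delta; the data and operation names are invented for the example.

import difflib

def make_delta(old, new):
    # Describe `new` as COPY ranges taken from `old` plus INSERTed literal bytes.
    ops = []
    for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(a=old, b=new, autojunk=False).get_opcodes():
        if tag == "equal":
            ops.append(("COPY", i1, i2 - i1))       # reuse bytes the receiver already has
        elif tag in ("replace", "insert"):
            ops.append(("INSERT", new[j1:j2]))      # carry only the new bytes in the delta
        # "delete" needs no operation: those old bytes are simply not copied
    return ops

def apply_delta(old, ops):
    out = bytearray()
    for op in ops:
        if op[0] == "COPY":
            _, offset, length = op
            out += old[offset:offset + length]
        else:
            out += op[1]
    return bytes(out)

old = b"the quick brown fox jumps over the lazy dog"
new = b"the quick red fox jumped over a lazy dog"
delta = make_delta(old, new)
assert apply_delta(old, delta) == new            # the delta plus the old data rebuilds the new data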
https://en.wikipedia.org/wiki/Xdelta
This article discusses support programs included in or available for OS/360 and successors. IBM categorizes some of these programs as utilities[1][a] and others as service aids;[2] the boundaries are not always consistent or obvious. Many, but not all, of these programs match the types in utility software. The following lists describe programs associated with OS/360 and successors. No DOS, TPF or VM utilities are included. Many of these programs were designed by IBM users, through the group SHARE, and then modified or extended by IBM from versions originally written by a user. These programs are usually invoked via Job Control Language (JCL). They tend to use common JCL DD identifiers (in the OS, now z/OS, operating systems) for their data sets: IDCAMS ("Access Method Services") generates and modifies Virtual Storage Access Method (VSAM) and non-VSAM datasets. IDCAMS was introduced along with VSAM in OS/VS; the "Access Method" reference derives from the initial "VSAM replaces all other access methods" mindset of OS/VS. IDCAMS probably has the most functionality of all the utility programs, performing many functions, for both VSAM and non-VSAM files. The following example illustrates the use of IDCAMS to copy a dataset to disk. The dataset has 80-byte records, and the system will choose the block size for the output: In the example above, SYSIN control cards are coming from an in-stream file, but you can instead point to any sequential file or a PDS member containing control cards or a temporary data set, if you wish. An example of using SYSIN files would be something like this: or this: IEBCOMPR compares records in sequential or partitioned data sets. The IEBCOMPR utility is used to compare two sequential or partitioned datasets. This data set comparison is performed at the logical record level. Therefore, IEBCOMPR is commonly used to verify that a backup copy of a data set is correct (an exact match to the original). During processing, IEBCOMPR compares each record from each data set, one by one. If the records are unequal, IEBCOMPR lists the following information in the SYSOUT: When comparing sequential data sets, IEBCOMPR considers the data sets equal if the following conditions are met: For partitioned data sets, IEBCOMPR considers the data sets equal if the following conditions are met: If ten unequal comparisons are encountered during processing, IEBCOMPR terminates with the appropriate message. Note: IEBCOMPR is not a very flexible or user-friendly compare program. It cannot restrict the comparison to only certain columns, cannot ignore differences in white space, does not report where in the record the difference occurs, and halts after 10 differences. On the other hand, it is fast, and it is present on all IBM mainframes. So it is very useful when an exact match is expected, such as comparing load modules that have not been reblocked, or checking that a copy worked properly. For comparisons of programs or reports, the ISPF SuperC (ISRSUPC) compare program is often used instead. IEBCOPY copies, compresses and merges partitioned data sets. It can also select or exclude specified members during the copy operation, and rename or replace members. Some of the tasks that IEBCOPY can perform include the following: For the IEBCOPY utility, the required job control statements for a copy are as follows: The MYDD1 and MYDD2 DD statements are names chosen by the user for the partitioned input and output data sets, respectively; the defaults are SYSUT1 and SYSUT2. You can use any valid DDNAME for these two DD statements.
These DDNAMEs are specified in the utility control statements to tell IEBCOPY the name of the input and output data sets. You only need one DD statement for a PDS to be compressed. IEBDG ('Data Generator') creates test datasets consisting of patterned data. Control statements define the fields of the records to be created, including position, length, format, and initialization to be performed. IEBDG can use an existing dataset as input and change fields as specified in the control statements, for example replacing a name field by random alphabetic text. The contents of each field may be varied for each record, for example by rotating the characters in an alphanumeric field left or right for each subsequent record. Example: IEBEDIT selectively copies portions of JCL. An example of an IEBEDIT program: In this example, data set xxxxx.yyyyy.zzzzz should contain job(s) (which should include steps named STEP5, STEP10, and STEP15). This IEBEDIT routine copies the selected steps of the job onto the SYSUT2 output file (in this example, the internal reader). The syntax of the EDIT statement is: START=jobnamespecifies the name of the input job to which the EDIT statement applies. Each EDIT statement must apply to a separate job. If START is specified without TYPE and STEPNAME, the JOB statement and all job steps for the specified job are included in the output. Default: If START is omitted and only one EDIT statement is provided, the first job encountered in the input data set is processed. If START is omitted from an EDIT statement other than the first statement, processing continues with the next JOB statement found in the input data set. TYPE={POSITION|INCLUDE|EXCLUDE}specifies the contents of the output data set. These values can be coded: POSITIONspecifies that the output is to consist of a JOB statement, the job step specified in the STEPNAME parameter, and all steps that follow that job step. All job steps preceding the specified step are omitted from the operation. POSITION is the default. INCLUDEspecifies that the output data set is to contain a JOB statement and all job steps specified in the STEPNAME parameter. EXCLUDEspecifies that the output data set is to contain a JOB statement and all job steps belonging to the job except those steps specified in the STEPNAME parameter. STEPNAME=(namelist)specifies the names of the job steps that you want to process. namelistcan be a single job step name, a list of step names separated by commas, or a sequential range of steps separated by a hyphen (for example, STEPA-STEPE). Any combination of these may be used in one namelist. If more than one step name is specified, the entire namelist must be enclosed in parentheses. When coded with TYPE=POSITION, STEPNAME specifies the first job step to be placed in the output data set. Job steps preceding this step are not copied to the output data set. When coded with TYPE=INCLUDE or TYPE=EXCLUDE, STEPNAME specifies the names of job steps that are to be included in or excluded from the operation. For example, STEPNAME=(STEPA,STEPF-STEPL,STEPZ) indicates that job steps STEPA, STEPF through STEPL, and STEPZ are to be included in or excluded from the operation. If STEPNAME is omitted, the entire input job whose name is specified on the EDIT statement is copied. If no job name is specified, the first job encountered is processed. NOPRINTspecifies that the message data set is not to include a listing of the output data set. Default: The resultant output is listed in the message data set. 
See here for more info:[1] IEBGENER copies records from a sequential dataset, or creates a partitioned dataset. Some of the tasks that IEBGENER can perform include the following: An example of an IEBGENER program to copy one dataset to another: For straight copy tasks, thesortprogram can often do these faster than IEBGENER. Thus many mainframe shops make use of an option that automatically routes such tasks to the sort ICEGENER program instead of IEBGENER. On some systems it is possible to sendemailfrom a batch job by directing the output to the "SMTP"external writer. On such systems, the technique is as follows: It is also possible to attach files while sending the email from Mainframe. IEBIMAGE manipulates several types of definitions (AKAimages) for the IBM 3211 printer, IBM 3800 laser printing subsystem and the IBM 4248 printer. Common uses are for forms control buffers (FCBs), character arrangement tables, character definitions and images of forms to be printed on the output along with the text, for company logos to be printed on the page, or just to print 'graybar' pages (alternating gray & white horizontal backgrounds, to match the previousgreenbar paper). With this utility, many different forms or logos could be stored as images, and printed when needed, all using the same standard blank paper, thus eliminating the need to stock many preprinted forms, and the need for operators to stop the printer and change paper. IEBISAM unloads, loads, copies and printsISAMdatasets. Extracted from IBM manual SC26-7414-08 z/OS DFSMSdfp Utilities: The IEBISAM program is no longer distributed. Starting in z/OS V1R7, ISAM data sets can no longer be processed (created, opened, copied or dumped). ISAM data sets that are still in use must be converted to VSAM key-sequenced data sets. Prior to z/OS V1R7, you could use access method services to allocate a VSAM key-sequenced data set and copy an ISAM data set into it. IEBPTPCH ("PrinT and PunCH") prints or punches records from a sequential or partitioned dataset. Some of the tasks that IEBPTPCH can perform include the following: Empty dataset check:If dataset to be checked is empty then RC=4 else 0. Read records from a 2495 Tape Cartridge Reader. Changes records in a sequential dataset or in a member of a partitioned dataset, replaced by, but not compatible with, IEBUPDTE. IEBUPDTE ("UPDaTE") incorporates changes to sequential or partitioned datasets. The UNIXpatchutility is a similar program, but uses different input format markers (e..g, "./ INSERT ..." in MVS becomes "@@..." in Unix Patch). Some programmers pronounce it "I.E.B. up-ditty". The IEBUPDTE utility is used to maintain source libraries. Some of the functions that IEBUPDTE can perform include the following: IEBUPDTE is commonly used to distribute source libraries from tape toDASD. IEBUPDTE uses the same job control statements required by most IEB-type utilities. The only exceptions are as follow: The job control used by IEUPDTE are as follows: IEFBR14is a dummy program, normally inserted in JCL when the only desired action is allocation or deletion of datasets. An example of anIEFBR14step: The calling sequence for OS/360 contained thereturn addressin Register 14. A branch to Register 14 would thus immediately exit the program. However, before and after executing this program, the operating system would allocate & deallocate datasets as specified in the DD statements, so it is commonly used as a quick way to set up or remove datasets. 
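A minimal sketch of the IEFBR14 step mentioned above, here used only to allocate and catalog a new data set; the data set name, space values, and job card are placeholders.

//ALLOC    JOB (ACCT),'IEFBR14',CLASS=A,MSGCLASS=X
//*  IEFBR14 itself does nothing; the work is done by the DD statement
//STEP01   EXEC PGM=IEFBR14
//NEWDS    DD  DSN=MY.NEW.DATASET,DISP=(NEW,CATLG,DELETE),
//             UNIT=SYSDA,SPACE=(TRK,(5,1)),
//             DCB=(RECFM=FB,LRECL=80)
//*  Coding DISP=(OLD,DELETE) instead would delete an existing data set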
It consisted initially as a single instruction a "Branch to Register" 14. The mnemonic used in the IBMAssemblerwas BR and hence the name: IEF BR 14. IEF is the "prefix" of OS/360's "job management" subsystem. This single instruction program had an error in it — it didn't set the return code. Hence a second instruction had to be added to clear the return code so that it would exit with the correct status. There was an additional error reported and fixed by IBM on this now two instruction program. This error was due to the IEFBR14 program not being link-edited as reenterable (simultaneously usable by more than one caller). Some hackers have taken IEFBR14 and changed the BR 14 instruction to BR 15, thereby creating "the shortest loop in the world", as register 15 contains the address of the IEFBR14 module itself, and a BR 15 instruction would simply re-invoke the module, forever. These utilities are normally used bysystems programmersin maintaining the operation of the system, rather than by programmers in doing application work on the system. ICKDSF ("Device Support Facility") installs, initializes and maintains DASD, either under an operating system, orstandalone. Assign alternate tracks to defective tracks. IEHDASDR[1]: 161–187can performs several operations fordirect access storage devices[b](DASD) IBM eventually stopped adding support for new device types to IEHDASDR and directed customers to the free DSF for initializing volumes and to the chargeable DASDR (5740-UT1) and Data Facility/Data Set Services (5740-UT3, DF/DSS) for dump/restore. IBM removed IEHDASDR in MVS/XA.[3] IEHINITT ("INITialize Tape") initializes tapes by writing tape labels. Multiple tapes may be labeled in one run of the utility. IBM standard orASCIIlabels may be written. An example of an IEHINITT program: This example will label 3 tapes on a 3490 magnetic tape unit. Each tape will receive an IBM standard label. The VOLSER will be incremented by one for each tape labeled. Each tape will be rewound and unloaded after being labeled. IEHIOSUP updates relative track addresses (TTR) links for type IVSupervisor Call(SVC) routines in SYS1.SVCLIB. IEHIOSUP is no longer supported in OS/VS2 and later.[4] OPEN, CLOSE, and EOV functions are performed by a series of SVC modules that execute sequentially. Some modules contain tables used by theXCTLmacro to link to the next in the series. For performance reasons, to avoid a directory search each time, these tables contain the disk addresses of the modules rather than the names. Updates to SYS1.SVCLIB may cause these addresses to change, so IEHIOSUP needs to be run to install the correct addresses.[5] This is an example of the JCL required to run IEHIOSUP.[1] IEHLIST is a utility used to list entries in a Partitioned Dataset (PDS) directory or to list the contents of a Volume Table of Contents (VTOC). The IEHLIST utility is used to list the entries contained in any one of the following: An example of an IEHLIST program: This job will produce a formatted listing of the PDS directory of the PDS named xxxx.yyyy.zzzz. An example of an IEHLIST program to list a VTOC is very similar: IEHMOVE moves or copies collections of data. However, DFSMS (System Managed Storage) environments are now common, and IBM does not recommend using the IEHMOVE utility in those. A move differs from a copy in that following a move the original data set is deleted, or scratched. 
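The IEHINITT run described above, which writes IBM standard labels on three tapes mounted on a 3490 unit and increments the volume serial for each, corresponds roughly to the sketch below; the starting serial TAPE01 and the job card are placeholders.

//LABELJOB JOB (ACCT),'IEHINITT',CLASS=A,MSGCLASS=X
//STEP01   EXEC PGM=IEHINITT
//SYSPRINT DD  SYSOUT=*
//*  The DD name (LABEL) is referred to by the control statement below
//LABEL    DD  UNIT=(3490,1,DEFER)
//SYSIN    DD  *
LABEL INITT SER=TAPE01,NUMBTAPE=3
/*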
Some of the tasks that IEHMOVE can perform include the following: On the surface, IEHMOVE may seen redundant to the IEBGENER and IEBCOPY utilities. However, IEHMOVE is more powerful. The main advantage of using IEHMOVE is that you do not need to specify space orDCBinformation for the new data sets. This is because IEHMOVE allocates this information based on the existing data sets. Another advantage of IEHMOVE is that you can copy or move groups of data sets as well as entire volumes of data. Because of the ease in moving groups of data sets or volumes, the IEHMOVE utility is generally favored bysystems programmers. A sample IEHMOVE job: The DD statements for IEHMOVE, other than SYSPRINT and SYSIN, refer to DASD ormagnetic tapevolumes instead of individualdata sets. However, referencing volumes can pose a problem, since specifyingDISP=OLDgains exclusive access to a volume. Therefore, while your IEHMOVE job runs, that entire volume (and all datasets on it) is unavailable to other users. This is acceptable for private volumes, such as tape or mountable DASD volumes, but unacceptable public volumes. The SYSUT1 DD statement specifies a DASD volume where three work data set required by IEHMOVE are allocated. You must specify unit and volume information for this DD statement. IEHMOVE was one of the first systems to be developed inPL/S. In this example, three sequential data sets (SEQSET1, SEQSET2, and SEQSET3) are moved from one disk volume to three separate disk volumes. Each of the three receiving volumes is mounted when it is required by IEHMOVE. The source data sets are not cataloged. Space is allocated by IEHMOVE. IEHPROGM builds and maintains system control data. It is also used for renaming and scratching (deleting) a data set. Some of the tasks that IEHPROGM can perform include the following: For cataloging: Select and formatSMFrecords for tape errors. These programs do not run under the control of an operating system Format direct access volumes and assign alternate tracks. Dump and restore direct access volumes. Assign alternate tracks, recover and replace data. Load Forms Control Buffer (FCB) and Universal Character Set (UCS) buffer on printer. These are utility program that IBM documents in service aids or diagnosis[6]manuals. The original OS/360 Service aids had names beginning with IFC and IM*, but IBM changed the naming convention to HM* forOS/VS1and to AM* forOS/VS2. IBM did not change the IFC convention. Initializes the SYS1.LOGREC data set. Summarizes and prints records from the SYS1.LOGREC error recording data set. Traces selected system events such as SVC and I/O interruptions. Generates JCL needed to apply to a PTF and/or applies the PTF. The functions of this program have been subsumed bySMP. Verifies and/or replaces instructions and/or data in a load module, program object, or disk file. Formats and prints object modules, load modules, program objects and CSECT identification records. Maps load modules. The functions of this program have been subsumed by IMBLIST. Stand-alone program to format and print the system job queue. Not applicable toMVS. Format and print the system job queue. Not applicable toMVS. Formats and printscore dumps,TSOswap data set, and GTF trace data. Stand-alone program to produce a high-speed or low-speed dump of main storage. TheSort/Mergeutility is a program which sorts records in a file into a specified order, or merge pre-sorted files. It is very frequently used; often the most commonly used application program in a mainframe shop. 
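Returning to the sample IEHMOVE job described above, in which three uncataloged sequential data sets (SEQSET1, SEQSET2, and SEQSET3) are moved from one disk volume to three separate volumes, a rough sketch follows; the device type, volume serials, and job card are placeholders, and the exact DD setup varies by installation.

//MOVEJOB  JOB (ACCT),'IEHMOVE',CLASS=A,MSGCLASS=X
//STEP01   EXEC PGM=IEHMOVE
//SYSPRINT DD  SYSOUT=*
//*  SYSUT1: volume on which IEHMOVE allocates its work data sets
//SYSUT1   DD  UNIT=3390,VOL=SER=WORK01,DISP=OLD
//*  Source volume and the three receiving volumes
//DD1      DD  UNIT=3390,VOL=SER=SRC001,DISP=OLD
//DD2      DD  UNIT=(3390,,DEFER),VOL=SER=DSK001,DISP=OLD
//DD3      DD  UNIT=(3390,,DEFER),VOL=SER=DSK002,DISP=OLD
//DD4      DD  UNIT=(3390,,DEFER),VOL=SER=DSK003,DISP=OLD
//SYSIN    DD  *
  MOVE DSNAME=SEQSET1,FROM=3390=SRC001,TO=3390=DSK001
  MOVE DSNAME=SEQSET2,FROM=3390=SRC001,TO=3390=DSK002
  MOVE DSNAME=SEQSET3,FROM=3390=SRC001,TO=3390=DSK003
/*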
Modern sort/merge programs also can select or omit certain records, summarize records, remove duplicates, reformat records, and produce simple reports. Sort/merge is important enough that there are multiple companies each selling their own sort/merge package for IBM mainframes. IBM's original OS/360 sort/merge program, 360S-SM-023, program name IERRCO00 (alias SORT), supported only IBM's first-generationdirect-access storage devices(DASD)[d]and tapes (2400). Support for second-generation disk drives was provided by IBMprogram productssuch as 5734-SM1 and the later 5740-SM1 (DFSORT, alias ICEMAN, also SORT). SORT is frequently executed as a stand-alone program, where it normally reads input from a file identified by DDSORTINand writes sorted output to a file identified by DDSORTOUT. It is also often called from another application, via theCOBOLSORTverb or calls toPL/IPLISRTxroutines, where it may use eitherSORTINorSORTOUTfiles or be passed records to be sorted by the caller and/or pass sorted records back to the caller one at a time. The operation of SORT is directed by control statements, which are largely compatible among various IBM and third-party sort programs. TheSORTorMERGEstatement defines thesort keys— the fields on which the data is to be sorted or merged. This statement identifies the position, length, and data type of each key. TheRECORDstatement describes the format and length of the records in the input file. Other statements allow the user to specify which records should be included or excluded from the sort and specify other transformations to be performed on the data. Keys can be any combination ofEBCDICorASCIIcharacter data, zoned or packed-decimal, signed or unsigned fixed-point binary, or hexadecimal floating-point. Keys can be located anywhere in the record and do not have to be contiguous. Sorting can be specified on any combination of ascending and descending sequence by key.[7] The OS/360 sort program, IERRCO00, operates by dividing the input data into sections, sorting each section in main memory, and writing the sorted section to intermediate datasets on eitherdirect-access storage devices(DASD) ormagnetic tape. Final merge phases then merge the sections to produce the sorted output. SORT uses one of a number of techniques for distributing the sections among secondary storage devices. Usually SORT can choose the optimal technique, but this can be overridden by the user.[8]SORT has three techniques that can be used if the intermediate storage is tape, and two if disk.[9] The tape techniques are: The disk techniques are: OS/360 had only the Linkage editor, available in several configurations. DFSMSdfp added the Binder as an alternatives for load modules, and as the only option for program objects. The Linkage editor creates and replaces load modules in apartitioned data setfrom a combination of control cards, object modules other load modules. It can rename or replace a control section (CSECT) and perform several other miscellaneous functions. It was originally available in several configurations depending on storage requirement, but the E level Linkage Editor is no longer available and the F level Linkage Editor is now known simply as the Linkage Editor. Inz/OSthe Linkage Editor is only present for compatibility. The binder, added inDFSMS, performs the same functions as the Linkage Editor. 
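To make the control statements described above concrete, a minimal sort step of the kind commonly coded for DFSORT might look like the following sketch; the data set names, key positions, and selection condition are illustrative assumptions rather than values from the original text.

//SORTJOB  JOB (ACCT),'SORT',CLASS=A,MSGCLASS=X
//STEP01   EXEC PGM=SORT
//SYSOUT   DD  SYSOUT=*
//SORTIN   DD  DSN=MY.INPUT.DATA,DISP=SHR
//SORTOUT  DD  DSN=MY.SORTED.DATA,DISP=(NEW,CATLG,DELETE),
//             UNIT=SYSDA,SPACE=(CYL,(5,1))
//SYSIN    DD  *
* Ascending character key in columns 1-10, then a descending
* packed-decimal key in columns 21-25
  SORT FIELDS=(1,10,CH,A,21,5,PD,D)
* Keep only records with the letter A in column 15
  INCLUDE COND=(15,1,CH,EQ,C'A')
/*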
In addition, it supports a new format, the program object, which is the functional equivalent of a load module inPartitioned Data Set Extended(PDSE), with many additional capabilities. Oneassemblerwas usually standard, because it was required forsystem generation(SYSGEN) and customization. Assembler (E) was intended for OS/360 running in very small machines. Assembler (F) was intended for normal OS/360 installations. Assembler (XF) was the system assembler for OS/VS1 and OS/VS2, replacing Assembler (E) and (F), although it was not fully compatible with them. IBM soon made Assembler (XF) the system assembler for DOS and VM as well. Assembler (H) and Assembler (H) Version 2 are program product assemblers that are generally faster than Assemblers E, F, and XF, although not fully compatible with any of them. IBM High Level Assembler(HLASM) is essentially a new version of Assembler (H) Version 2 and is the only assembler that IBM supports on z/OS and z/VM. It replaces all of the older assemblers, although it is not fully compatible with them. Eachprogramming languageused in a computer shop will have one or more associatedcompilersthat translate a source program into a machine-language object module. Then the object module from the compiler must be processed by the linkage editor, IEWL, to create an executable load module. IGYCRCTL is a common example of a compiler; it is the compiler for the current IBM EnterpriseCOBOLfor z/OS product. (There have been several previous IBM COBOL compilers over the years, with different names, although users might provide an aliasCOBOLfor the current version.) There are many other compilers for various other programming languages. Compilers available from IBM includedALGOL, COBOL,FORTRAN,PL/I, andRPG. System Modification Program(SMP) is the vehicle for installing service on OS/360 and successors, replacing, e.g., stand-alone assembly, link edit and IMAPTFLE jobs. Originally an optional facility, it is mandatory for MVS/SP and later, and the program product version, SMP/E, is included in the more recent systems, e.g., z/OS.
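As a sketch of the compile-and-link flow just described, a job that uses an IBM-supplied Enterprise COBOL cataloged procedure (commonly IGYWCL, which runs the IGYCRCTL compiler and then the binder/linkage editor) might look like this; the library names are placeholders, and the procedure name and step names can differ by installation.

//COBCL    JOB (ACCT),'COBOL C+L',CLASS=A,MSGCLASS=X
//*  IGYWCL: compile with IGYCRCTL, then link-edit/bind the object module
//CL       EXEC IGYWCL
//COBOL.SYSIN   DD  DSN=MY.COBOL.SOURCE(PROG1),DISP=SHR
//LKED.SYSLMOD  DD  DSN=MY.LOADLIB(PROG1),DISP=SHR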
https://en.wikipedia.org/wiki/IBM_mainframe_utility_programs#IEBUPDTE
Software portabilityis a design objective forsource codeto be easily made to run on differentplatforms. An aid to portability is the generalizedabstractionbetween the application logic andsystem interfaces. When software with the same functionality is produced for severalcomputing platforms, portability is the key issue for development cost reduction. Software portability may involve: Whenoperating systemsof the same family are installed on two computers withprocessorswith similarinstruction setsit is often possible to transfer the files implementing program files between them. In the simplest case, the file or files may simply be copied from one machine to the other. However, in many cases, the software isinstalledon a computer in a way which depends upon its detailed hardware, software, and setup, withdevice driversfor particular devices, using installed operating system and supporting software components, and using differentdrivesordirectories. In some cases, software, usually described as "portable software", is specifically designed to run on different computers with compatible operating systems and processors, without any machine-dependent installation. Porting is no more than transferring specified directories and their contents. Software installed on portablemass storagedevices such asUSB stickscan be used on any compatible computer on simply plugging the storage device in, and stores all configuration information on the removable device. Hardware- and software-specific information is often stored inconfiguration filesin specified locations such as theregistryonWindows). Software which is not portable in this sense must be modified much more to support the environment on the destination machine. As of 2011 the majority of desktop and laptop computers usedmicroprocessorscompatible with the 32- and 64-bitx86instruction sets. Smaller portable devices use processors with different and incompatible instruction sets, such asARM. The difference between larger and smaller devices is such that detailed software operation is different; an application designed to display suitably on a large screen cannot simply be ported to a pocket-sized smartphone with a tiny screen even if the functionality is similar. Web applicationsare required to be processor independent, so portability can be achieved by using web programming techniques, writing inJavaScript. Such a program can run in a common web browser. Suchweb applicationsmust, for security reasons, have limited control over the host computer, especially regarding reading and writing files. Non-web programs, installed upon a computer in the normal manner, can have more control, and yet achieve system portability by linking to portable libraries providing the same interface on different systems. Software can be compiled andlinkedfrom source code for different operating systems and processors if written in a programming language supporting compilation for the platforms. This is usually a task for the program developers; typical users have neither access to the source code nor the required skills. Inopen-sourceenvironments such as Linux the source code is available to all. In earlier days source code was often distributed in a standardised format, and could be built into executable code with a standardMake toolfor any particular system by moderately knowledgeable users if no errors occurred during the build. SomeLinux distributionsdistribute software to users in source form. 
In these cases there is usually no need for detailed adaptation of the software for the system; it is distributed in a way which modifies the compilation process to match the system. Even with seemingly portable languages like C and C++, the effort to port source code can vary considerably. The authors of UNIX/32V (1979) reported that "[t]he (Bourne) shell [...] required by far the largest conversion effort of any supposedly portable program, for the simple reason that it is not portable."[1] Sometimes the effort consists of recompiling the source code, but sometimes it is necessary to rewrite major parts of the software. Many language specifications describe implementation-defined behaviour (e.g. right-shifting a signed integer in C can do a logical or an arithmetic shift). Operating system functions or third-party libraries might not be available on the target system. Some functions are available on a target system but exhibit slightly different behaviour (for example, utime() fails under Windows with EACCES when it is called for a directory). The program code can also contain non-portable elements, such as the paths of include files, drive letters, or the backslash as a path separator. Implementation-defined details like byte order and the size of an int can also raise the porting effort. In practice, the claim that languages like C and C++ satisfy WOCA (write once, compile anywhere) is arguable.
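A small C program, given here as a sketch, makes several of the issues above visible: the result of right-shifting a negative signed integer, the size of int, and the byte order all vary between conforming implementations, so code that depends on any one outcome is not portable.

#include <stdio.h>

int main(void)
{
    /* Right shift of a negative signed value: the C standard leaves the
       result implementation-defined (logical versus arithmetic shift). */
    int negative = -16;
    printf("-16 >> 2    = %d\n", negative >> 2);

    /* The width of int is only guaranteed to be at least 16 bits. */
    printf("sizeof(int) = %zu bytes\n", sizeof(int));

    /* Byte order: inspect the first byte of a known 32-bit pattern. */
    unsigned int pattern = 0x01020304u;
    unsigned char first = *(unsigned char *)&pattern;
    printf("byte order  = %s-endian\n", first == 0x04 ? "little" : "big");

    return 0;
}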
https://en.wikipedia.org/wiki/Software_portability
Withincomputing,cross-platform software(also calledmulti-platform software,platform-agnostic software, orplatform-independent software) iscomputer softwarethat is designed to work in severalcomputing platforms.[1]Some cross-platform software requires a separate build for each platform, but some can be directly run on any platform without special preparation, being written in aninterpreted languageor compiled to portablebytecodefor which theinterpretersor run-time packages are common or standard components of all supported platforms.[2] For example, a cross-platformapplicationmay run onLinux,macOSandMicrosoft Windows. Cross-platform software may run on many platforms, or as few as two. Some frameworks for cross-platform development areCodename One, ArkUI-X,Kivy,Qt,GTK,Flutter,NativeScript,Xamarin,Apache Cordova,Ionic, andReact Native.[3] Platformcan refer to the type of processor (CPU) or other hardware on which anoperating system(OS) orapplicationruns, the type of OS, or a combination of the two.[4]An example of a common platform isAndroidwhich runs on theARM architecture family. Other well-known platforms areLinux/Unix,macOSandWindows, these are all cross-platform.[4]Applications can be written to depend on the features of a particular platform—either the hardware, OS, orvirtual machine(VM) it runs on. For example, theJava platformis a common VM platform which runs on many OSs and hardware types. A hardware platform can refer to aninstruction set architecture. For example: ARM or the x86 architecture. These machines can run different operating systems. Smartphones and tablets generally run ARM architecture, these often run Android or iOS and othermobile operating systems. Asoftware platformcan be either anoperating system(OS) orprogramming environment, though more commonly it is a combination of both. An exception isJava, which uses an OS-independentvirtual machine(VM) to executeJava bytecode. Some software platforms are: TheJava languageis typically compiled to run on a VM that is part of the Java platform. TheJava virtual machine(Java VM, JVM) is a CPU implemented in software, which runs all Java code. This enables the same code to run on all systems that implement a JVM. Java software can be executed by a hardware-basedJava processor. This is used mostly in embedded systems. Java code running in the JVM has access to OS-related services, like diskinput/output(I/O) and network access, if the appropriate privileges are granted. The JVM makes the system calls on behalf of the Java application. This lets users to decide the appropriate protection level, depending on anaccess-control list(ACL). For example, disk and network access is usually enabled for desktop applications, but not for browser-basedapplets. TheJava Native Interface(JNI) can also be used to access OS-specific functions, with a loss of portability. Currently, Java Standard Edition software can run on Microsoft Windows, macOS, several Unix-like OSs, and severalreal-time operating systemsfor embedded devices. For mobile applications, browser plugins are used for Windows and Mac based devices, and Android has built-in support for Java. There are also subsets of Java, such asJava CardorJava Platform, Micro Edition, designed for resource-constrained devices. For software to be considered cross-platform, it must function on more than onecomputer architectureor OS. Developing such software can be a time-consuming task because different OSs have differentapplication programming interfaces(API). 
Software written for one OS may not automatically work on all architectures that OS supports. Just because software is written in a popularprogramming languagesuch asCorC++, it does not mean it will run on all OSs that support that language—or even on different versions of the same OS. Web applicationsare typically described as cross-platform because, ideally, they are accessible from anyweb browser: the browser is the platform. Web applications generally employ aclient–server model, but vary widely in complexity and functionality. It can be hard to reconcile the desire for features with the need for compatibility. Basic web applications perform all or most processing from astateless server, and pass the result to the client web browser. All user interaction with the application consists of simple exchanges of data requests and server responses. This type of application was the norm in the early phases ofWorld Wide Webapplication development. Such applications follow a simpletransactionmodel, identical to that of servingstatic web pages. Today, they are still relatively common, especially where cross-platform compatibility and simplicity are deemed more critical than advanced functionality. Prominent examples of advanced web applications include the Web interface toGmailandGoogle Maps. Such applications routinely depend on additional features found only in the more recent versions of popular web browsers. These features includeAjax,JavaScript,Dynamic HTML,SVG, and other components ofrich web applications. Because of the competing interests of compatibility and functionality, numerous design strategies have emerged. Many software systems use a layered architecture where platform-dependent code is restricted to the upper- and lowermost layers. Graceful degradation attempts to provide the same or similar functionality to all users and platforms, while diminishing that functionality to a least common denominator for more limited client browsers. For example, a user attempting to use a limited-feature browser to access Gmail may notice that Gmail switches to basic mode, with reduced functionality but still of use. Some software is maintained in distinct codebases for different (hardware and OS) platforms, with equivalent functionality. This requires more effort to maintain the code, but can be worthwhile where the amount of platform-specific code is high. This strategy relies on having one codebase that may be compiled to multiple platform-specific formats. One technique isconditional compilation. With this technique, code that is common to all platforms is not repeated. Blocks of code that are only relevant to certain platforms are made conditional, so that they are onlyinterpretedorcompiledwhen needed. Another technique is separation of functionality, which disables functionality not supported by browsers or OSs, while still delivering a complete application to the user. (See also:Separation of concerns.) This technique is used in web development where interpreted code (as in scripting languages) can query the platform it is running on to execute different blocks conditionally.[6] Third-party libraries attempt to simplify cross-platform capability by hiding the complexities of client differentiation behind a single, unified API, at the expense ofvendor lock-in. 
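Conditional compilation, as described above, typically looks like the following C sketch: the code shared by all platforms is written once, and only the platform-specific block is guarded by a preprocessor test (the _WIN32 macro and the mkdir/_mkdir calls are the conventional ones, but the exact guards used in a real project are a design choice).

#include <stdio.h>

/* Platform-specific part: directory creation differs between Windows
   and POSIX systems, so the right call is chosen at compile time. */
#if defined(_WIN32)
#include <direct.h>
static int make_dir(const char *path) { return _mkdir(path); }
#else
#include <sys/stat.h>
#include <sys/types.h>
static int make_dir(const char *path) { return mkdir(path, 0755); }
#endif

/* Platform-independent part: written once and compiled everywhere. */
int main(void)
{
    if (make_dir("demo-output") == 0)
        printf("created directory demo-output\n");
    else
        perror("make_dir");
    return 0;
}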
Responsive web design(RWD) is a Web design approach aimed at crafting the visual layout of sites to provide an optimal viewing experience—easy reading and navigation with a minimum of resizing, panning, and scrolling—across a wide range of devices, from mobile phones to desktop computer monitors. Little or no platform-specific code is used with this technique. Cross-platform applications need much moreintegration testing. Some web browsers prohibit installation of different versions on the same machine. There are several approaches used to target multiple platforms, but all of them result in software that requires substantial manual effort for testing and maintenance.[7]Techniques such asfull virtualizationare sometimes used as a workaround for this problem. Tools such as the Page Object Model allow cross-platform tests to be scripted so that one test case covers multiple versions of an app. If different versions have similar user interfaces, all can be tested with one test case. Web applications are becoming increasingly popular but many computer users still use traditional application software which does not rely on a client/web-server architecture. The distinction between traditional and web applications is not always clear. Features, installation methods and architectures for web and traditional applications overlap and blur the distinction. Nevertheless, this simplifying distinction is a common and useful generalization. Traditional application software has been distributed as binary files, especiallyexecutable files. Executables only support the platform they were built for—which means that a single cross-platform executable could be very bloated with code that never executes on a particular platform. Instead, generally there is a selection of executables, each built for one platform. For software that is distributed as a binary executable, such as that written in C or C++, there must be asoftware buildfor each platform, using a toolset that translates—transcompiles—a single codebase into multiple binary executables. For example,Firefox, an open-source web browser, is available on Windows, macOS (bothPowerPCand x86 through whatApple Inc.calls aUniversal binary), Linux, and BSD on multiple computer architectures. The four platforms (in this case, Windows, macOS, Linux, and BSD) are separate executable distributions, although they come largely from the samesource code. In rare cases, executable code built for several platforms is combined into a single executable file called afat binary. The use of different toolsets may not be enough to build a working executables for different platforms. In this case, programmers mustportthe source code to the new platform. For example, an application such as Firefox, which already runs on Windows on the x86 family, can be modified and re-built to run on Linux on the x86 (and potentially other architectures) as well. The multiple versions of the code may be stored as separate codebases, or merged into one codebase. An alternative to porting iscross-platform virtualization, where applications compiled for one platform can run on another without modification of the source code or binaries. As an example, Apple'sRosetta, which is built intoIntel-based Macintosh computers, runs applications compiled for the previous generation of Macs that used PowerPC CPUs. Another example is IBMPowerVM Lx86, which allows Linux/x86 applications to run unmodified on the Linux/Power OS. 
Example of cross-platform binary software: A script can be considered to be cross-platform if itsinterpreteris available on multiple platforms and the script only uses the facilities built into the language. For example, a script written inPythonfor aUnix-likesystem will likely run with little or no modification on Windows, because Python also runs on Windows; indeed there are many implementations (e.g.IronPythonfor.NET Framework). The same goes for many of theopen-sourcescripting languages. Unlike binary executable files, the same script can be used on all computers that have software to interpret the script. This is because the script is generally stored inplain textin atext file. There may be some trivial issues, such as the representation of anew line character. Some popular cross-platform scripting languages are: Cross-platform or multi-platform is a term that can also apply tovideo gamesreleased on a range ofvideo game consoles. Examples of cross-platform games include:Miner 2049er,Tomb Raider: Legend,FIFA series,NHL seriesandMinecraft. Each has been released across a variety of gaming platforms, such as theWii,PlayStation 3,Xbox 360,personal computers, andmobile devices. Some platforms are harder to write for than others, requiring more time to develop the video game to the same standard. To offset this, a video game may be released on a few platforms first, then later on others. Typically, this happens when a new gaming system is released, becausevideo game developersneed to acquaint themselves with its hardware and software. Some games may not be cross-platform because of licensing agreements between developers and video game console manufacturers that limit development to one particular console. As an example,Disneycould create a game with the intention of release on the latestNintendoandSonygame consoles. Should Disney license the game with Sony first, it may be required to release the game solely on Sony's console for a short timeor indefinitely. Several developers have implemented ways to play games online while using different platforms.Psyonix,Epic Games,Microsoft, andValveall possess technology that allows Xbox 360 and PlayStation 3 gamers to play with PC gamers, leaving the decision of which platform to use to consumers. The first game to allow this level of interactivity between PC and console games (Dreamcast with specially produced keyboard and mouse) wasQuake 3.[11][12] Games that feature cross-platformonline playincludeRocket League,Final Fantasy XIV,Street Fighter V,Killer Instinct,ParagonandFable Fortune,andMinecraftwith its Better Together update onWindows 10, VR editions,Pocket EditionandXbox One. Cross-platform programming is the practice of deliberately writing software to work on more than one platform. There are different ways to write a cross-platform application. One approach is to create multiple versions of the same software in differentsource trees—in other words, the Microsoft Windows version of an application might have one set of source code files and theMacintoshversion another, while aFOSS*nixsystem might have a third. While this is straightforward, compared to developing for only one platform it can cost much more to pay a larger team or release products more slowly. It can also result in more bugs to be tracked and fixed. Another approach is to use software that hides the differences between the platforms. Thisabstraction layerinsulates the application from the platform. Such applications areplatform agnostic. 
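The abstraction-layer approach can be sketched in C as a small platform-neutral interface: application code includes only the header, and the build system compiles exactly one platform-specific implementation file. The file names and the platform_cpu_count function are illustrative, not taken from any particular library.

/* platform.h -- the only header application code ever includes */
#ifndef PLATFORM_H
#define PLATFORM_H

/* Returns the number of logical processors on the current machine. */
int platform_cpu_count(void);

#endif

/* platform_posix.c -- compiled only on POSIX-like systems */
#include <unistd.h>
#include "platform.h"

int platform_cpu_count(void)
{
    long n = sysconf(_SC_NPROCESSORS_ONLN);
    return n > 0 ? (int)n : 1;
}

/* platform_win32.c -- compiled only on Windows */
#include <windows.h>
#include "platform.h"

int platform_cpu_count(void)
{
    SYSTEM_INFO info;
    GetSystemInfo(&info);
    return (int)info.dwNumberOfProcessors;
}

Because the application sees only the header, porting to a new system means adding one more implementation file rather than touching the application code.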
Applications that run on the JVM are built this way. Some applications mix various methods of cross-platform programming to create the final application. An example is the Firefox web browser, which uses abstraction to build some of the lower-level components, with separate source subtrees for implementing platform-specific features (like the GUI), and the implementation of more than one scripting language to ease software portability. Firefox implements XUL, CSS and JavaScript for extending the browser, in addition to classic Netscape-style browser plugins. Much of the browser itself is written in XUL, CSS, and JavaScript. There are many tools[13][14] available to help the process of cross-platform programming, and there are many challenges when developing cross-platform software.
https://en.wikipedia.org/wiki/Cross-platform_software
Write once, compile anywhere(WOCA) is aphilosophytaken by acompilerand its associatedsoftware librariesor by a software library/software frameworkwhich refers to a capability of writing acomputer programthat can be compiled on allplatformswithout the need to modify itssource code. As opposed to Sun'swrite once, run anywhereslogan,cross-platformcompatibility is implemented only at the source code level, rather than also at the compiledbinary codelevel. There are many languages that aim to allow developers to follow the WOCA philosophy, such asC++,Pascal(seeFree Pascal),Ada,Cobol, orC, on condition that they don't use functions beyond those provided by thestandard library. Languages likeGogo even further in as far that no system specific things are used, it should just work, and for system-specific elements a system of platform-specific files is used. A computer program may also use cross-platform libraries, which provide anabstraction layerhiding the differences between various platforms, for things likesocketsandGUI, ensuring the portability of the written source code. This is, for example, supported byQt(C++) or theLazarus(Pascal) IDE via itsLCLand correspondingwidgetsets. Today, we have very powerful desktop computers as well as computers in our phones, which often have sophisticated applications such asword processing,Database management, andspreadsheets, that can allow people with no programming experience to, sort, extract, and manipulate their data. and create documents (such asPDFfiles) showing their now organized information, or printing it out. Before 2000, some of these were not available, and prior to 1980, almost none of them were. From the start of computer automation in the early 1960s, if you wanted a report from data you had, or needed to print upinvoices, payroll checks,purchase orders, and other paperwork businesses, schools and governments generated, you typed them up on a physical typewriter, possibly using pre-printed forms. Otherwise, if you did have information stored in a computer, and wanted it sorted, manipulated, or printed, it required someone to write a program to do so. In some cases, the person needed information that professional programmers either could not understand how to provide a program to do what they wanted; the available programmers could not produce something in a reasonable amount of time; or there weren't any programmers they could use, caused some non-programming professionals to learn some programming skills, at least to know how to manipulate and print out information they needed from their data. Whether the work was done by a professional programmer, or anend-userwriting a program to provide them information for their own use, the means to do this in either case is the same. Write a program, submit it to acompiler(another program that converts written programs into instructions the computer can understand), fix any errors, then repeat until the program worked. While this helped fix part of the problem, it created a new one. People who wrote programs, or hired someone to write them (purchasing software was not a thing until the 1970s or 1980s), discovered when their employer or school bought a new computer, their programs no longer worked. To combat these problems, varioushigh-level languageswere developed that were usable for general purpose application program development, but could be used to provide reports and information for people with specialized requirements. 
These include: Whilecompilersandinterpretersof all of these languages, and dozens of others, were available for different machines and different vendors, often each manufacturer would develop proprietary enhancements which made programing on that machine easier, but again, made programs difficult to port (move the program to a different type of computer or a different vendor's computers), and increased vendor lock in. Something had to change. Starting in the late 1960s and early 1970s, efforts came into play to create standards and specifications of how machine-independent programs could be written using compilers from any vendor. Standards-making organizations, like theInternational Standards Organization(ISO), andANSI, among others, in cooperation with large users of computers and software (like governments, financial institutions and manufacturers), and computer manufacturers, to create standardized specifications to provide a description of how each specific language should be implemented. Computer manufacturers could still have their own proprietary extensions to a programming language, but if they wanted to be able to claim compliance with the standard, they had to specify the differences in the reference manual, so that a program written according to the standard able to compile and operate on their machine would also operate, without further change, on a different manufacturer's computer whose compiler also followed the standard. The requirements of the standard were enforced by large software buyers, such as military, government, and manufacturing companies, by refusing to buy such computing equipment if the vendor only offered a compiler for the programming languages they used which wasn't compliant with the standard. Currently, there are more than a dozen programming languages that have standards describing how programs in the language are supposed to be written, includingAda,APL,BASIC,C++,COBOL,ECMAScript(the generic name forJavaScript),Forth,FORTRAN,Pascal,PL/I,RubyandSQL. Many of these are still in use, in some cases, because customers were able to take their source code to a different manufacturer's computer, where it was recompiled, often without change, because of the standardization of programming languages. While the standards helped, the WOCA philosophy works only when the makers of compilers ensure that they follow the standard.
https://en.wikipedia.org/wiki/Write_once,_compile_anywhere
Within systems engineering, quality attributes are realized non-functional requirements used to evaluate the performance of a system. These are sometimes named architecture characteristics, or "ilities" after the suffix many of the words share. They are usually architecturally significant requirements that require architects' attention.[1] In software architecture, these attributes are known as "architectural characteristics" or non-functional requirements. It is software architects' responsibility to match these attributes with business requirements and user requirements. Note that synchronous communication between software architectural components entangles them, and the coupled components must then share the same architectural characteristics.[2] Notable quality attributes include the many "ilities" for which the category is named; many of these quality attributes can also be applied to data quality.
https://en.wikipedia.org/wiki/List_of_system_quality_attributes
In programming and software design, a binding is an application programming interface (API) that provides glue code specifically made to allow a programming language to use a foreign library or operating system service (one that is not native to that language). Binding generally refers to a mapping of one thing to another. In the context of software libraries, bindings are wrapper libraries that bridge two programming languages, so that a library written for one language can be used in another language.[1] Many software libraries are written in system programming languages such as C or C++. To use such libraries from another, usually higher-level, language such as Java, Common Lisp, Scheme, Python, or Lua, a binding to the library must be created in that language, possibly requiring recompiling the language's code, depending on the amount of modification needed.[2] However, most languages offer a foreign function interface, such as Python's and OCaml's ctypes, and Embeddable Common Lisp's cffi and uffi.[3][4][5] For example, Python bindings are used when an existing C library, written for some purpose, is to be used from Python. Another example is libsvn, which is written in C to provide an API to access the Subversion software repository. To access Subversion from within Java code, libsvnjavahl can be used, which depends on libsvn being installed and acts as a bridge between Java and libsvn, providing an API that invokes functions from libsvn to do the work.[6] Major motives for creating library bindings include software reuse, avoiding the need to reimplement a library in several languages, and the difficulty of implementing some algorithms efficiently in some high-level languages.
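As an illustration of the C side of such a binding, a minimal shared library might expose a plain C function like the one below; a higher-level language's foreign function interface (for example the ctypes module mentioned above) can then load the compiled library and call the function directly. The function name, file name, and build command are hypothetical.

/* mylib.c -- built as a shared library, for example:
 *   cc -shared -fPIC -o libmylib.so mylib.c
 * A binding or FFI in a higher-level language loads libmylib.so and maps
 * this symbol to a callable, e.g. in Python:
 *   ctypes.CDLL("./libmylib.so").add_ints(2, 3)
 */

/* Plain C linkage and simple argument types keep the binding thin. */
int add_ints(int a, int b)
{
    return a + b;
}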
https://en.wikipedia.org/wiki/Language_binding
Asource-to-source translator,source-to-source compiler(S2S compiler),transcompiler, ortranspiler[1][2][3]is a type oftranslatorthat takes thesource codeof a program written in aprogramming languageas its input and produces an equivalent source code in the same or a different programming language, usually as anintermediate representation. A source-to-source translator converts between programming languages that operate at approximately the same level ofabstraction, while a traditionalcompilertranslates from ahigher level languageto alower level language. For example, a source-to-source translator may perform a translation of a program fromPythontoJavaScript, while a traditional compiler translates from a language likeCtoassemblyorJavatobytecode.[4]Anautomatic parallelizingcompiler will frequently take in a high level language program as an input and then transform the code and annotate it with parallel code annotations (e.g.,OpenMP) or language constructs (e.g.Fortran'sforallstatements).[2][5] Another purpose of source-to-source-compiling is translating legacy code to use the next version of the underlying programming language or an application programming interface (API) that breaks backward compatibility. It will perform automaticcode refactoringwhich is useful when the programs to refactor are outside the control of the original implementer (for example, converting programs from Python 2 to Python 3, or converting programs from an old API to the new API) or when the size of the program makes it impractical or time-consuming to refactor it by hand. Transcompilers may either keep translated code structure as close to the source code as possible to ease development anddebuggingof the original source code or may change the structure of the original code so much that the translated code does not look like the source code.[6]There are also debugging utilities that map the transcompiled source code back to the original code; for example, theJavaScriptSource Map standard[citation needed]allows mapping of the JavaScript code executed by aweb browserback to the original source when the JavaScript code was, for example, minified or produced by a transcompiled-to-JavaScript language.[citation needed] Examples includeClosure Compiler,CoffeeScript,Dart,Haxe,Opal,TypeScriptandEmscripten.[7] So calledAssembly language translatorsare a class of source-to-source translators converting code from oneassembly languageinto another, including (but not limited to) across different processor families andsystem platforms. Intelmarketed their 16-bit processor8086to besource compatibleto the8080, an 8-bit processor.[8]To support this, Intel had anISIS-II-based translator from 8080 to 8086 source code named CONV86[9][10][11][12](also referred to as CONV-86[13]and CONVERT 86[14][15]) available toOEMcustomers since 1978, possibly the earliest program of this kind.[nb 1]It supported multiple levels of translation and ran at 2 MHz on an Intel Microprocessor Development SystemMDS-800with 8-inchfloppy drives. 
According to user reports, it did not work very reliably.[16][17] Seattle Computer Products(SCP) offered TRANS86.COM,[15][18][19]written byTim Patersonin 1980 while developing86-DOS.[20][21][22]The utility could translate Intel 8080 andZilogZ80assembly source code (with Zilog/Mostekmnemonics) into .ASM source code for the Intel 8086 (in a format only compatible with SCP'scross-assemblerASM86 forCP/M-80), but supported only a subset ofopcodes, registers and modes, and often still required significant manual correction and rework afterwards.[23][20]Also, performing only a meretransliteration,[14][18][9][10]the brute-forcesingle-pass translatordid not carry out any register and jump optimizations.[24][25]It took about 24 KB of RAM.[15]The SCP version 1 of TRANS86.COM ran on Z80-based systems.[15][18]Once 86-DOS was running, Paterson, in aself-hosting-inspired approach, utilized TRANS86 to convert itself into a program running under 86-DOS.[22][18]Numbered version 2, this was named TRANS.COM instead.[18][25][24][26][27]Later in 1982, the translator was apparently also available fromMicrosoft.[15][28] Also named TRANS86,Sorcimoffered an 8080 to 8086 translator as well since December 1980.[29][14]Like SCP's program it was designed to port CP/M-80 application code (in ASM, MAC, RMAC or ACT80 assembly format) toMS-DOS(in a format compatible with ACT86).[29][15][30][31]In ACT80 format it also supported a few Z80 mnemonics. The translation occurred on an instruction-by-instruction basis with some optimization applied to conditional jumps. The program ran under CP/M-80,MP/M-80andCromemco DOSwith a minimum of 24 KB of RAM, and had no restrictions on the source file size.[15][32] Much more sophisticated and the first to introduceoptimizing compilertechnologies into the source translation process wasDigital Research's XLT86 1.0 in September 1981. XLT86 1.1 was available by April 1982.[33]The program was written byGary Kildall[14][34][35][36]and translated .ASM source code for the Intel 8080 processor (in a format compatible with ASM, MAC or RMAC assemblers) into.A86source code for the 8086 (compatible with ASM86). Usingglobal data flow analysison 8080 register usage,[37][14][38][39]the five-phasemulti-passtranslator would also optimize the output for code size and take care of calling conventions (CP/M-80BDOScalls were mapped into BDOS calls forCP/M-86), so that CP/M-80 and MP/M-80 programs could be ported to the CP/M-86 andMP/M-86platforms automatically. XLT86.COM itself was written inPL/I-80for CP/M-80 platforms.[40][15][33][41]The program occupied 30 KB of RAM for itself plus additional memory for theprogram graph. 
On a 64 KB memory system, the maximum source file size supported was about 6 KB,[40][15][42][33]so that larger files had to be broken down accordingly before translation.[15][33]Alternatively, XLT86 was also available forDECVAX/VMS.[15][33]Although XLT86's input and output worked on source-code level, the translator's in-memory representation of the program and the applied code optimizing technologies set the foundation tobinary recompilation.[43][44][45] 2500 AD Software offered an 8080 to 8086 source-code translator as part of their XASM suite for CP/M-80 machines with Z80 as well as forZilog ZEUSandOlivetti PCOSsystems.[46] Since 1979, Zilog offered a Z80 toZ8000translator as part of their PDS 8000 development system.[47][48][49][50][51][17]Advanced Micro Computers(AMC)[51][17]and 2500 AD Software offered Z80 to Z8000 translators as well.[46]The latter was named TRANS[52][53]and was available for Z80 CP/M, CP/M-86, MS-DOS and PCOS.[46] The Z88DK development kit provides a Z80 toi486source code translator targetingnasmnamed "to86.awk", written in 2008 by Stefano Bodrato.[54]It is in turn based on an 8080 to Z80 converter written in 2003 by Douglas Beattie, Jr., named "toz80.awk".[54] In 2021, Brian Callahan wrote an 8080 CP/M 2.2 to MS-DOS source code translator targetingnasmnamed 8088ify.[55] The first implementations of some programming languages started as transcompilers, and the default implementation for some of those languages are still transcompilers. In addition to the table below, aCoffeeScriptmaintainer provides a list of languages that compile to JavaScript.[56] When developers want to switch to a different language while retaining most of an existing codebase, it might be better to use a transcompiler compared to rewriting the whole software by hand. Depending on the quality of the transcompiler, the code may or may not need manual intervention in order to work properly. This is different from "transcompiled languages" where the specifications demand that the output source code always works without modification. All transcompilers used toporta codebase will expect manual adjustment of the output source code if there is a need to achieve maximum code quality in terms of readability and platform convention. A transcompiler pipeline is what results fromrecursive transcompiling. By stringing together multiple layers of tech, with a transcompile step between each layer, technology can be repeatedly transformed, effectively creating a distributedlanguage independent specification. XSLTis a general-purpose transform tool that can be used between many different technologies, to create such aderivative codepipeline.[72] Recursive transcompilation(orrecursive transpiling) is the process of applying the notion of transcompiling recursively, to create a pipeline of transformations (often starting from asingle source of truth) which repeatedly turn one technology into another. By repeating this process, one can turn A → B → C → D → E → F and then back into A(v2). Some information will be preserved through this pipeline, from A → A(v2), and that information (at an abstract level) demonstrates what each of the components A–F agree on. In each of the different versions that the transcompiler pipeline produces, that information is preserved. It might take on many different shapes and sizes, but by the time it comes back to A (v2), having been transcompiled six times in the pipeline above, the information returns to its original state. 
This information, which survives the transform through each format from A–F–A(v2), is (by definition) derivative content or derivative code. Recursive transcompilation takes advantage of the fact that transcompilers may either keep translated code as close to the source code as possible to ease development and debugging of the original source code, or else change the structure of the original code so much that the translated code does not look like the source code. There are also debugging utilities that map the transcompiled source code back to the original code; for example, JavaScript source maps allow mapping of the JavaScript code executed by a web browser back to the original source in a transcompiled-to-JavaScript language.
https://en.wikipedia.org/wiki/Source-to-source_compiler
Avideo game console emulatoris a type ofemulatorthat allows a computing device[fn 1]to emulate avideo game console's hardware and play its games on the emulating platform. More often than not, emulators carry additional features that surpass limitations of the original hardware, such as broader controller compatibility,timescalecontrol (such as fast-forwarding and rewinding), easier access to memory modifications (likeGameShark),[1]and unlocking of gameplay features.[citation needed]Emulators are also a useful tool in the development process ofhomebrewdemosand the creation of new games for older, discontinued, or rare consoles.[citation needed] The code and data of a game are typically supplied to the emulator by means of aROM file(a copy of game cartridge data) or anISO image(a copy of optical media).[citation needed]While emulation softwares themselves are legal as long as they don't infringe copyright protections on the console,[2][3]emulating games is only so when legitimately purchasing the game physically andrippingthe contents. Freely downloading or uploading game ROMs across various internet sites is considered to be a form of piracy,[4]and users may be sued forcopyright infringement.[5][6] By the mid-1990s,personal computershad progressed to the point where it was technically feasible to replicate the behavior of some of the earliest consoles entirely through software, and the first unauthorized, non-commercial console emulators began to appear. These early programs were often incomplete, only partially emulating a given system, resulting indefects. Few manufacturers published technical specifications for their hardware, which left programmers to deduce the exact workings of a console throughreverse engineering.Nintendo's consoles tended to be the most commonly studied, for example the most advanced early emulators reproduced the workings of theNintendo Entertainment System, theSuper Nintendo Entertainment System, and theGame Boy. The first such recognized emulator was released around 1996, being one of the prototype projects that eventually merged into theSNES9Xproduct.[7]Programs like Marat Fayzullin's iNES, VirtualGameBoy, Pasofami (NES), Super Pasofami (SNES), and VSMC (SNES) were the most popular console emulators of this era. A curiosity was alsoYuji Naka's unreleased NES emulator for theGenesis, possibly marking the first instance of a software emulator running on a console.[8]Additionally, as theInternetgained wider availability, distribution of both emulator software and ROM images became more common, helping to popularize emulators.[7] Legal attention was drawn to emulations with the release ofUltraHLE, an emulator for theNintendo 64released in 1999 while the Nintendo 64 was still Nintendo's primary console – its next console, theGameCube, would not be released until 2001. UltraHLE was the first emulator to be released for a current console, and it was seen to have some effect on Nintendo 64 sales, though to what degree compared with diminishing sales on the aging consoles was not clear. Nintendo pursued legal action to stop the emulator project, and while the original authors ceased development, the project continued by others who had gotten the source code. 
Since then, Nintendo has generally taken the lead in actions against emulation projects or distributions of emulated games from their consoles compared to other console or arcade manufacturers.[7] This rise in popularity opened the door to foreign video games, and exposed North American gamers to Nintendo's censorship policies. This rapid growth in the development of emulators in turn fed the growth of theROM hackingandfan-translation. The release of projects such as RPGe'sEnglish languagetranslation ofFinal Fantasy Vdrew even more users into the emulation scene.[9]Additionally, the development of some emulators has contributed to improved resources forhomebrewsoftware development for certain consoles, such as was the case withVisualBoyAdvance, aGame Boy Advanceemulator that was noted by author Casey O'Donnell as having contributed to the development of tools for the console that were seen as superior to even those provided by Nintendo, so much so that even some licensed game developers used the tools to develop games for the console.[10] On April 17, 2024,Applebegan allowing emulators on the App Store,[11]lifting a ban that had lasted nearly 16 years. Following this decision, numerous emulators such as Delta, Sutāto, and RetroArch appeared on the store.[12][13][14] Emulators can be designed in three ways: purely operating in software which is the most common form such asMAMEusing ROM images; purely operating in hardware such as theColecoVision's adapter to acceptAtari VCScartridges.[7] An emulator is created typically throughreverse engineeringof the hardware information as to avoid any possible conflicts with non-public intellectual property. Some information may be made public for developers on the hardware's specifications which can be used to start efforts on emulation but there are often layers of information that remain as trade secrets such as encryption details. Operating code stored in the hardware'sBIOSmay bedisassembledto be analyzed in aclean room design, with one person performing the disassembling and another person, separately, documenting the function of the code. Once enough information is obtained regarding how the hardware interprets the game software, an emulation on the target hardware can then be constructed.[7]Emulation developers typically avoid any information that may come from untraceable sources to avoid contaminating the clean room nature of their project. For example, in 2020, alarge trove of information related to Nintendo's consoles was leaked, and teams working on Nintendo console emulators such as theDolphinemulator for GameCube and Wii stated they were staying far away from the leaked information to avoid tainting their project.[15] Once an emulator is written, it then requires a copy of the game software to be obtained, a step that may have legal consequences. Typically, this requires the user to make a copy of the contents of the ROM cartridge to computer files or images that can be read by the emulator, a process known as "dumping" the contents of the ROM. A similar concept applies to other proprietary formats, such as forPlayStationCD games. While not required for emulation of the earliest arcade or home console, most emulators also require a dump of the hardware's BIOS, which could vary with distribution region and hardware revisions. In some cases, emulators allow for the application of ROMpatcheswhich update the ROM or BIOS dump to fix incompatibilities with newer platforms or change aspects of the game itself. 
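ROM patches for older consoles are commonly distributed in simple formats such as IPS. The following is a minimal sketch, assuming the conventional IPS layout (a "PATCH" header, records of 3-byte offset and 2-byte length followed by data or an RLE run, and an "EOF" trailer), of how a front end might apply such a patch to a ROM dump. File names are illustrative, and a real patcher would handle truncation extensions and error cases far more carefully.

```c
/* ips_apply.c - minimal sketch of applying an IPS patch to a ROM image.
 * Assumes the conventional IPS layout: "PATCH", then records of
 * 3-byte offset + 2-byte length (+ data, or an RLE run), ending with "EOF".
 * Illustrative only, not a production patcher. */
#include <stdio.h>
#include <string.h>

static long read_be(FILE *f, int n) {            /* read an n-byte big-endian value */
    long v = 0;
    for (int i = 0; i < n; i++) {
        int c = fgetc(f);
        if (c == EOF) return -1;
        v = (v << 8) | c;
    }
    return v;
}

int main(int argc, char **argv) {
    if (argc != 3) { fprintf(stderr, "usage: %s rom.bin patch.ips\n", argv[0]); return 1; }

    FILE *rom = fopen(argv[1], "r+b");
    FILE *ips = fopen(argv[2], "rb");
    if (!rom || !ips) { perror("open"); return 1; }

    char magic[6] = {0};
    if (fread(magic, 1, 5, ips) != 5 || strcmp(magic, "PATCH") != 0) {
        fprintf(stderr, "not an IPS patch\n");
        return 1;
    }

    for (;;) {
        long off = read_be(ips, 3);
        if (off < 0 || off == 0x454F46L) break;  /* 0x454F46 is the "EOF" marker */
        long len = read_be(ips, 2);
        fseek(rom, off, SEEK_SET);
        if (len == 0) {                          /* RLE record: run length, then fill byte */
            long run = read_be(ips, 2);
            int  val = fgetc(ips);
            for (long i = 0; i < run; i++) fputc(val, rom);
        } else {                                 /* plain record: copy len bytes of patch data */
            for (long i = 0; i < len; i++) fputc(fgetc(ips), rom);
        }
    }
    fclose(ips);
    fclose(rom);
    return 0;
}
```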
The emulator subsequently uses the BIOS dump to mimic the hardware while the ROM dump (with any patches) is used to replicate the game software.[7] ROM files and ISO files are created by either specialized tools for game cartridges, or regular optical drives reading the data.[16]As an alternative, specialized adapters such as theRetrodeallow emulators to directly access the data on game cartridges without needing to copy it into a ROM image first. Outside of official usage, emulation has generally been seen negatively by video game console manufacturers and game developers. The largest concern is nature ofcopyright infringementrelated to ROM images of games, typically distributed freely and without hardware restrictions. While this directly impacts potential sales of emulated games and thus the publishers and developers, the nature of thevalue chainof the industry can lead to potential financial harm to console makers.[7]Further, emulation challenges the industry's use of therazorblade modelfor console games, where consoles are sold near cost and revenue instead obtained from licenses on game sales. With console emulation being developed even while consoles are still on the market, console manufacturers are forced to continue to innovate, bring more games for their systems to market, and move quickly onto new technology to continue their business model.[7]There are further concerns related to intellectual property of the console's branding and of games' assets that could be misused, though these are issues less with emulation itself but with how the software is subsequently used.[7] Alternatively, emulation is seen to enhancevideo game preservationefforts, both in shifting game information from outdated technology into newer, more persistent formats, and providing software or hardware alternates to aged hardware.[17]Concerns about cost, availability, and longevity of game software and console hardware have also been cited as a reason for supporting the development of emulators.[17]Some users of emulation also see emulation as means to preserve games from companies that have long-since gone bankrupt or disappeared from the industry's earlier market crash and contractions, and where ownership of the property is unclear. Emulation can also be seen as a means to enhance functionality of the original game that would otherwise not be possible, such as adding in localizations via ROM patches or new features such assave states.[7]In November 2021,Phil Spencerstated that he hoped for video game companies to eventually develop and propagate legal emulation which would allow users to play any game from the past that they already owned a copy of, characterizing it as "a great North Star" for the industry to aim towards in the future.[18][19] As computers andglobal computer networkscontinued to advance and become more popular, emulator developers grew more skilled in their work, the length of time between the commercial release of a console and its successful emulation began to shrink.Fifth generationconsoles such asNintendo 64,PlayStationandsixth generationhandhelds, such as theGame Boy Advance, saw significant progress toward emulation during their production. This led to an effort by console manufacturers to stop unofficial emulation, but consistent failures such asSega v. Accolade977 F.2d 1510 (9th Cir. 1992),Sony Computer Entertainment, Inc. v. Connectix Corporation203 F.3d 596 (2000), andSony Computer Entertainment America v. 
Bleem214 F.3d 1022 (2000),[20]have had the opposite effect, which has ruled that emulators, developed through clean room design, are legal. TheLibrarian of Congress, under theDigital Millennium Copyright Act(DMCA), has codified these rules as allowed exemptions to bypass technical copyright protections on console hardware.[7]However, emulator developers cannot incorporate code that may have been embedded within the hardware BIOS, nor ship the BIOS image with their emulators.[7] Unauthorized distribution of copyrighted code remains illegal, according to both country-specificcopyrightand international copyright law under theBerne Convention.[21][better source needed]Accordingly, video game publishers and developers have taken legal action against websites that illegally redistribute their copyrighted software, successfully forcing sites to remove their titles[22]or taking down the websites entirely.[23] Under United States law, obtaining adumpedcopy of the original machine'sBIOSis legal under the rulingLewis Galoob Toys, Inc. v. Nintendo of America, Inc., 964 F.2d 965 (9th Cir. 1992) asfair useas long as the user obtained a legally purchased copy of the machine. To mitigate this however, several emulators for platforms such asGame Boy Advanceare capable of running without a BIOS file, usinghigh-level emulationto simulate BIOS subroutines at a slight cost in emulation accuracy.[citation needed] Newer consoles have introduced one or more layers ofencryptionto make emulation more difficult from a technical perspective but also can create further legal challenges under the DMCA, which forbids the distribution of tools and information on how to bypass these layers. TheNintendo SwitchemulatorYuzuhad been sued by Nintendo because the group behind the emulator had provided such information on how to obtain the required decryption keys, leading the group to settle with Nintendo and removing the emulator from distribution. Forked projects from Yuzu since appeared, taking the route of informing users what decryption items they would need but otherwise not stating how to acquire these as to stay within Nintendo's stance against emulation and copyright infringement.[24] Due to their popularity, emulators have also been a target of online scams in the form oftrojan horseprograms designed to mimic the appearance of a legitimate emulator, which are then promoted throughspam, onYouTubeand elsewhere.[25]Some scams, such as the purported "PCSX4" emulator, have even gone so far as to setting up a fakeGitHubrepository, presumably for added trustworthiness especially to those unfamiliar withopen-source softwaredevelopment.[26]TheFederal Trade Commissionhas since issued an advisory warning users to avoid downloading such software, in response to reports of a purportedNintendo Switchemulator released by various websites as a front for a survey scam.[27] Due to the high demand of playing old games on modern systems, consoles have begun incorporating emulation technology. The most notable of these isNintendo'sVirtual Console. Originally released for theWii, but present on the3DSandWii U,Virtual Consoleuses software emulation to allow the purchasing and playing of games for old systems on this modern hardware. Though not all games are available, the Virtual Console has a large collection of games spanning a wide variety of consoles. 
The Virtual Console's library of past games currently consists of titles originating from theNintendo Entertainment System,Super NES,Game Boy,Game Boy Color,Nintendo 64,Game Boy Advance,Nintendo DS, and Wii, as well asSega'sMaster SystemandGenesis/Mega Drive,NEC'sTurboGrafx-16, andSNK'sNeo Geo. The service for the Wii also includes games for platforms that were known only in select regions, such as theCommodore 64(Europe and North America) andMSX(Japan),[28]as well as Virtual Console Arcade, which allows players to download videoarcade games. Virtual Console titles have been downloaded over ten million times.[29]Each game is distributed with a dedicated emulator tweaked to run the game as well as possible. However, it lacks the enhancements that unofficial emulators provide, and many titles are still unavailable.[which?] Until the 4.0.0 firmware update, theNintendo Switchsystem softwarecontained an embedded NES emulator, referred to internally as "flog", running the gameGolf(withmotion controllersupport usingJoy-Con). TheEaster eggwas believed to be a tribute to former Nintendo presidentSatoru Iwata, who died in 2015: the game was only accessible on July 11 (the date of his death),Golfwas programmed by Iwata, and the game was activated by performing a motion gesture with a pair of Joy-Con that Iwata had famously used during Nintendo's video presentations. It was suggested that the inclusion ofGolfwas intended as a digital form ofomamori—a traditional form of Japaneseamuletsintended to provide luck or protection.[30][31][32]As part of itsNintendo Switch Onlinesubscription service, Nintendo has subsequently released apps featuring regularly updated on-demand libraries of titles from older systems, under the nameNintendo Classics.[33]The apps include similar features to Virtual Console titles, including save states, as well as a pixel scaler mode and an effect that simulatesCRT televisiondisplays.[34] Due to differences in hardware, theXbox 360is not natively backwards compatible with originalXboxgames.[fn 2]However, Microsoft achievedbackwards compatibility with popular titlesthrough an emulator. On June 15, 2015, Microsoft announced the Xbox One would be backwards compatible with Xbox 360 through emulation. In June 2017, they announced original Xbox titles would also be available for backwards compatibility through emulation, but because the Xbox original runs on thex86architecture, CPU emulation is unnecessary, greatly improving performance. ThePlayStation 3uses software emulation to play original PlayStation titles, and the PlayStation Store sells games that run through an emulator within the machine. In the original Japanese and North American 60 GB and 20 GB models, original PS2 hardware is present to run titles; however all PAL models, and later models released in Japan and North America removed some PS2 hardware components, replacing it with software emulation working alongside the video hardware to achieve partial hardware/software emulation.[35][36]In later releases, backwards compatibility with PS2 titles was completely removed along with the PS2 graphics chip, and eventually Sony released PS2 titles with software emulation on thePlayStation Store.[36] Commercial developers have also used emulation as a means to repackage and reissue older games on newer consoles in retail releases. For example, Sega has created several collections ofSonic the Hedgehoggames. 
Before theVirtual Console, Nintendo also used this tactic, such asGame Boy Advancere-releases ofNEStitles in theClassic NES Series.[37] Although the primary purpose of emulation is to make older video-games execute on newer systems, there are several advantages inherent in the extra flexibility of software emulation that were not possible on the original systems. Disk imageloading is a necessity for most console emulators, as most computing devices do not have the hardware required to run older console games directly from the physical game media itself. Even with optical media system emulators such as the PlayStation and PlayStation 2, attempting to run games from the actual disc may cause problems such as hangs and malfunction as PC optical drives are not designed to spin discs the way those consoles do.[citation needed]This, however, has led to the advantage of it being far easier to modify the actual game's files contained within the game ROMs. Amateurprogrammersand gaming enthusiasts have producedtranslationsof foreign games, rewritten dialogue within a game, applied fixes tobugsthat were present in the original game, as well as updating old sports games with modern rosters. It is even possible to use high-resolution texture pack upgrades for 3-D games and sometimes 2-D if available and possible.[fn 3] Software that emulates a console can be improved with additional capabilities that the original system did not have. These include Enhanced graphical capabilities, such asspatial anti-aliasing, upscaling of theframebufferresolution to match high definition and even higher display resolutions, as well asanisotropic filtering(texture sharpening). Emulation software may offer improved audio capabilities (e.g. decreased latency and better audio interpolation), enhancedsave states(which allow the user to save a game at any point for debugging or re-try) and decreased boot and loading times. Some emulators feature an option to "quickly" boot a game, bypassing the console manufacturer's original splash screens. Furthermore, emulation software may offer onlinemultiplayerfunctionality and the ability to speed up and slow down the emulation speed. This allows the user to fast-forward through unwanted cutscenes for example, or the ability to disable the framelimiter entirely (useful for benchmarking purposes). Some consoles have a regional lockout, preventing the user from being able to play games outside of the designated game region. This can be considered a nuisance for console gamers as some games feature seemingly inexplicable localization differences between regions, such as differences in the time requirements for driving missions and license tests onGran Turismo 4,[38][39][better source needed]and the PAL version ofFinal Fantasy Xwhich added more ingame skills, changes to some bosses, and even more bosses, Dark Aeons,[40]that weren't available in the North American NTSC release of the game.[41] Although it is usually possible to modify the consoles themselves to bypass regional lockouts, console modifications can cause problems with screens not being displayed correctly and games running too fast or slow, due to the fact that the console itself may not be designed to output to the correct format for the game. 
These problems can be overcome on emulators, as they are usually designed with their own output modules, which can run both NTSC and PAL games without issue.[citation needed] Many emulators, for exampleSnes9x,[42]make it far easier to load console-based cheats, without requiring potentially expensive proprietary hardware devices such as those used by GameShark andAction Replay. Freeware tools allow codes given by such programs to be converted into code that can be read directly by the emulator's built-in cheating system, and even allow cheats to be toggled from the menu. The debugging tools featured in many emulators also aid gamers in creating their own such cheats. Similar systems can also be used to enable Widescreen Hacks for certain games, allowing the user to play games which were not originally intended for widescreen, without having to worry about aspect ratio distortion on widescreen monitors.
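The conversion step described above usually amounts to turning a textual code into an address/value pair that the emulator writes ("pokes") into emulated RAM once per frame. The sketch below assumes a simple raw "AAAAAA:VV" code format (six hex digits of address, two of value), loosely modelled on unencrypted raw codes; real GameShark and Action Replay codes are often encrypted and console-specific, so this is an illustration of the idea rather than any particular device's format.

```c
/* cheat_poke.c - sketch: convert a raw "AAAAAA:VV" cheat code into a RAM write.
 * The code format here is an assumption (unencrypted address:value pairs);
 * real GameShark / Action Replay codes are typically encrypted and vary by console. */
#include <stdio.h>
#include <stdint.h>

#define RAM_SIZE 0x20000                 /* 128 KiB of emulated work RAM (illustrative) */
static uint8_t ram[RAM_SIZE];

/* Parse "AAAAAA:VV" and apply it to the emulated RAM; returns 0 on success. */
static int apply_cheat(const char *code) {
    unsigned addr, value;
    if (sscanf(code, "%6x:%2x", &addr, &value) != 2) return -1;
    addr &= 0xFFFFFF;                    /* keep a 24-bit address */
    addr %= RAM_SIZE;                    /* very naive bounds/mirroring handling */
    ram[addr] = (uint8_t)value;
    return 0;
}

int main(void) {
    /* An emulator's cheat menu would call this for each enabled code, every frame. */
    const char *codes[] = { "00001E:63", "000F32:FF" };   /* illustrative codes */
    for (size_t i = 0; i < sizeof codes / sizeof codes[0]; i++)
        if (apply_cheat(codes[i]) != 0)
            fprintf(stderr, "bad cheat code: %s\n", codes[i]);
    printf("ram[0x1E] = 0x%02X\n", ram[0x1E]);
    return 0;
}
```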
https://en.wikipedia.org/wiki/Console_emulator
Asource portis a software project based on thesource codeof agame enginethat allows the game to be played onoperating systemsorcomputing platformswith which the game was not originally compatible. Source ports are oftencreated by fansafter the original developer hands over the maintenance support for a game by releasing itssource codeto the public (seeList of commercial video games with later released source code). In some cases, the source code used to create a source port must be obtained throughreverse engineering, in situations where the original source was never formally released by the game's developers. The term was coined after the release of the source code toDoom. Due to copyright issues concerning the sound library used by the original DOS version, id Software released only the source code to the Linux version of the game.[1][2]Since the majority of Doom players were DOS users the first step for a fan project was toportthe Linuxsourcecode to DOS.[3]A source port typically only includes the engine portion of the game and requires that the data files of the game in question already be present on users' systems. Source ports share the similarity withunofficial patchesthat both don't change the original gameplay as such projects are by definitionmods. However many source ports add support for gameplay mods, which is usually optional (e.g.DarkPlacesconsists of a source port engine and a gameplay mod that are even distributed separately[4]). While the primary goal of any source port is compatibility with newer hardware, many projects support other enhancements. Common examples of additions include support for higher video resolutions and differentaspect ratios, hardware accelerated renderers (OpenGLand/orDirect3D), enhanced input support (including the ability to map controls onto additional input devices), 3D character models (in case of2.5Dgames), higher resolution textures, support to replaceMIDIwithdigital audio(MP3,Ogg Vorbis, etc.), and enhancedmultiplayersupport using theInternet. Several source ports have been created for various games specifically to address online multiplayer support. Most older games were not created to take advantage of the Internet and the low latency, high bandwidth Internet connections available to computer gamers today. Furthermore, old games may use outdated network protocols to create multiplayer connections, such asIPXprotocol, instead ofInternet Protocol. Another problem was games that required a specificIP addressfor connecting with another player. This requirement made it difficult to quickly find a group of strangers to play with — the way that online games are most commonly played today. To address this shortcoming, specific source ports such asSkulltagadded "lobbies", which are basically integratedchat roomsin which players can meet and post the location of games they are hosting or may wish to join. Similar facilities may be found in newer games and online game services such as Valve'sSteam, Blizzard'sbattle.net, andGameSpy Arcade. If the source code of a software is not available, alternative approaches to achieve portability areEmulation,Engine remakes, andStatic recompilation.
https://en.wikipedia.org/wiki/Source_port
Posh is a software framework used in cross-platform software development. It was created by Brian Hook.[1] It is BSD licensed and, as of 17 March 2014, at version 1.3.002. The Posh software framework provides a header file and an optional C source file. Posh does not provide alternatives where a host platform does not offer a feature, but informs through preprocessor macros what is supported and what is not. It sets macros to assist in compiling with various compilers (such as GCC, MSVC and OpenWatcom) and with different host endiannesses. In its simplest form, only a single header file is required. The optional C source file provides functions for byte swapping and in-memory serialisation/deserialisation. Brian Hook also created SAL (Simple Audio Library), which utilises Posh. Both are featured in his book "Write Portable Code". Posh is also used in Ferret and Vega Strike.
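Posh's actual macro and function names are not reproduced here; the sketch below uses hypothetical MY_-prefixed names to show the general technique the article describes: a header that detects the compiler and host endianness through preprocessor macros, together with byte-swapping helpers of the kind an optional C source file might supply for serialisation.

```c
/* portability_sketch.h - illustrative only; the real Posh header defines its own
 * macro names. Shows the general pattern: detect toolchain and endianness at
 * compile time and provide byte-swap helpers for fixed-order serialisation. */
#include <stdint.h>

/* --- compiler detection (hypothetical macro names) --- */
#if defined(_MSC_VER)
#  define MY_COMPILER_MSVC 1
#elif defined(__GNUC__)
#  define MY_COMPILER_GCC 1
#endif

/* --- endianness detection; some platforms would need an explicit override --- */
#if defined(__BYTE_ORDER__) && (__BYTE_ORDER__ == __ORDER_BIG_ENDIAN__)
#  define MY_BIG_ENDIAN 1
#else
#  define MY_LITTLE_ENDIAN 1
#endif

/* --- byte swapping, as used when (de)serialising to a fixed wire order --- */
static inline uint32_t my_swap_u32(uint32_t v) {
    return (v >> 24) | ((v >> 8) & 0x0000FF00u) |
           ((v << 8) & 0x00FF0000u) | (v << 24);
}

/* Convert a host-order value to little-endian wire order. */
static inline uint32_t my_to_le_u32(uint32_t v) {
#if defined(MY_BIG_ENDIAN)
    return my_swap_u32(v);
#else
    return v;
#endif
}
```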
https://en.wikipedia.org/wiki/Poshlib
Japan Vulnerability Notes (JVN) is Japan's national vulnerability database. It is maintained by the Japan Computer Emergency Response Team Coordination Center and the Japanese government's Information-Technology Promotion Agency.[1][2]
https://en.wikipedia.org/wiki/Japan_Vulnerability_Notes
The National Vulnerability Database (NVD) is the U.S. government repository of standards-based vulnerability management data represented using the Security Content Automation Protocol (SCAP). This data enables automation of vulnerability management, security measurement, and compliance. NVD includes databases of security checklists, security-related software flaws, misconfigurations, product names, and impact metrics. NVD supports the Information Security Automation Program (ISAP). NVD is managed by the U.S. government agency the National Institute of Standards and Technology (NIST). On Friday, March 8, 2013, the database was taken offline after it was discovered that the system used to run multiple government sites had been compromised by a software vulnerability in Adobe ColdFusion.[1][2] In June 2017, threat intelligence firm Recorded Future revealed that the median lag between a CVE being revealed and its publication in the NVD is seven days, and that 75% of vulnerabilities are published unofficially before making it to the NVD, giving attackers time to exploit them.[3] In addition to providing a list of Common Vulnerabilities and Exposures (CVEs), the NVD scores vulnerabilities using the Common Vulnerability Scoring System (CVSS),[4] which is based on a set of equations using metrics such as access complexity and availability of a remedy.[5] In August 2023, the NVD initially marked an integer overflow bug in old versions of cURL as a 9.8 out of 10 critical vulnerability. cURL lead developer Daniel Stenberg responded by saying this was not a security problem, that the bug had been patched nearly four years prior, requested that the CVE be rejected, and accused NVD of "scaremongering" and "grossly inflating the severity level of issues".[6] MITRE disagreed with Stenberg and denied his request to reject the CVE, noting that "there is a valid weakness ... which can lead to a valid security impact."[7] In September 2023, the issue was rescored by the NVD as a 3.3 "low" vulnerability, stating that "it may (in theory) cause a denial of service" for attacked systems, but that this attack vector "is not especially plausible".[8]
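The CVSS equations referred to above combine fixed weights for each metric. The sketch below implements the CVSS v3.1 base-score formula for the unchanged-scope case only, using the published metric weights; temporal, environmental, and changed-scope formulas are omitted, so it is a simplification rather than a complete scorer. The example vector (network attack vector, low complexity, no privileges or interaction, high impact on confidentiality, integrity, and availability) yields the 9.8 "critical" score mentioned in the cURL dispute.

```c
/* cvss31_base.c - simplified CVSS v3.1 base score, unchanged scope only.
 * Weights follow the published specification; temporal, environmental and
 * changed-scope formulas are omitted for brevity. Compile with -lm. */
#include <stdio.h>
#include <math.h>

/* CVSS v3.1 "Roundup": smallest value, to one decimal place, >= input. */
static double roundup(double x) {
    return ceil(x * 10.0) / 10.0;
}

int main(void) {
    /* Metric weights for the vector AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H */
    double av = 0.85, ac = 0.77, pr = 0.85, ui = 0.85;   /* exploitability metrics */
    double c = 0.56, i = 0.56, a = 0.56;                 /* impact metrics (High)  */

    double iss = 1.0 - (1.0 - c) * (1.0 - i) * (1.0 - a);
    double impact = 6.42 * iss;                          /* scope unchanged */
    double exploitability = 8.22 * av * ac * pr * ui;

    double base = (impact <= 0.0) ? 0.0
                : roundup(fmin(impact + exploitability, 10.0));

    printf("base score: %.1f\n", base);                  /* prints 9.8 */
    return 0;
}
```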
https://en.wikipedia.org/wiki/National_Vulnerability_Database
TheOpen Sourced Vulnerability Database(OSVDB) was an independent and open-sourcedvulnerability database. The goal of the project was to provide accurate, detailed, current, and unbiased technical information onsecurityvulnerabilities.[1]The project promoted greater and more open collaboration between companies and individuals. The database's motto was "Everything is Vulnerable".[2] The core of OSVDB was a relational database which tied various information about security vulnerabilities into a common, cross-referencedopen securitydata source. As of December 2013, the database cataloged over 100,000 vulnerabilities.[3]While the database was maintained by a 501(c)(3) non-profit public organization and volunteers, the data was prohibited for commercial use without a license. Despite that, many large commercial companies used the data in violation of the license without contributing employee volunteer time or financial compensation.[4] The project was started in August 2002 at theBlackhatandDEF CONConferences by several industry notables (includingH. D. Moore, rain.forest.puppy, and others). Under mostly-new management, the database officially launched to the public on March 31, 2004.[5]The original implementation was written in PHP by Forrest Rae (FBR). Later, the entire site was re-written in Ruby on Rails by David Shettler. TheOpen Security Foundation(OSF) was created to ensure the project's continuing support. Jake Kouns (Zel), Chris Sullo, Kelly Todd (AKA Lyger), David Shettler (AKA D2D), and Brian Martin (AKA Jericho) were project leaders for the OSVDB project, and held leadership roles in the OSF at various times. On 5 April 2016, the database was shut down, while the blog was initially continued by Brian Martin.[6]The reason for the shut down was the ongoing commercial but uncompensated use by security companies.[7] As of January 2012, vulnerability entry was performed by full-time employees of Risk Based Security,[8]who provided the personnel to do the work in order to give back to the community. Every new entry included a full title, disclosure timeline, description, solution (if known), classification metadata, references, products, and researcher who discovered the vulnerability (creditee). Originally, vulnerability disclosures posted in various security lists and web sites were entered into the database as a new entry in the New Data Mangler (NDM) queue. The new entry contained only a title and links to the disclosure. At that stage the page for the new entry didn't contain any detailed description of the vulnerability or any associated metadata. As time permitted, new entries were analyzed and refined, by adding a description of the vulnerability as well as a solution if available. This general activity was called "data mangling" and someone who performed this task a "mangler". Mangling was done by core or casual volunteers. Details submitted by volunteers were reviewed by the core volunteers, called "moderators", further refining the entry or rejecting the volunteer changes if necessary. New information added to an entry that was approved was then available to anyone browsing the site. Some of the key people that volunteered and maintainedOSVDB: Other volunteers who have helped in the past include:[9]
https://en.wikipedia.org/wiki/Open_Source_Vulnerability_Database
Interleaved deltas, or the SCCS weave, is a method used by the Source Code Control System to store all revisions of a file. All lines from all revisions are "woven" together in a single block of data, with interspersed control instructions indicating which lines are included in which revisions of the file. Interleaved deltas are traditionally implemented with line-oriented text files in mind, although nothing prevents the method from being applied to binary files as well. Interleaved deltas were first implemented by Marc Rochkind in SCCS in 1975. The design makes all versions available at the same time, so that it takes the same time to retrieve any revision. It also contains sufficient information to identify the author of each line (blaming) in one block.[1] On the other hand, because all revisions of a file are parsed, every operation grows slower as more revisions are added. The term interleaved delta was coined later, in 1982, by Walter F. Tichy, author of the Revision Control System, who compared the SCCS weave to his new reverse-delta mechanism in RCS.[2] In SCCS, a weave block can represent a file that contains the lines "foo" and "bar" in the first revision and the lines "bar" and "baz" in the second revision; a reconstructed example is sketched below. The string "^A" denotes a control-A character; the control lines beginning with it mark blocks of inserted and deleted lines, each tagged with the serial number of the delta that introduced the change.[3] The time it takes to extract any revision from such an interleaved delta block is proportional to the size of the archive. The size of the archive is the sum of the sizes of all distinct lines in all revisions. In order to extract a specific revision, an array of structures needs to be constructed, telling whether a specific block, tagged by a serial number in the interleaved deltas, will be copied to the output or not. The original SCCS implementation needs approximately 100 bytes of storage for each distinct serial number in the deltas in order to know how to extract a specific revision; an SCCS history file with one million deltas would thus need 100 MB of virtual memory to unpack. The size could be reduced by approximately 32 bytes per delta if no annotated file retrieval is needed. The main advantages of the weave method are the uniform retrieval time for any revision and the ability to annotate (blame) every line of a file in a single pass over the block. Bazaar intended to use interleaved deltas in 2006,[5] but the idea was dropped due to poor performance after it was actually implemented in bzr 0.1. Bazaar still provides a weave-style merge algorithm.[6]
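The weave block and the control-line descriptions originally shown here did not survive extraction. The following is a reconstruction of what such a block conventionally looks like for the example above, where ^AI n opens a group of lines inserted by the delta with serial number n, ^AD n opens a group deleted by delta n, and ^AE n closes the group opened for n; the exact rendering in the original article may have differed.

```
^AI 1
^AD 2
foo
^AE 2
bar
^AE 1
^AI 2
baz
^AE 2
```

Extracting revision 1 means keeping lines inside insertion groups for serial 1 and ignoring groups for serial 2, which yields "foo" and "bar"; extracting revision 2 activates both serials, so the deletion group removes "foo" and the second insertion group adds "baz", yielding "bar" and "baz". A single pass over the block, consulting the set of serial numbers that make up the requested revision, is enough, which is why retrieval time is proportional to the archive size rather than to the age of the revision.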
https://en.wikipedia.org/wiki/Interleaved_deltas
Source Code Control System(SCCS) is aversion control systemdesigned to track changes insource codeand other text files during the development of a piece of software. This allows the user to retrieve any of the previous versions of the original source code and the changes which are stored. It was originally developed atBell Labsbeginning in late 1972 byMarc Rochkindfor anIBM System/370computer runningOS/360.[1] A characteristic feature of SCCS is thesccsidstring that is embedded into source code, and automatically updated by SCCS for each revision.[2]This example illustrates its use in theC programming language: Thisstringcontains the file name, date, and can also contain a comment. After compilation, the string can be found in binary and object files by looking for the pattern@(#)and can be used to determine whichsource codefiles were used during compilation. Thewhatcommand is available to automate this search for version strings.[3] In 1972,Marc Rochkinddeveloped SCCS inSNOBOL4atBell Labsfor anIBM System/370computer runningOS/360MVT.[1]He rewrote SCCS in the C programming language for use underUNIX, then running on aPDP-11, in 1973. The first publicly released version was SCCS version 4 from February 18, 1977.[4]It was available with theProgrammer's Workbench(PWB) edition of theoperating system. Release 4 of SCCS was the first version that used a text-based history file format, earlier versions did use binary history file formats. Release 4 was no longer written or maintained by Marc Rochkind. Subsequently, SCCS was included inAT&T's commercialSystem IIIandSystem Vdistributions. It was not licensed with32V, the ancestor toBSD.[5]The SCCS command set is now part of theSingle UNIX Specification. SCCS was the dominant version control system for Unix until laterversion controlsystems, notably theRCSand laterCVS, gained more widespread adoption. Today, these early version control systems are generally considered obsolete, particularly in theopen-sourcecommunity, which has largely embraceddistributed version controlsystems. However, the SCCS file format is still used internally by a few newer version control programs, includingBitKeeperandTeamWare. The latter is a frontend to SCCS. Sablime[6]has been developed from a modified version of SCCS[7]but uses a history file format that is incompatible with SCCS. The SCCS file format uses a storage technique calledinterleaved deltas(or the weave[8]). This storage technique is now considered by manyversion controlsystem developers as foundational to advancedmergingand versioning techniques,[9]such as the "Precise Codeville" ("pcdv") merge. Apart from correctingYear 2000 problemsin 1999, no active development has taken place on the various UNIX vendor-specific SCCS versions.[10]In 2006,Sun Microsystems(today part ofOracle) released theirSolarisversion of SCCS asopen-sourceunder theCDDL licenseas part of their efforts to open-source Solaris.[11] The Source Code Control System (SCCS) is a system for controlling file and history changes. Software is typicallyupgradedto a new version by fixing bugs, optimizing algorithms and adding extra functions.[12]Changing software causes problems that require version control to solve.[1] SCCS was built to solve these problems. 
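The C example referred to near the beginning of this article was lost in extraction. A representative sccsid string (the file name, revision, and date are illustrative) looks like the following; after expansion it begins with the "@(#)" pattern that the what command searches for in compiled binaries.

```c
/* Illustrative only: a typical SCCS identification string embedded in a C file. */
static char sccsid[] = "@(#)example.c  1.4  2024/05/01";

/* Before keyword expansion, the source would typically contain a form such as:
 *   static char sccsid[] = "%W% %G%";
 * where SCCS expands %W% to "@(#)" plus the file name and revision (SID),
 * and %G% to the date of the delta.                                          */
```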
SCCS from AT&T had five major versions for the IBM OS and five major versions for UNIX[13]Two specific implementations using SCCS are: PDP 11 under Unix and IBM 370 under the OS.[1] SCCS consists of two parts: SCCS commands and SCCS files.[14]All basic operations (e.g., create, delete, edit) can be realized by SCCS commands.[14]SCCS files have a unique format prefixs., which is controlled by SCCS commands.[2] An SCCS file consists of three parts:[15] In SCCS, a delta is a single revision in an SCCS file. Deltas are stored in a delta table, so each SCCS file has its own record of changes.[15] Every operation of each SCCS file is tracked by flags. Their functions are as below:[15] SCCS uses three types of control records for keeping track of insertions and deletions applied in different deltas. They are the insertion control record, the deletion control record, and the end control record. Whenever a user changes some part of the text, a control record is inserted surrounding the change. The control records are stored in the body along with the original text records.[1] SCCS provides a set of commands in the form of macro invocations that perform or initiate source code management functions with a simple syntax, such as create, get, edit, prt.[16][17]It also provides access to the revision history of files under management. These commands are implemented as argument verbs to the driver programsccs. The sccs commandcreateuses the text of a source file to create a new history file. For example: The outputs are name, version and lines. The command is a macro that expands toadminto create the new history file followed bygetto retrieve the file. Edit a specific file. The command is a macro that expands toget -e. Check in new version and get the new version from sccs. The command is a macro that expands todeltato check in the new version file followed bygetto retrieve the file. The outputs are version and lines you want to get from specific file. This command produces a report of source code changes. MostUNIXversions include a version of SCCS, which, however, is often no longer actively developed.[18][better source needed] The lateJörg Schilling[de](who requested the release of SCCS in the early days of theOpenSolarisproject)[19]maintained aforkof SCCS[20][21]that is based on the OpenSolaris source code. It has received major feature enhancements but remains compatible with the original SCCS versions unless using the "new project" mode.[22] TheHeirloom Projectincludes a version of SCCS derived from theOpenSolarissource code[23]and maintained between December 2006 and April 2007.[24] GNUoffers the SCCS compatible programGNU CSSC("Compatibly Stupid Source Control"), which is occasionally used to convert SCCS archives to newer systems likeCVSorSubversion;[25]it is not a complete[26]SCCS implementation and not recommended for use in new projects, but mostly meant for converting to a modern version control system. 
Since the 1990s,many new version control systemshave been developed and become popular that are designed for managing projects with a large number of files and that offer advanced functionality such as multi-user operation, access control, automatic building, network support, release management anddistributed version control.BitkeeperandTeamWareuse the SCCS file format internally and can be considered successors to SCCS.[27][28] On BSD systems, the SCCSID is replaced by a RCSID starting and ending with$; the corresponding tool isident.[29]This system is originally used byRCSand added automatically on checkout. The resulting source code revision control identifiers are documented in theNetBSD[30]andFreeBSD[31]style guides for their own code bases. NetBSD defines the custom keyword$NetBSD: ...$while FreeBSD defines$FreeBSD: ...$and a macro renamed__FBSDID. The SRC version control system can also use the SCCS file format internally (orRCS's) and aims to provide a better user interface for SCCS while still managing only single-file projects.[32]
https://en.wikipedia.org/wiki/Source_Code_Control_System
In IBM terminology, a Program temporary fix or Product temporary fix (PTF), the expansion varying with the date of the documentation,[1][2] is a single bug fix, or group of fixes, distributed in a form ready for customers to install. A PTF normally follows an APAR (Authorized Program Analysis Report[3]); where an "APAR fix" was issued, the PTF "is a tested APAR"[4] or set of APAR fixes. However, if an APAR is resolved as "Fixed If Next" or "Permanent Restriction", there may be no PTF fixing it, only a subsequent release. Initially, installations had to install service via a semi-manual process.[5] Over time, IBM started to provide service aids such as IMAPTFLE[6] and utilities such as IEBEDIT[7] to simplify the installation of batches of PTFs. For OS/360 and successors, this culminated in System Modification Program (SMP) and System Modification Program/Extended (SMP/E). For VM, this culminated in Virtual Machine Serviceability Enhancements Staged (VM/SP SES) and VMSES/E. For DOS/360 and successors, this culminated in Maintain System History Program (MSHP). PTFs used to be distributed in a group on a so-called Program Update Tape (PUT) or Recommended Service Upgrade (RSU), approximately on a monthly basis. They can now be downloaded straight to the system through a direct connection to IBM support. In some instances IBM will release a "Cumulative PTF Pack", a large number of fixes which function best as a whole and are sometimes codependent. When this happens, IBM issues compact discs containing the entire PTF pack, which can be loaded directly onto the system from its media drive. One reason for the use of physical media is size, and the related (default) size limits: "By default, the /home file system on VIOS (Virtual I/O Server[8]) for System p is only 10GB in size",[9] so if the "Cumulative PTF Pack" is larger than that default, "If you try (to) FTP 17GB of ISO images you will run out of space." In z/OS, the PTFs are processed using SMP/E (System Modification Program/Extended) in several stages over a course of weeks. Each PTF may include HOLDDATA, in which case it is known as an exception SYSMOD. In rare cases an installation may install a single PTF, but normally it will install all available service except PTFs excluded by, e.g., aging policies or HOLDDATA. Details vary from installation to installation, but a simple service cycle involves receiving the service, applying it to a target system, testing, and then committing it; a sketch of the corresponding SMP/E statements appears at the end of this article. If the system is adversely affected by the service, a system administrator may sometimes selectively RESTORE (un-apply) it and seek further support from IBM. However, if no problems are found after the service is applied, it can be permanently installed, ACCEPTed (committed) to the system. These repairs to IBM software are often made in response to APARs submitted by customers and others and acted on by IBM, and are a common first step to resolving software errors. It is generally expected by the customer that the problem will be fully corrected in the next release (version) of the relevant product. At times[10] IBM software has a bug. Once IBM has ascertained the cause and suspects that a defect in a current release of an IBM program is responsible, a formal report confirming the existence of the issue is filed. This is referred to as an Authorized Program Analysis Report (APAR) (see "APARs and PTFs", IBM). There are at least two levels of fix:[11] the focus of the "APAR fix" is "to rectify the problem as soon as possible",[12] whereas the PTF "is a tested APAR ... The PTF 'closes' the APAR."
" Prior to that, an APAR is "a problem with an IBM program that is formally tracked until a solution is provided.”[4] Customers sometimes explain the acronym in a tongue-in-cheek manner aspermanent temporary fix[13]or more practicallyprobably this fixes, because they have the option to make the PTF a permanent part of the operating system if the patch fixes the problem. One explanation of Program Temporary Fix says it's temporary, just until the next ice age. Thissoftware-engineering-related article is astub. You can help Wikipedia byexpanding it.
https://en.wikipedia.org/wiki/IBM_Program_temporary_fix
Abug bounty programis a deal offered by many websites, organizations, and software developers by which individuals can receive recognition and compensation[1]for reportingbugs, especially those pertaining tosecurityvulnerabilities.[2]If no financial reward is offered, it is called avulnerability disclosure program.[3][4] These programs, which can be considered a form ofcrowdsourcedpenetration testing,[5]grant permission for unaffiliated individuals—called bug bounty hunters,[6]white hatsorethical hackers[7]—to find and report vulnerabilities.[3]If the developers discover andpatchbugs before the general public is aware of them, cyberattacks that might haveexploitedare no longer possible.[3] Participants in bug bounty programs come from a variety of countries, and although a primary motivation is monetary reward, there are a variety of other motivations for participating. Hackers could earn much more moneyfor sellingundisclosedzero-day vulnerabilitiesto brokers,spywarecompanies, or government agencies instead of the software vendor. If they search for vulnerabilities outside the scope of bug bounty programs, they might find themselves facing legal threats undercybercrimelaws. The scale of bug bounty programs increased dramatically in the late 2010s. Some large companies and organizations run and operate their own bug bounty programs, including Microsoft, Facebook, Google,Mozilla, theEuropean Union,[8]and theUnited States federal government.[9]Other companies offer bug bounties via platforms such asHackerOne. In 1851, Alfred Charles Hobbs was paid USD$20,000 (adjusted for inflation) to pick a lock.[10]In 1995,Netscapelaunched the first bug bounty program, for thebetaversion of its Netscape Navigator 2.0 browser.[10][11][12]Later on, other enterprises opened their own bug bounty programs. These were supplemented bycrowdsourcingplatforms that made it easier for professionals to find bug bounties.[10] Despite developers' goal of delivering a product that works entirely as intended, virtually allsoftwarecontains bugs.[13][5]If a bug creates a security risk, it is called avulnerability, and if the vendor is unaware of it, it is called azero-day.[14][15]Vulnerabilities vary in their ability to beexploitedby malicious actors. Some are not usable at all, while others can be used to disrupt the device with adenial of service attack. The most valuable allow the attacker toinjectand run their own code, without the user being aware of it.[16]Theharms of an attack can be severe.[17] Organizations seeking to improve security test their systems to see if they can be breached.[5]Many contract with external services that conductpenetration testing, but this is not enough to find all vulnerabilities, motivating some companies to supplement with crowdsourced information.[3]Many companies are skeptical of third-party reports,[18]afraid that these programs will increase malicious activity, cost too much money, or bring fraudulent reports. 
Alternatively, bug bounty programs might be ignored because of confidence in their application's security or in favor of other security measures.[19]Some studies have found that the cost per vulnerability found is much lower via bounty programs rather than by hiring software engineers to search for vulnerabilities.[18] The size of the reward offered varies on such factors such as the size of the company, the difficulty of finding the vulnerability, and how severe its effects could be if exploited.[6]Successful bug bounty hunters can often make more thansoftware developers.[20]Many bug bounty programs are focused onweb applications.[21] In August 2013, aPalestiniancomputer science student reported a vulnerability that allowed anyone to post a video on an arbitrary Facebook account. According to the email communication between the student and Facebook, he attempted to report the vulnerability using Facebook's bug bounty program but the student was misunderstood by Facebook's engineers. Later he exploited the vulnerability using the Facebook profile ofMark Zuckerberg, resulting in Facebook refusing to pay him a bounty.[22] Facebookstarted paying researchers who find and report security bugs by issuing them custom branded "White Hat" debit cards that can be reloaded with funds each time the researchers discover new flaws.[23] In 2016,Uberexperienced a security incident when an individual accessed the personal information of 57 million Uber users worldwide. The individual supposedly demanded a ransom of $100,000 in order to destroy rather than publish the data. In Congressional testimony, Uber CISO indicated that the company verified that the data had been destroyed before paying the $100,000.[24]Uber's Chief Information Security Officer expressed regret for not disclosing the incident in 2016. As part of their response, Uber worked withHackerOneto update their bug bounty program policies to explain good faith vulnerability research and disclosure.[25] Yahoo!was severely criticized for sending out Yahoo! T-shirts as reward to the Security Researchers for finding and reporting security vulnerabilities in Yahoo!.[26]WhenEcavareleased the first known bug bounty program forICSin 2013,[27][28]they were criticized for offering store credits instead of cash which does not incentivize security researchers.[29]Ecava explained that the program was intended to be initially restrictive and focused on the human safety perspective for the users ofIntegraXor SCADA, their ICS software.[27][28] Some bug bounties programs require researchers to sign anon-disclosure agreementto receive pay or safe harbor benefits from the bug bounty program. This practice has been criticized on ethical grounds as enabling the company to sweep knowledge of vulnerabilities under the rug.[30][31][32] Because submissions are open to anyone, a large number of reports (estimated at 50-70 percent forHackerOne, the largest platform) are invalid.[33][34]One study found that the largest number of reports were rejected as previously known vulnerabilities, followed byfalse positives, out-of-scope, duplicates, and for lack ofproof-of-concept. 
Another study found that bounty programs offering more money received a higher number of valid reports.[35]One cause of invalid reports is that it may be easier for hackers to submit a report rather than do additional work to check their solution.[36]Some bug bounty platforms, including HackerOne, have implemented measures to cut down on the number of invalid reports.[36]Bug bounty programs may be invite-only to trusted security researchers instead of public.[37]To validate the vulnerability and receive an award, the hacker usually has to create anexploitto prove that the vulnerability found is a genuinesecurity bug.[6]The most commonly reported vulnerabilities in bug bounty programs includeSQL injection,cross-site scripting(XSS), and design flaws.[38] Participants in bug bounty programs come from a variety of countries. In a survey of hackers on theHackerOneplatform, 19 percent gave their location as the United States.[32]Anyone can make reports, regardless of their educational background and age.[39]The majority of reports come from a relatively small number of hackers.[40]The number of reporters and reports has increased dramatically in the late 2010s.[41] Although the most-reported motivation of bug bounty participants is the financial reward from reporting,[42]other motivating factors include the potential for recognition, intellectual challenge, learning, and job opportunities.[43][3][7]A 2017 study published inJournal of Cybersecurityfound that newer bug bounty programs attracted more researchers, despite older ones offering higher financial rewards.[44] In October 2013,Googleannounced a major change to its Vulnerability Reward Program. Previously, it had been a bug bounty program covering many Google products. With the shift, however, the program was broadened to include a selection of high-riskfree softwareapplications andlibraries, primarily those designed fornetworkingor for low-leveloperating systemfunctionality. Submissions that Google found adherent to the guidelines would be eligible for rewards ranging from $500 to $3,133.70.[45][46]In 2017, Google expanded their program to cover vulnerabilities found in applications developed by third parties and made available through the Google Play Store.[47]Google's Vulnerability Rewards Program now includes vulnerabilities found in Google, Google Cloud, Android, and Chrome products, and rewards up to $31,337.[48] MicrosoftandFacebookpartnered in November 2013 to sponsor The Internet Bug Bounty, a program to offer rewards for reporting hacks and exploits for a broad range of Internet-related software.[49]In 2017,GitHuband TheFord Foundationsponsored the initiative, which is managed by volunteers including from Uber, Microsoft,[50]Adobe, HackerOne, GitHub, NCC Group, and Signal Sciences.[51] In March 2016,Peter Cookannounced the US federal government's first bug bounty program, the "Hack the Pentagon" program.[52] In 2019, TheEuropean Commissionannounced the EU-FOSSA 2 bug bounty initiative for popularopen sourceprojects, includingDrupal,Apache Tomcat,VLC,7-zipandKeePass. 
The project was co-facilitated by European bug bounty platform Intigriti and HackerOne and resulted in a total of 195 unique and valid vulnerabilities.[53] There are some platforms—the largest beingHackerOne—that run bug bounty programs on behalf of software vendors and pay rewards set by the vendor.[8]Others includeCobalt,Bugcrowd, and Synact.[54][55][56]Open Bug Bountyis a crowd security bug bounty program established in 2014 that allows individuals to post website and web application security vulnerabilities in the hope of a reward from affected website operators.[57] As of 2021[update], most quantitative research on bug bounty programs has focused on publicly accessible datasets. There has not been published research into bug bounties forsafety-critical systems, which have become increasingly connected to the Internet. Most of the existing research is quantitative and created by computer science experts, with a lack of multidisciplinary perspectives incorporating the insights of such fields as economics, law and philosophy.[42] Vulnerability discovery is similar in many respects tocyberattack. The actions of even well-intentioned hackers may breach criminal laws passed to prosecute cybercriminals. Most hackers are not legal experts and lack of knowledge of the law in their jurisdiction.[58]It is common for vulnerability discoverers to receive legal threats after disclosing a vulnerability.[59] Although nearly all bug bounty programs promise asafe harborfor reports complying with their policies,[58]if the discovered vulnerability does not fall into a previously established bug bounty program, the company involved could report it as an illegal cyberattack.[58][59]In China, some vulnerability reporters have been arrested and prosecuted, including the leaders ofWooYun—the oldest and largest vulnerability reporting platform in the country.[58] Not all companies respond positively to disclosures, as they can cause legal liability and operational overhead. It is not uncommon to receivecease-and-desistletters from software vendors after disclosing a vulnerability for free.[60]Some individuals who find a previously unknown,zero-day vulnerabilitydo not sell it to the vendor directly or indirectly via a third-party bug bounty program. According to one study, the most commonly cited reasons for not reporting a bug were threatening language on the website, lack of an obvious place to report, and lack of response to earlier bug reports.[61] Discoverers can earn more money—more than USD$1 million in some cases—by selling the vulnerability to brokers such asZerodium,spywarecompanies such asNSO Group, governments, or intelligence agencies. Government agencies may use the vulnerability to cause acyberattack, stockpile the vulnerability, or notify the vendor.[62][15][8]Some hackers also sell the vulnerability they found to a criminal group.[63]In 2015, the markets for government and crime were estimated at at least ten times larger than the bug bounty market.[62]
https://en.wikipedia.org/wiki/Bug_bounty_program
Locksmithingis the work of creating and bypassing locks. Locksmithing is a traditional trade and in many countries requires completion of anapprenticeship. The level of formal education legally required varies by country, ranging from no formal education to a training certificate awarded by an employer, or a fulldiplomafrom anengineeringcollege, along with time spent as anapprentice. Alockis a mechanism that secures buildings, rooms, cabinets, objects, or other storage facilities. A "smith" is a metalworker who shapes metal pieces, often using aforgeormould, into useful objects or to be part of a more complex structure. Thus locksmithing, as its name implies, is the assembly and designing of locks and their respective keys by hand. Most locksmiths use both automatic and manual cutting tools to mold keys, with many of these tools being powered by batteries or mains electricity. Locks have been constructed for over 2500 years, initially out of wood and later out of metal.[1]Historically, locksmiths would make the entire lock, working for hours hand cutting screws and doing much file-work. Lock designs became significantly more complicated in the 18th century, and locksmiths often specialized in repairing or designing locks. Although replacing lost keys for automobiles and homes, as well as rekeying locks for security purposes, remains an important part of locksmithing, a 1976 US Government publication noted that modern locksmiths are primarily involved in installing high-quality lock-sets and managing keying and key control systems. Most locksmiths also provide electronic lock services, such as programmingsmart keysfor transponder-equipped vehicles and implementingaccess controlsystems to protect individuals and assets for large institutions.[2]Many also specialise in other areas such as: InAustralia, prospective locksmiths are required to take a Technical and Further Education (TAFE) course in locksmithing, completion of which leads to issuance of a Level 3Australian Qualifications Frameworkcertificate, and complete an apprenticeship. They must also pass a criminal records check certifying that they are not currently wanted by the police. Apprenticeships can last one to four years. Course requirements are variable: there is a minimal requirements version that requires fewer total training units, and a fuller version that teaches more advanced skills, but takes more time to complete. Apprenticeship and course availability vary bystate or territory.[3] In Ireland, licensing for locksmiths was introduced in 2016,[4]with locksmiths having to obtain aPrivate Security Authoritylicense. The Irish Locksmith Organisation has 50 members with ongoing training to ensure all members are up-to-date with knowledge and skills. In the UK, there is no current government regulation for locksmithing, so effectively anyone can trade and operate as a locksmith with no skill or knowledge of the industry.[5] Fifteenstatesin theUnited Statesrequire licensure for locksmiths.Nassau CountyandNew York Cityin New York State, andHillsborough CountyandMiami-Dade Countyin Florida have their own licensing laws.[6]State and local laws are described in the table below. 
15 states require locksmith licensing: Alabama, California, Connecticut, Illinois, Louisiana, Maryland, Nebraska, New Jersey, Nevada, North Carolina, Oklahoma, Oregon, Tennessee, Texas and Virginia Locksmiths may be commercial (working out of a storefront), mobile (working out of a vehicle), institutional (employed by an institution) or investigatory (forensic locksmiths) or may specialize in one aspect of the skill, such as an automotive lock specialist, a master key system specialist or a safe technician.[2]Many locksmiths also work as security consultants, but not all security consultants possess locksmithing skills. Locksmiths are frequently certified in specific skill areas or to a level of skill within the trade. This is separate from certificates of completion of training courses. In determining skill levels, certifications from manufacturers or locksmith associations are usually more valid criteria than certificates of completion. Some locksmiths decide to call themselves "Master Locksmiths" whether they are fully trained or not, and some training certificates appear quite authoritative. The majority of locksmiths also work on any existing door hardware, not just locking mechanisms. This includes door closers, door hinges, electric strikes, frame repairs and other door hardware. The issue offull disclosurewas first raised in the context of locksmithing, in a 19th-century controversy regarding whether weaknesses in lock systems should be kept secret in the locksmithing community, or revealed to the public. According toA. C. Hobbs: A commercial, and in some respects a social doubt has been started within the last year or two, whether or not it is right to discuss so openly the security or insecurity of locks. Many well-meaning persons suppose that the discussion respecting the means for baffling the supposed safety of locks offers a premium for dishonesty, by showing others how to be dishonest. This is a fallacy. Rogues are very keen in their profession, and know already much more than we can teach them respecting their several kinds of roguery. Rogues knew a good deal about lock-picking long before locksmiths discussed it among themselves, as they have lately done. If a lock, let it have been made in whatever country, or by whatever maker, is not so inviolable as it has hitherto been deemed to be, surely it is to the interest of honest persons to know this fact, because the dishonest are tolerably certain to apply the knowledge practically; and the spread of the knowledge is necessary to give fair play to those who might suffer by ignorance. It cannot be too earnestly urged that an acquaintance with real facts will, in the end, be better for all parties. Some time ago, when the reading public was alarmed at being told how London milk is adulterated, timid persons deprecated the exposure, on the plea that it would give instructions in the art of adulterating milk; a vain fear, milkmen knew all about it before, whether they practised it or not; and the exposure only taught purchasers the necessity of a little scrutiny and caution, leaving them to obey this necessity or not, as they pleased.
https://en.wikipedia.org/wiki/Locksmith
MalwareMustDie (MMD), an NPO,[1][2] is a white hat hacking research workgroup that was launched in August 2012. MalwareMustDie is a registered nonprofit organization that serves as a medium for IT professionals and security researchers who have gathered to form a workflow for reducing malware infection on the internet. The group is known for its malware analysis blog.[3] It maintains a list[4] of the Linux malware research and botnet analyses it has completed. The team communicates information about malware in general and advocates for better detection of Linux malware.[5] MalwareMustDie is also known for its original analyses of newly emerged malware and botnets, for sharing the malware source code it finds[6] with law enforcement and the security industry, for operations to dismantle malicious infrastructure,[7][8] for technical analyses of specific malware's infection methods, and for reports on emerging cybercrime toolkits. Several notable internet threats were first discovered and announced by MalwareMustDie. MalwareMustDie has also been active in analyzing vulnerabilities used in client-side attack vectors. For example, Adobe Flash CVE-2013-0634 (the LadyBoyle SWF exploit)[56][57] and other undisclosed Adobe vulnerabilities in 2014 earned Security Acknowledgments for Independent Security Researchers from Adobe.[58] Another vulnerability researched by the team was the reverse engineering of a proof of concept for a backdoor (CVE-2016-6564) in one brand of Android phone that was later found to affect 2 billion devices.[59] The team's recent activity can still be seen in several noted threat disclosures, for example the "FHAPPI" state-sponsored malware attack,[60] the discovery of the first ARC processor malware,[61][62][63] and the "Strudel" threat analysis (a credential-stealing scheme).[64] The team continues to post new Linux malware research on Twitter and their subreddit. MalwareMustDie compares its mission to the Crusades, emphasizing the importance of fighting online threats out of a sense of moral duty. Many people have joined the group because they want to help the community by contributing to this effort.[65]
https://en.wikipedia.org/wiki/MalwareMustDie
Wireless identity theft, also known ascontactless identity theftorRFIDidentity theft, is a form of identity theft described as "the act of compromising an individual’s personal identifying information using wireless (radio frequency) mechanics."[1]Numerous articles have been written about wireless identity theft and broadcast television has produced several investigations of this phenomenon.[2][3][4]According toMarc Rotenbergof theElectronic Privacy Information Center, wireless identity theft is a serious issue as the contactless (wireless) card design is inherently flawed, increasing the vulnerability to attacks.[5] Wireless identity theft is a relatively new technique for gathering individuals' personal information from RF-enabled cards carried on a person in theiraccess control, credit, debit, or government issued identification cards.[6]Each of these cards carry a radio frequency identification chip which responds to certain radio frequencies. When these "tags" come into contact with radio waves, they respond with a slightly altered signal. The response can contain encoded personally identifying information, including the card holder's name, address, Social Security Number, phone number, and pertinent account or employee information.[7] Upon capturing (or ‘harvesting’) this data, one is then able to program other cards to respond in an identical fashion (‘cloning’). Many websites are dedicated to teaching people how to do this, as well as supplying the necessary equipment and software.[8][9] The financial industrial complex is migrating[as of?]from the use of magnetic stripes on debit and credit cards which technically require a swipe through a magnetic card swipe reader. The number of transactions per minute can be increased, and more transactions can be processed in a shorter time, therefore making for arguably shorter lines at the cashier.[10] Academic researchers and‘White-Hat’ hackershave analysed and documented the covert theft ofRFIDcredit card information and been met with both denials and criticisms from RFID card-issuing agencies.[1][11]Nevertheless, after public disclosure of information that could be stolen by low-cost jerry-rigged detectors which were used to scan cards in mailing envelopes (and in other studies also even via drive-by data attacks), the design of security features on various cards was upgraded to remove card owners’ names and other data.[1][11]Additionally, a number of completely unencrypted card designs were converted to encrypted data systems.[1][11] The issues raised in a 2006 report were of importance due to the tens of millions of cards that have already been issued.[1][11]Creditanddebit carddata could be stolen via special low cost radio scanners without the cards being physically touched or removed from their owner's pocket, purse or carry bag.[1][11]Among the findings of the 2006 research study "Vulnerabilities in First-Generation RFID-Enabled Credit Cards", and in reports by other white-hat hackers: In a related issue, privacy groups and individuals have also raised "Big Brother" concerns, where there is a threat to individuals from their aggregated information and even tracking of their movements by either card issuing agencies, other third party entities, and even by governments.[12]Industry observers have stated that ‘...RFID certainly has the potential to be the most invasive consumer technology ever.’[12] Credit card issuing agencies have issued denial statements regarding wireless identity theft or fraud and provided marketing information 
that either directly criticized or implied that: After the release of the study results, all of the credit card companies contacted during theNew York Times'investigative report said that they were removing card holder names from the data being transmitted with their new second generation RFID cards.[5][11] Certain official identification documents issued by the U.S. government,U.S. Passports, Passport Cards, and also enhanced driver's licenses issued by States of New York and Washington, contain RFID chips for the purpose of assisting those policing the U.S. border.[13]Various security issues have been identified with their use, including the ability ofblack hatsto harvest their identifier numbers at a distance and apply them to blank counterfeit documents and cards, thus assuming those people's identifiers.[13] Various issues and potential issues with their use have been identified, including privacy concerns. Although the RFID identifier number associated with each document is not supposed to include personal identification information, "...numbers evolve over time, and uses evolve over time, and eventually these things can reveal more information than we initially expect" stated Tadayoshi Kohno, an assistant professor of computer science, atUniversity of Washingtonwho participated in a study of such government issued documents.[13]
https://en.wikipedia.org/wiki/Wireless_identity_theft
An Adaptation kit upgrade (AKU, or Adaptation kit update) updates Microsoft's Windows Mobile operating systems for devices (excluding Windows 10 Mobile). It is a collection of updates, fixes and enhancements to the tools delivered to hardware device manufacturers to create or update devices based on a specific platform. The term is used by Microsoft to designate the current update version for a particular embedded platform such as Windows Mobile, which is used for personal digital assistants and smartphone devices. On the Windows Mobile platform, the AKU is to Windows Mobile what a service pack is to Microsoft Windows. Microsoft releases AKUs to provide device manufacturers with updates they can use to create new products or fix issues with older products. These enhancements are usually not available to the consumer or end user unless released as a firmware update by the vendor. The latest, and last, update for Windows Mobile from Microsoft was released in December 2010. Associated AKU builds: 23148, 23145 and 21840. Windows Mobile was succeeded by Windows Phone in the consumer electronics segment and by Windows Embedded Handheld in the industrial electronics segment. Windows Embedded Handheld starts with AKU 5.3.12. Windows Embedded Handheld is meant to carry on Windows Mobile for rugged handhelds and is only a change in AKU over Windows Mobile 6.5. Unlike Windows Phone, WEH is compatible with WM 6.5.[6][7] Associated AKU builds: 29049, 29036 and 29023.
https://en.wikipedia.org/wiki/Adaptation_kit_upgrade
Advanced Package Tool(APT) is afree-softwareuser interfacethat works withcore librariesto handle the installation and removal of software onDebianand Debian-basedLinux distributions.[4]APT simplifies the process of managing software onUnix-likecomputer systems by automating the retrieval, configuration and installation ofsoftware packages, either from precompiled files or bycompilingsource code.[4] APT is a collection of tools distributed in a package namedapt. A significant part of APT is defined in aC++library of functions; APT also includes command-line programs for dealing with packages, which use the library. Three such programs areapt,apt-getandapt-cache. They are commonly used in examples because they are simple and ubiquitous. Theaptpackage is of "important" priority in all current Debian releases, and is therefore included in a default Debian installation. APT can be considered afront endtodpkg, friendlier than the olderdselectfront end. Whiledpkgperforms actions on individual packages, APT manages relations (especially dependencies) between them, as well as sourcing and management of higher-level versioning decisions (release tracking andversion pinning). APT is often hailed as one of Debian's best features,[by whom?][5][6][7][8]which Debian developers attribute to the strict quality controls in Debian's policy.[9][10] A major feature of APT is the way it callsdpkg— it doestopological sortingof the list of packages to be installed or removed and callsdpkgin the best possible sequence. In some cases, it utilizes the--forceoptions ofdpkg. However, it only does this when it is unable to calculate how to avoid the reasondpkgrequires the action to be forced. The user indicates one or more packages to be installed. Each package name is phrased as just the name portion of the package, not a fully qualified filename (for instance, in a Debian system,libc6would be the argument provided, notlibc6_1.9.6-2.deb). Notably, APT automatically gets and installs packages upon which the indicated package depends (if necessary). This was an original distinguishing characteristic of APT-based package management systems, as it avoided installation failure due to missing dependencies, a type ofdependency hell. Another distinction is the retrieval of packages from remote repositories. APT uses a location configuration file (/etc/apt/sources.list) to locate the desired packages, which might be available on the network or a removable storage medium, for example, and retrieve them, and also obtain information about available (but not installed) packages. APT provides other command options to override decisions made by apt-get's conflict resolution system. One option is to force a particular version of a package. This can downgrade a package and render dependent software inoperable, so the user must be careful. Finally, theapt_preferencesmechanism allows the user to create an alternative installation policy for individual packages. The user can specify packages using a POSIXregular expression. APT searches its cached list of packages and lists the dependencies that must be installed or updated. APT retrieves, configures and installs the dependencies automatically. Triggersare the treatment of deferred actions. Usage modes ofaptandapt-getthat facilitate updating installed packages include: /etc/aptcontains the APT configuration folders and files. 
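APT's actual implementation lives in its C++ library, but the ordering idea described above can be illustrated with a small standalone sketch: given a handful of packages and their dependencies, a topological sort (Kahn's algorithm here) produces an installation order in which every dependency is handled before the packages that need it. The package names and the dependency graph below are invented for illustration and are not taken from any real archive.

#include <stdio.h>

#define N 4

/* Hypothetical packages: 0=libc, 1=libssl, 2=openssh-client, 3=curl */
static const char *name[N] = { "libc", "libssl", "openssh-client", "curl" };

/* deps[i][j] != 0 means package i depends on package j */
static const int deps[N][N] = {
    /* libc            */ {0, 0, 0, 0},
    /* libssl          */ {1, 0, 0, 0},
    /* openssh-client  */ {1, 1, 0, 0},
    /* curl            */ {1, 1, 0, 0},
};

int main(void)
{
    int indegree[N] = {0};
    int done[N] = {0};

    /* indegree[i] = number of unmet dependencies of package i */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            indegree[i] += deps[i][j];

    /* Kahn's algorithm: repeatedly pick a package whose dependencies are
       all satisfied, "install" it, and release its dependents. */
    for (int installed = 0; installed < N; installed++) {
        int pick = -1;
        for (int i = 0; i < N; i++)
            if (!done[i] && indegree[i] == 0) { pick = i; break; }
        if (pick < 0) { puts("dependency cycle"); return 1; }
        printf("install %s\n", name[pick]);
        done[pick] = 1;
        for (int i = 0; i < N; i++)
            if (!done[i] && deps[i][pick]) indegree[i]--;
    }
    return 0;
}

Running the sketch prints libc, libssl, openssh-client and then curl, mirroring the way APT hands dpkg a dependency-respecting sequence of operations.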
apt-configis the APT Configuration Query program.[12]apt-config dumpshows the configuration.[13] APT relies on the concept ofrepositoriesin order to find software and resolve dependencies. For APT, a repository is a directory containing packages along with an index file. This can be specified as a networked orCD-ROMlocation. As of 14 August 2021,[update]the Debian project keeps a central repository of over 50,000 software packages ready for download and installation.[15] Any number of additional repositories can be added to APT'ssources.listconfiguration file (/etc/apt/sources.list) and then be queried by APT. Graphical front ends often allow modifyingsources.listmore simply (apt-setup). Once a package repository has been specified (like during the system installation), packages in that repository can be installed without specifying a source and will be kept up-to-date automatically. In addition to network repositories,compact discsand other storage media (USB keydrive, hard disks...) can be used as well, usingapt-cdrom[16]or addingfile:/URI[17]to the source list file.apt-cdromcan specify a folder other than a CD-ROM, using the-doption (i.e. a hard disk or a USB keydrive). The Debian CDs available for download contain Debian repositories. This allows non-networked machines to be upgraded. One can also useapt-zip. Problems may appear when several sources offer the same package(s). Systems that have such possibly conflicting sources can use APT pinning to control which sources should be preferred. TheAPT pinningfeature allows users to force APT to choose particular versions of packages which may be available in different versions from different repositories. This allows administrators to ensure that packages are not upgraded to versions which may conflict with other packages on the system, or that have not been sufficiently tested for unwelcome changes. In order to do this, thepinsin APT'spreferencesfile (/etc/apt/preferences) must be modified,[18]although graphical front ends often make pinning simpler. Several otherfront endsto APT exist, which provide more advanced installation functions and more intuitive interfaces. These include: APT front ends can: APT front ends can list the dependencies of packages being installed or upgraded, ask the administrator if packages recommended or suggested by newly installed packages should be installed too, automatically install dependencies and perform other operations on the system such as removing obsolete files and packages. The original effort that led to theapt-getprogram was thedselectreplacement project known by its codenameDeity.[24]This project was commissioned in 1997 by Brian White, the Debian release manager at the time. The first functional version ofapt-getwas calleddpkg-getand was only intended to be a test program for the core library functions that would underpin the new user interface (UI).[25] Much of the original development of APT was done onInternet relay chat(IRC), so records have been lost. The 'Deity creation team' mailing list archives include only the major highlights. The 'Deity' name was abandoned as the official name for the project due to concerns over the religious nature of the name. The APT name was eventually decided after considerable internal and public discussion. Ultimately the name was proposed on IRC, accepted and then finalized on the mailing lists.[26] APT was introduced in 1998 and original test builds were circulated on IRC. 
The first Debian version that included it was Debian 2.1, released on 9 March 1999.[27] In the end the original goal of the Deity project of replacing thedselectuser interface was a failure. Work on the user interface portion of the project was abandoned (the user interface directories were removed from theconcurrent versions system) after the first public release ofapt-get. The response to APT as adselectmethod and a command line utility was so great and positive that all development efforts focused on maintaining and improving the tool. It was not until much later that several independent people built user interfaces on top oflibapt-pkg. Eventually, a new team picked up the project, began to build new features and released version 0.6 of APT which introduced the Secure APT feature, using strongcryptographicsigningto authenticate the package repositories.[28] APT was originally designed as a front end fordpkgto work with Debian's.debpackages. A version of APT modified to also work with theRPM Package Managersystem was released asAPT-RPM.[29]TheFinkproject has ported APT toMac OS Xfor some of its own package management tasks,[30]and APT is also available inOpenSolaris. apt-fileis a command, packaged separately from APT, to find which package includes a specific file, or to list all files included in a package on remote repositories.[31]
https://en.wikipedia.org/wiki/Advanced_Packaging_Tool
The generically named Macintosh Processor Upgrade Card[1] (code named STP[2]) is a central processing unit upgrade card sold by Apple Computer, designed for many Motorola 68040-powered Macintosh LC, Quadra and Performa models. The card contains a PowerPC 601 CPU and plugs into the 68040 CPU socket of the upgraded machine.[3] The Processor Upgrade Card required the original CPU to be plugged back into the card itself, which gave the machine the ability to run in its original 68040 configuration or, through a software configuration utility, to boot as a PowerPC 601 computer running at twice the original clock speed (50 MHz or 66 MHz) with 32 KB of L1 cache, 256 KB of L2 cache and a PowerPC floating point unit available to software. The Macintosh Processor Upgrade requires, and shipped with, System 7.5.[4] Development of the card started in July 1993.[2] The upgrade card was announced in January 1994 at the MacWorld Expo in San Francisco. Apple described the Macintosh Processor Upgrade Card as giving a performance increase of "two to four times" for general purposes, or "up to 10 times" for floating-point-intensive programs. Although the Macintosh Processor Upgrade did not plug into the LC Processor Direct Slot, LC PDS cards could not be fitted while the upgrade was installed because of the power it used and the space it took up.[3] This limited the usefulness of the Processor Upgrade Card, as internal ethernet, Apple IIe compatibility, video cards and other LC PDS expansion options had to be removed. The Macintosh Processor Upgrade Card allows a 68k Mac that can normally only run up to Mac OS 8.1 to be upgraded to Mac OS 8.6 or newer, as long as the card is always in use. If the user turns off or disconnects the card, the machine will display a Sad Mac, as newer versions of Mac OS are not compatible with 68k processors. The Macintosh Processor Upgrade Card can only run up to Mac OS 9.1, as 9.2 onward requires at least a G3 processor. DayStar Digital manufactured the Macintosh Processor Upgrade Card for Apple, sold the same card as their Daystar PowerCard 601-50/66, and also manufactured a Daystar PowerCard 601/100, which reached 100 MHz.[5] After Daystar went out of business, the 100 MHz model was manufactured and sold by Sonnet Technologies as their Sonnet Presto PPC 605.[6]
https://en.wikipedia.org/wiki/Macintosh_Processor_Upgrade_Card
In the jargon of computer programming, a source upgrade is a modification of a computer program's source code that adds new features and options, improves performance and stability, or fixes bugs and errors from the previous version. There are two popular types of source upgrades.
https://en.wikipedia.org/wiki/Source_upgrade
Windows Anytime Upgrade(Add Features to Windows) was a service byMicrosoftintroduced inWindows Vistathat facilitated upgrades across successiveeditions of Windows Vista.[1]Prices for upgrades purchased through Windows Anytime Upgrade were lower than prices for upgrades purchased at retail.[2][3]Windows Anytime Upgrade is included inWindows 7to allow users to upgrade toWindows 7 editions. InWindows 8andWindows 8.1it was rebranded as Add Features to Windows and was used to purchase an upgrade license for thePro editionor to addWindows Media Centerto an existing Pro installation. Support for this feature was discontinued on October 31, 2015.[4] Windows Anytime Upgrade was in development prior to thedevelopment reset of Windows Vista, then known by its codename "Longhorn." A preliminary version of the feature can be seen inbuild 4093. On February 26, 2006, Microsoft announced the editions of Windows Vista to be released to retail andoriginal equipment manufacturers(OEMs).[5][6]After this announcement, various technology-related outlets reported that Anytime Upgrade would enable users to upgrade to successive editions.[1][7][8] All editions of Windows Vista (excluding Enterprise) are stored on the same retail and OEM optical media—a license key for the edition purchased determines which edition is eligible for installation.[9]When first announced, Anytime Upgrade enabled users to purchase a digital license from an online merchant to upgrade their edition of Windows Vista. Once a license had been purchased, a user's product license, billing and other information would be stored within a user's digital locker at theWindows Marketplacedigital distributionplatform; this would allow a user to retain this information at anoff-sitelocation for reference purposes and to reinstall the operating system, if necessary.[10]A user could then initiate an upgrade to the edition for which the license was purchased either through components stored on thehard driveby the OEM of thepersonal computer, through an Anytime UpgradeDVDsupplied by the OEM, or through retail installation media compatible with Anytime Upgrade.[11]If none of these options were available, Anytime Upgrade provided an option for a user to purchase a DVD online and have it delivered by mail.[2][3] Microsoft also released retail packaging for Anytime Upgrade. The retail products were made available during the consumer launch of Windows Vista on January 30, 2007.[10]The initial version of these products included only an upgrade license, but this was later modified in May 2007 to include both a DVD and a product license.[12]In an effort to streamline the upgrade process, Microsoft announced that digital license distribution would cease on February 20, 2008; licenses purchased prior to this date would not be affected. As a result of this change, users would be required to purchase the aforementioned retail packaging in order to use Anytime Upgrade functionality[2][13]and Windows VistaService Pack 1omitted the option to purchase a license online.[14]DVDs for Anytime Upgrade were only produced for Windows Vista. Anytime Upgrade in Windows Vista performs a full reinstallation of the new product edition while retaining the user's data, programs, and settings.[15]This process can take a considerable amount of time, up to a few hours.[2] Anytime Upgrade in Windows 7 no longer performs a full reinstallation of Windows. 
Components for the upgraded editions are instead pre-installed directly in the operating system; a notable result of this change is that the speed of the upgrade process has been significantly increased. Microsoft stated that an upgrade should take approximately 10 minutes.[14]Anytime Upgrade also does not require physical media or additional software.[16][15]Instead, Windows 7 requires a user to purchase a license online, in a manner similar to the initial functionality that was later removed from Windows Vista starting with Service Pack 1.[14]Microsoft would also release Anytime Upgrade packaging for Windows 7 at retail. The packaging, however, would only include a license for the edition to be upgraded, as Anytime Upgrade in the operating system does not require physical media.[17] In Windows 8, the process has changed. Users will need to go to the Control Panel and search for Add Features to Windows. In Windows 10, this is located in Settings > System > About > Change Product Key or Upgrade Your Version of Windows. This process works the same way as in Windows 7, with a few exceptions: When first announced, Anytime Upgrade was available in theUnited States,Canada,EMEA,European Union,Norway,Switzerland, andJapan, with Microsoft stating that availability of the program would expand after launch of Windows Vista.[11]English version retail packaging for Anytime Upgrade was made available at the consumer launch of Windows Vista for North America andAsia-Pacificregions.[12] In 2009,Ars Technicareported that Anytime Upgrade retail packaging for Windows 7 may only have been available in regions without broadband Internet access or where retail packaging was ineligible to be offered.[17]Anytime Upgrade was available for Windows 7 in select regions.[18]
https://en.wikipedia.org/wiki/Windows_Anytime_Upgrade
TheYellowdog Updater Modified(YUM) is afree and open-sourcecommand-linepackage-managementutility for computers running theLinuxoperating systemusing theRPM Package Manager.[4]Though YUM has a command-line interface, several other tools providegraphical user interfacesto YUM functionality. YUM allows for automatic updates and package and dependency management on RPM-based distributions.[5]Like theAdvanced Package Tool(APT) fromDebian, YUM works withsoftware repositories(collections of packages), which can be accessed locally[6]or over a network connection. Under the hood, YUM depends onRPM, which is a packaging standard fordigital distributionof software, which automatically useshashesanddigital signaturesto verify the authorship and integrity of said software; unlike someapp stores, which serve a similar function, neither YUM nor RPM provide built-in support forproprietary restrictionson copying of packages by end-users. YUM is implemented as libraries in thePythonprogramming language, with a small set of programs that provide acommand-line interface.[7]GUI-based wrappers such as YUM Extender (yumex) also exist,[8]and has been adopted for Fedora Linux until version 22.[9] A rewrite of YUM namedDNFreplaced YUM as the default package manager inFedora 22[9](in 2015). This was required due to Fedora's transition from Python 2 to Python 3, which is not supported by YUM.[10]DNF also improves on YUM in several ways - improved performance, better resolution of dependency conflicts, and easier integration with other software applications.[11]FromRHEL 8, yum is an alias forDNF.[12] The original package manager, Yellowdog UPdater (YUP) was developed in 1999-2001 by Dan Burcaw, Bryan Stillwell, Stephen Edie, and Troy Bengegerdes atTerra Soft Solutions(under the leadership of then CEOKai Staats) as a back-end engine for a graphical installer ofYellow Dog Linux.[4] As a full rewrite of YUP, YUM evolved primarily to update and manageRed Hat Linuxsystems used at theDuke UniversityDepartment of Physics by Seth Vidal and Michael Stenner. Vidal continued to contribute to YUM until his death in aDurham, North Carolinabicycle accident on 8 July 2013.[13][14][15] In 2003Robert G. Brownat Duke published documentation for YUM.[7]Subsequent adopters included[7]Fedora,Rocky Linux,AlmaLinux,CentOS, and many other RPM-basedLinux distributions, includingYellow Dog Linuxitself, where YUM replaced the original YUP utility — last updated onSourceForgein 2001.[16]By 2005, it was estimated to be in use on over half of theLinuxmarket,[3]and by 2007 YUM was considered the "tool of choice" for RPM-based Linux distributions.[17] YUM aimed to address both the perceived deficiencies in the oldAPT-RPM,[18]and restrictions of the Red Hatup2datepackage management tool. 
YUM superseded up2date in Red Hat Enterprise Linux 5 and later.[19]Some authors refer to YUM as the Yellowdog Update Manager, or suggest that "Your Update Manager" would be more appropriate.[20][21]A basic knowledge of YUM is often included as a requirement for Linux system-administrator certification.[5]TheGNU General Public Licenseof YUM allows thefree and open-source softwareto be freely distributed and modified without any royalty, if other terms of the license are honored.[4] While yum was originally created for Linux, it has been ported to a number of other operating systems includingAIX,[22]IBM i,[23]andArcaOS.[24] YUMcan perform operations such as: The 2.x versions of YUM feature an additional interface for programming extensions in Python that allows the behavior of YUM to be altered. Certain plug-ins are installed by default.[26]A commonly installed[27]packageyum-utils, contains commands which use the YUM API, and many plugins. Graphical user interfaces, known as "front-ends", allow easier use of YUM.PackageKitand Yum Extender (yumex) are two examples.[8]Yum Extender was deprecated for a while when Fedora migrated to DNF,[28]but it was rewritten in Python 3 and Gtk 3 and has been in progress for development. This brand-new Yum Extender is available for Fedora 34 or newer.[29] Information about packages (as opposed to the packages themselves) is known asmetadata. These metadata are combined with information in each package to determine (and resolve, if possible) dependencies among the packages. The hope is to avoid a situation known asdependency hell. A separate tool,createrepo, sets up YUMsoftware repositories, generating the necessary metadata in a standardXMLformat (and theSQLitemetadata if given the -d option).[30][31]Themrepotool (formerly known as Yam) can help in the creation and maintenance of repositories.[32] YUM's XML repository, built with input from many other developers, quickly became the standard for RPM-based repositories.[31]Besides the distributions that use YUM directly,SUSE Linux10.1[33]added support for YUM repositories inYaST, and theOpen Build Servicerepositories use the YUM XML repository format metadata.[31] YUM automatically synchronizes the remote meta data to the local client, with other tools opting to synchronize only when requested by the user. Having automatic synchronization means that YUM cannot fail due to the user failing to run a command at the correct interval.[34][35]
https://en.wikipedia.org/wiki/Yellow_dog_Updater,_Modified
In computer programming, redundant code is source code or compiled code in a computer program that is unnecessary, for example because it recomputes a value that is already available or because its result is never used. A NOP instruction might be considered to be redundant code that has been explicitly inserted to pad out the instruction stream or introduce a time delay, for example to create a timing loop by "wasting time". Identifiers that are declared, but never referenced, are termed redundant declarations. The examples sketched below are in C. In the first, the second iX*2 expression is redundant code and can be replaced by a reference to the variable iY; alternatively, the definition int iY = iX*2 can instead be removed. The second concerns min/max macros: as a consequence of using the C preprocessor, the compiler only sees the expanded form, in which each operand expression appears twice. Because the use of min/max macros is very common, modern compilers are programmed to recognize and eliminate redundancy caused by their use. There is no redundancy, however, in the third example: if the initial call to rand(), modulo range, is greater than or equal to cutoff, rand() will be called a second time for a second computation of rand()%range, which may result in a value that is actually lower than the cutoff. The max macro thus may not produce the intended behavior for this function.
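The C examples referred to in the text above are rendered here as a minimal sketch; the function names (double_of, smaller_sum, bounded_random) and the exact macro definitions are illustrative assumptions rather than a quotation of the original listings.

#include <stdio.h>
#include <stdlib.h>

#define min(A, B) ((A) < (B) ? (A) : (B))
#define max(A, B) ((A) > (B) ? (A) : (B))

/* First example: the second iX*2 recomputes a value that iY already holds
   (the unused-variable warning this provokes is part of the point). */
int double_of(int iX)
{
    int iY = iX * 2;
    return iX * 2;          /* redundant; could simply return iY */
}

/* Second example: after preprocessing, the call below becomes
   ((a*a + b*b) < (c*c + d*d) ? (a*a + b*b) : (c*c + d*d)),
   so whichever sum is selected has been computed twice. */
int smaller_sum(int a, int b, int c, int d)
{
    return min(a*a + b*b, c*c + d*d);
}

/* Third example: no redundancy. The expansion of max can call rand() a
   second time, and that second value may fall below cutoff, so the macro
   may not behave as intended. */
int bounded_random(int cutoff, int range)
{
    return max(cutoff, rand() % range);
}

int main(void)
{
    printf("%d %d %d\n", double_of(3), smaller_sum(1, 2, 3, 4), bounded_random(5, 100));
    return 0;
}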
https://en.wikipedia.org/wiki/Redundant_code
Inmathematicsandcomputer science,[1]computer algebra, also calledsymbolic computationoralgebraic computation, is a scientific area that refers to the study and development ofalgorithmsandsoftwarefor manipulatingmathematical expressionsand othermathematical objects. Although computer algebra could be considered a subfield ofscientific computing, they are generally considered as distinct fields because scientific computing is usually based onnumerical computationwith approximatefloating point numbers, while symbolic computation emphasizesexactcomputation with expressions containingvariablesthat have no given value and are manipulated as symbols. Softwareapplications that perform symbolic calculations are calledcomputer algebra systems, with the termsystemalluding to the complexity of the main applications that include, at least, a method to represent mathematical data in a computer, a userprogramming language(usually different from the language used for the implementation), a dedicated memory manager, auser interfacefor the input/output of mathematical expressions, and a large set ofroutinesto perform usual operations, like simplification of expressions,differentiationusing thechain rule,polynomial factorization,indefinite integration, etc. Computer algebra is widely used to experiment in mathematics and to design the formulas that are used in numerical programs. It is also used for complete scientific computations, when purely numerical methods fail, as inpublic key cryptography, or for somenon-linearproblems. Some authors distinguishcomputer algebrafromsymbolic computation, using the latter name to refer to kinds of symbolic computation other than the computation with mathematicalformulas. Some authors usesymbolic computationfor the computer-science aspect of the subject andcomputer algebrafor the mathematical aspect.[2]In some languages, the name of the field is not a direct translation of its English name. Typically, it is calledcalcul formelin French, which means "formal computation". This name reflects the ties this field has withformal methods. Symbolic computation has also been referred to, in the past, assymbolic manipulation,algebraic manipulation,symbolic processing,symbolic mathematics, orsymbolic algebra, but these terms, which also refer to non-computational manipulation, are no longer used in reference to computer algebra. There is nolearned societythat is specific to computer algebra, but this function is assumed by thespecial interest groupof theAssociation for Computing MachinerynamedSIGSAM(Special Interest Group on Symbolic and Algebraic Manipulation).[3] There are several annual conferences on computer algebra, the premier beingISSAC(International Symposium on Symbolic and Algebraic Computation), which is regularly sponsored by SIGSAM.[4] There are several journals specializing in computer algebra, the top one beingJournal of Symbolic Computationfounded in 1985 byBruno Buchberger.[5]There are also several other journals that regularly publish articles in computer algebra.[6] Asnumerical softwareis highly efficient for approximatenumerical computation, it is common, in computer algebra, to emphasizeexactcomputation with exactly represented data. Such an exact representation implies that, even when the size of the output is small, the intermediate data generated during a computation may grow in an unpredictable way. 
This behavior is calledexpression swell.[7]To alleviate this problem, various methods are used in the representation of the data, as well as in the algorithms that manipulate them.[8] The usual number systems used innumerical computationarefloating pointnumbers andintegersof a fixed, bounded size. Neither of these is convenient for computer algebra, due to expression swell.[9]Therefore, the basic numbers used in computer algebra are the integers of the mathematicians, commonly represented by an unbounded signed sequence ofdigitsin somebase of numeration, usually the largest base allowed by themachine word. These integers allow one to define therational numbers, which areirreducible fractionsof two integers. Programming an efficient implementation of the arithmetic operations is a hard task. Therefore, most freecomputer algebra systems, and some commercial ones such asMathematicaandMaple,[10][11]use theGMP library, which is thus ade factostandard. Except fornumbersandvariables, everymathematical expressionmay be viewed as the symbol of an operator followed by asequenceof operands. In computer-algebra software, the expressions are usually represented in this way. This representation is very flexible, and many things that seem not to be mathematical expressions at first glance, may be represented and manipulated as such. For example, an equation is an expression with "=" as an operator, and a matrix may be represented as an expression with "matrix" as an operator and its rows as operands. Even programs may be considered and represented as expressions with operator "procedure" and, at least, two operands, the list of parameters and the body, which is itself an expression with "body" as an operator and a sequence of instructions as operands. Conversely, any mathematical expression may be viewed as a program. For example, the expressiona+bmay be viewed as a program for the addition, withaandbas parameters. Executing this program consists ofevaluatingthe expression for given values ofaandb; if they are not given any values, then the result of the evaluation is simply its input. This process of delayed evaluation is fundamental in computer algebra. For example, the operator "=" of the equations is also, in most computer algebra systems, the name of the program of the equality test: normally, the evaluation of an equation results in an equation, but, when an equality test is needed, either explicitly asked by the user through an "evaluation to a Boolean" command, or automatically started by the system in the case of a test inside a program, then the evaluation to a Boolean result is executed. As the size of the operands of an expression is unpredictable and may change during a working session, the sequence of the operands is usually represented as a sequence of eitherpointers(like inMacsyma)[13]or entries in ahash table(like inMaple). The raw application of the basic rules ofdifferentiationwith respect toxon the expressionaxgives the result A simpler expression than this is generally desired, and simplification is needed when working with general expressions. This simplification is normally done throughrewriting rules.[14]There are several classes of rewriting rules to be considered. The simplest are rules that always reduce the size of the expression, likeE−E→ 0orsin(0) → 0. They are systematically applied in computer algebra systems. A difficulty occurs withassociative operationslike addition and multiplication. 
The standard way to deal with associativity is to consider that addition and multiplication have an arbitrary number of operands; that is, thata+b+cis represented as"+"(a,b,c). Thusa+ (b+c)and(a+b) +care both simplified to"+"(a,b,c), which is displayeda+b+c. In the case of expressions such asa−b+c, the simplest way is to systematically rewrite−E,E−F,E/Fas, respectively,(−1)⋅E,E+ (−1)⋅F,E⋅F−1. In other words, in the internal representation of the expressions, there is no subtraction nor division nor unary minus, outside the representation of the numbers. Another difficulty occurs with thecommutativityof addition and multiplication. The problem is to quickly recognize thelike termsin order to combine or cancel them. Testing every pair of terms is costly with very long sums and products. To address this,Macsymasorts the operands of sums and products into an order that places like terms in consecutive places, allowing easy detection. InMaple, ahash functionis designed for generating collisions when like terms are entered, allowing them to be combined as soon as they are introduced. This allows subexpressions that appear several times in a computation to be immediately recognized and stored only once. This saves memory and speeds up computation by avoiding repetition of the same operations on identical expressions. Some rewriting rules sometimes increase and sometimes decrease the size of the expressions to which they are applied. This is the case for thedistributive lawortrigonometric identities. For example, the distributive law allows rewriting(x+1)4→x4+4x3+6x2+4x+1{\displaystyle (x+1)^{4}\rightarrow x^{4}+4x^{3}+6x^{2}+4x+1}and(x−1)(x4+x3+x2+x+1)→x5−1.{\displaystyle (x-1)(x^{4}+x^{3}+x^{2}+x+1)\rightarrow x^{5}-1.}As there is no way to make a good general choice of applying or not such a rewriting rule, such rewriting is done only when explicitly invoked by the user. For the distributive law, the computer function that applies this rewriting rule is typically called "expand". The reverse rewriting rule, called "factor", requires a non-trivial algorithm, which is thus a key function in computer algebra systems (seePolynomial factorization). Some fundamental mathematical questions arise when one wants to manipulatemathematical expressionsin a computer. We consider mainly the case of themultivariaterational fractions. This is not a real restriction, because, as soon as theirrational functionsappearing in an expression are simplified, they are usually considered as new indeterminates. For example, is viewed as a polynomial insin⁡(x+y){\displaystyle \sin(x+y)}andlog⁡(z2−5){\displaystyle \log(z^{2}-5)}. There are two notions of equality formathematical expressions.Syntactic equalityis the equality of their representation in a computer. This is easy to test in a program.Semantic equalityis when two expressions represent the same mathematical object, as in It is known fromRichardson's theoremthat there may not exist an algorithm that decides whether two expressions representing numbers are semantically equal if exponentials and logarithms are allowed in the expressions. Accordingly, (semantic) equality may be tested only on some classes of expressions such as thepolynomialsandrational fractions. To test the equality of two expressions, instead of designing specific algorithms, it is usual to put expressions in somecanonical formor to put their difference in anormal form, and to test the syntactic equality of the result. 
In computer algebra, "canonical form" and "normal form" are not synonymous.[15]Acanonical formis such that two expressions in canonical form are semantically equal if and only if they are syntactically equal, while anormal formis such that an expression in normal form is semantically zero only if it is syntactically zero. In other words, zero has a unique representation as an expression in normal form. Normal forms are usually preferred in computer algebra for several reasons. Firstly, canonical forms may be more costly to compute than normal forms. For example, to put a polynomial in canonical form, one has to expand every product through thedistributive law, while it is not necessary with a normal form (see below). Secondly, it may be the case, like for expressions involving radicals, that a canonical form, if it exists, depends on some arbitrary choices and that these choices may be different for two expressions that have been computed independently. This may make the use of a canonical form impractical. Early computer algebra systems, such as theENIACat theUniversity of Pennsylvania, relied onhuman computersor programmers to reprogram it between calculations, manipulate its many physical modules (or panels), and feed its IBM card reader.[16]Female mathematicians handled the majority of ENIAC programming human-guided computation:Jean Jennings,Marlyn Wescoff,Ruth Lichterman,Betty Snyder,Frances Bilas, andKay McNultyled said efforts.[17] In 1960,John McCarthyexplored an extension ofprimitive recursive functionsfor computing symbolic expressions through theLispprogramming language while at theMassachusetts Institute of Technology.[18]Though his series on "Recursive functions of symbolic expressions and their computation by machine" remained incomplete,[19]McCarthy and his contributions to artificial intelligence programming and computer algebra via Lisp helped establishProject MACat the Massachusetts Institute of Technology and the organization that later became theStanford AI Laboratory(SAIL) atStanford University, whose competition facilitated significant development in computer algebra throughout the late 20th century. Early efforts at symbolic computation, in the 1960s and 1970s, faced challenges surrounding the inefficiency of long-known algorithms when ported to computer algebra systems.[20]Predecessors to Project MAC, such asALTRAN, sought to overcome algorithmic limitations through advancements in hardware and interpreters, while later efforts turned towards software optimization.[21] A large part of the work of researchers in the field consisted of revisiting classicalalgebrato increase itseffectivenesswhile developingefficient algorithmsfor use in computer algebra. An example of this type of work is the computation ofpolynomial greatest common divisors, a task required to simplify fractions and an essential component of computer algebra. Classical algorithms for this computation, such asEuclid's algorithm, proved inefficient over infinite fields; algorithms fromlinear algebrafaced similar struggles.[22]Thus, researchers turned to discovering methods of reducing polynomials (such as those over aring of integersor aunique factorization domain) to a variant efficiently computable via a Euclidean algorithm. For a detailed definition of the subject: For textbooks devoted to the subject:
https://en.wikipedia.org/wiki/Simplification_(symbolic_computation)
In compiler theory, partial redundancy elimination (PRE) is a compiler optimization that eliminates expressions that are redundant on some but not necessarily all paths through a program. PRE is a form of common subexpression elimination. An expression is called partially redundant if the value computed by the expression is already available on some but not all paths through a program to that expression. An expression is fully redundant if the value computed by the expression is available on all paths through the program to that expression. PRE can eliminate partially redundant expressions by inserting the partially redundant expression on the paths that do not already compute it, thereby making the partially redundant expression fully redundant. For instance, in the code sketched below, the expression x+4 assigned to z is partially redundant because it is computed twice if some_condition is true. PRE would perform code motion on the expression to yield the optimized version shown in the same sketch. An interesting property of PRE is that it performs (a form of) common subexpression elimination and loop-invariant code motion at the same time.[1][2] In addition, PRE can be extended to eliminate injured partial redundancies, thereby effectively performing strength reduction. This makes PRE one of the most important optimizations in optimizing compilers. Traditionally, PRE is applied to lexically equivalent expressions, but recently formulations of PRE based on static single assignment form have been published that apply the PRE algorithm to values instead of expressions, unifying PRE and global value numbering.
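A minimal C sketch of the example described above, with invented function names and placeholder comments standing in for the code that leaves x unchanged; the first function shows the partially redundant computation of x+4, the second the result of the code motion PRE performs.

#include <stdio.h>

/* Before PRE: x + 4 is computed in the true branch and again afterwards,
   so the assignment to z is partially redundant (redundant only on the
   paths where some_condition holds). */
int before_pre(int some_condition, int x)
{
    int y = 0, z;
    if (some_condition) {
        /* ...code that does not alter x... */
        y = x + 4;
    }
    z = x + 4;              /* partially redundant */
    return y + z;
}

/* After PRE: the expression is inserted on the path that did not already
   compute it, making it fully redundant, so the later occurrence can
   simply reuse the temporary t. */
int after_pre(int some_condition, int x)
{
    int y = 0, z, t;
    if (some_condition) {
        /* ...code that does not alter x... */
        t = x + 4;
        y = t;
    } else {
        t = x + 4;          /* inserted computation */
    }
    z = t;                  /* no recomputation */
    return y + z;
}

int main(void)
{
    printf("%d %d\n", before_pre(1, 10), after_pre(1, 10));  /* both print 28 */
    return 0;
}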
https://en.wikipedia.org/wiki/Partial-redundancy_elimination
In propositional logic, conjunction elimination (also called and elimination, ∧ elimination,[1] or simplification)[2][3][4] is a valid immediate inference, argument form and rule of inference which makes the inference that, if the conjunction A and B is true, then A is true, and B is true. The rule makes it possible to shorten longer proofs by deriving one of the conjuncts of a conjunction on a line by itself. An example in English: from "it is raining and it is cold", one may infer "it is raining". The rule consists of two separate sub-rules, which can be expressed in formal language as: from P ∧ Q infer P, and from P ∧ Q infer Q. The two sub-rules together mean that, whenever an instance of "P ∧ Q" appears on a line of a proof, either "P" or "Q" can be placed on a subsequent line by itself. The above example in English is an application of the first sub-rule. The conjunction elimination sub-rules may be written in sequent notation as P ∧ Q ⊢ P and P ∧ Q ⊢ Q, where ⊢ is a metalogical symbol meaning that P is a syntactic consequence of P ∧ Q, and Q is also a syntactic consequence of P ∧ Q, in some logical system; and expressed as the truth-functional tautologies or theorems of propositional logic (P ∧ Q) → P and (P ∧ Q) → Q, where P and Q are propositions expressed in some formal system.
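The two sub-rules correspond to the left and right projections of a conjunction in a proof assistant; a minimal sketch in Lean 4, where the theorem statements and the hypothesis name h are purely illustrative:

-- Conjunction elimination as the two projections of a conjunction.
example (P Q : Prop) (h : P ∧ Q) : P := h.left   -- first sub-rule
example (P Q : Prop) (h : P ∧ Q) : Q := h.right  -- second sub-rule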
https://en.wikipedia.org/wiki/Conjunction_elimination
In computer science, dynamic software updating (DSU) is a field of research pertaining to upgrading programs while they are running. DSU is not currently widely used in industry. However, researchers have developed a wide variety of systems and techniques for implementing DSU. These systems are commonly tested on real-world programs. Current operating systems and programming languages are typically not designed with DSU in mind. As such, DSU implementations commonly either utilize existing tools or implement specialty compilers. These compilers preserve the semantics of the original program, but instrument either the source code or object code to produce a dynamically updateable program. Researchers compare DSU-capable variants of programs to the original program to assess safety and performance overhead. Any running program can be thought of as a tuple (δ, P), where δ is the current program state and P is the current program code. Dynamic software updating systems transform a running program (δ, P) to a new version (δ′, P′). In order to do this, the state must be transformed into the representation P′ expects. This requires a state transformer function S. Thus, DSU transforms a program (δ, P) to (S(δ), P′). An update is considered valid if and only if the running program (S(δ), P′) can be reduced to a point (δ′, P′) that is reachable from the starting point of the new version of the program, (δ_init, P′).[1] The location in a program where a dynamic update occurs is referred to as an update point. Existing DSU implementations vary widely in their treatment of update points. In some systems, such as UpStare and PoLUS, an update can occur at any time during execution. Ginseng's compiler will attempt to infer good locations for update points, but can also use programmer-specified update points. Kitsune and Ekiden require developers to manually specify and name all update points. Updating systems differ in the types of program changes that they support. For example, Ksplice only supports code changes in functions, and does not support changes to state representation. This is because Ksplice primarily targets security changes, rather than general updates. In contrast, Ekiden can update a program to any other program capable of being executed, even one written in a different programming language. System designers can extract valuable performance or safety assurances by limiting the scope of updates. For example, any update safety check limits the scope of updates to updates which pass that safety check. The mechanism used to transform code and state influences what kinds of updates a system will support. DSU systems, as tools, can also be evaluated on their ease of use and clarity to developers. Many DSU systems, such as Ginseng, require programs to pass various static analyses. While these analyses prove properties of programs that are valuable for DSU, they are by nature sophisticated and difficult to understand. DSU systems that do not use a static analysis might require use of a specialized compiler. Some DSU systems require neither static analysis nor specialty compilers. Programs that are updated by a DSU system are referred to as target programs.
Academic publications of DSU systems commonly include several target programs as case studies.vsftpd,OpenSSH,PostgreSQL,Tor,Apache,GNU Zebra,memcached, andRedisare all dynamic updating targets for various systems. Since few programs are written with support for dynamic updating in mind, retrofitting existing programs is a valuable means of evaluating a DSU system for practical use. The problem space addressed by dynamic updating can be thought of as an intersection of several others. Examples includecheckpointing,dynamic linking, andpersistence. As an example, a database that must bebackward-compatiblewith previous versions of its on-disk file format, must accomplish the same type of state transformation expected of a dynamic updating system. Likewise, a program that has a plugin architecture, must be able to load and execute new code at runtime. Similar techniques are sometimes also employed for the purpose ofdynamic dead-code eliminationto remove conditionallydeadorunreachable codeat load or runtime, and recombine the remaining code to minimize itsmemory footprintor improve speed.[2][3] The earliest precursor to dynamic software updating isredundant systems. In a redundant environment, spare systems exist ready to take control of active computations in the event of a failure of the main system. These systems contain a main machine and ahot spare. The hot spare would be periodically seeded with acheckpointof the primary system. In the event of a failure, the hot spare would take over, and the main machine would become the new hot spare. This pattern can be generalized to updating. In the event of an update, the hot spare would activate, the main system would update, and then the updated system would resume control. The earliest true Dynamic Software Updating system isDYMOS(DynamicModificationSystem).[4]Presented in 1983 in the PhD dissertation of Insup Lee, DYMOS was a fully integrated system that had access to an interactive user interface, a compiler and runtime for aModulavariant, and source code. This enabled DYMOS to type-check updates against the existing program. DSU systems must load new code into a running program, and transform existing state into a format that is understood by the new code. Since many motivational use-cases of DSU are time-critical (for example, deploying a security fix on a live and vulnerable system), DSU systems must provide adequateupdate availability. Some DSU systems also attempt to ensure that updates are safe before applying them. There is no one canonical solution to any of these problems. Typically, a DSU system that performs well in one problem area does so at a trade-off to others. For example, empirical testing of dynamic updates indicates that increasing the number of update points results in an increased number of unsafe updates.[5] Most DSU systems usesubroutinesas the unit of code for updates; however, newer DSU systems implement whole-program updates.[6][7] If the target program is implemented in avirtual machinelanguage, the VM can use existing infrastructure to load new code, since modern virtual machines support runtime loading for other use cases besides DSU (mainlydebugging). TheHotSpotJVMsupports runtime code loading, and DSU systems targetingJava (programming language)can utilize this feature. In native languages such asCorC++, DSU systems can use specialized compilers that insert indirection into the program. At update time, this indirection is updated to point to the newest version. 
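A minimal C sketch of the indirection-plus-state-transformer idea described above; the struct layouts, function names and the placement of the update point are invented for illustration and do not correspond to any particular DSU system.

#include <stdio.h>
#include <string.h>

/* Old-version state representation */
struct state_v1 { int count; };
/* New-version state adds a field, so a transformer is needed */
struct state_v2 { int count; char label[16]; };

static struct state_v1 s1 = { 41 };
static struct state_v2 s2;

static void step_v1(void) { s1.count++; printf("v1: %d\n", s1.count); }
static void step_v2(void) { s2.count++; printf("v2: %d (%s)\n", s2.count, s2.label); }

/* All calls go through this indirection, which a DSU runtime could
   retarget while the program runs. */
static void (*step)(void) = step_v1;

/* State transformer S: maps a v1 state into the v2 representation */
static void transform_state(void)
{
    s2.count = s1.count;
    strcpy(s2.label, "upgraded");
}

static void dynamic_update(void)
{
    transform_state();   /* produce S(delta) for the new code P' ...        */
    step = step_v2;      /* ...then point the indirection at the new code   */
}

int main(void)
{
    step();              /* runs the old version */
    dynamic_update();    /* update point: step_v1 is not on the stack here  */
    step();              /* runs the new version on the transformed state   */
    return 0;
}

In this sketch the update is applied only while step_v1 is not on the call stack, which corresponds to the activeness-safety condition discussed later in the article.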
If a DSU system does not use a compiler to insert these indirections statically, it can insert them at runtime with binary rewriting. Binary rewriting is the process of writing low-level code into the memory image of a running native program to re-direct functions. While this requires no static analysis of a program, it is highly platform-dependent. Ekiden and Kitsune load new program code by starting an entirely new program, either through fork-exec or dynamic loading. The existing program state is then transferred to the new program space.[6][7] During an update, program state must be transformed from the original representation to the new version's representation. This is referred to as state transformation. A function which transforms a state object or group of objects is referred to as a transformer function or state transformer. DSU systems can either attempt to synthesize transformer functions, or require that the developer manually supply them. Some systems mix these approaches, inferring some elements of transformers while requiring developer input on others. These transformer functions can either be applied to program state lazily, as each piece of old-version state is accessed, or eagerly, transforming all state at update time. Lazy transformation ensures that the update will complete in constant time, but also incurs steady-state overhead on object access. Eager transformation incurs more expense at the time of update, requiring the system to stop the world while all transformers run. However, eager transformation allows compilers to fully optimize state access, avoiding the steady-state overhead involved with lazy transformation. Most DSU systems attempt to show some safety properties for updates. The most common variant of safety checking is type safety, where an update is considered safe if it does not result in any new code operating on an old state representation, or vice versa. Type safety is typically checked by showing one of two properties, activeness safety or cons-freeness safety. A program is considered activeness-safe if no updated function exists on the call stack at update time. This proves safety because control can never return to old code that would access new representations of data. Cons-freeness is another way to prove type safety: a section of code is considered safe if it does not access state of a given type in a way that requires knowledge of the type representation. Such code can be said to not access the state concretely, while it may access the state abstractly. It is possible to prove or disprove cons-freeness for all types in any section of code, and the DSU system Ginseng uses this to prove type safety.[8][9] If a function is proven cons-free, it can be updated even if it is live on the stack, since it will not cause a type error by accessing state using the old representation. Empirical analysis of cons-freeness and activeness safety by Hayden et al. shows that both techniques permit most correct updates and deny most erroneous updates. However, manually selecting update points results in zero update errors, and still allows frequent update availability.[5] DYMOS is notable in that it was the earliest proposed DSU system. DYMOS consists of a fully integrated environment for programs written in a derivative of Modula, giving the system access to a command interpreter, source code, compiler, and runtime environment, similar to a REPL. In DYMOS, updates are initiated by a user executing a command in the interactive environment.
This command includes directives specifying when an update can occur, calledwhen-conditions. The information available to DYMOS enables it to enforce type-safety of updates with respect to the running target program.[4] Kspliceis a DSU system that targets only theLinux kernel, making itself one of the specialized DSU systems that support anoperating system kernelas the target program. Ksplice uses source-leveldiffsto determine changes between current and updated versions of the Linux kernel, and then uses binary rewriting to insert the changes into the running kernel.[10]Ksplice was maintained by a commercial venture founded by its original authors, Ksplice Inc., which was acquired byOracle Corporationin July 2011.[11]Ksplice is used on a commercial basis and exclusively in theOracle Linuxdistribution.[12] SUSEdevelopedkGraftas an open-source alternative for live kernel patching, andRed Hatdid likewise withkpatch. They both allow function-level changes to be applied to a running Linux kernel, while relying on live patching mechanisms established byftrace. The primary difference between kGraft and kpatch is the way they ensure runtime consistency of the updated code sections whilehot patchesare applied. kGraft and kpatch were submitted for inclusion into theLinux kernel mainlinein April 2014 and May 2014, respectively,[13][14]and the minimalistic foundations for live patching were merged into the Linux kernel mainline in kernel version 4.0, which was released on April 12, 2015.[15] Since April 2015, there is ongoing work on porting kpatch and kGraft to the common live patching core provided by the Linux kernel mainline. However, implementation of the function-level consistency mechanisms, required for safe transitions between the original and patched versions of functions, has been delayed because thecall stacksprovided by the Linux kernel may be unreliable in situations that involveassembly codewithout properstack frames; as a result, the porting work remains in progress as of September 2015[update]. In an attempt to improve the reliability of kernel's call stacks, a specialized sanity-checkstacktooluserspace utility has also been developed with the purpose of checking kernel's compile-timeobject filesand ensuring that the call stack is always maintained; it also opens up a possibility for achieving more reliable call stacks as part of thekernel oopsmessages.[16][17] Ginseng is a general-purpose DSU system. It is the only DSU system to use thecons-freenesssafety technique, allowing it to update functions that are live on the stack as long as they do not make concrete accesses to updated types. Ginseng is implemented as asource-to-source compilerwritten using theC Intermediate Languageframework inOCaml. This compiler inserts indirection to all function calls and type accesses, enabling Ginseng to lazily transform state at the cost of imposing a constant-time overhead for the entirety of the program execution.[9]Ginseng's compiler proves thecons-freenessproperties of the entire initial program and of dynamic patches. Later versions of Ginseng also support a notion of transactional safety. This allows developers to annotate a sequence of function calls as a logical unit, preventing updates from violating program semantics in ways that are not detectable by either activeness safety orcons-freenesssafety. For example, in two versions ofOpenSSHexamined by Ginseng's authors, important user verification code was moved between two functions called in sequence. 
If the first version of the first function executed, an update occurred, and the new version of the second function was executed, then the verification would never be performed. Marking this section as a transaction ensures that an update will not prevent the verification from occurring.[18] UpStare is a DSU system that uses a unique updating mechanism, stack reconstruction. To update a program with UpStare, a developer specifies a mapping between any possible stack frames. UpStare is able to use this mapping to immediately update the program at any point, with any number of threads, and with any functions live on the stack.[19] PoLUS is a binary-rewriting DSU system for C. It is able to update unmodified programs at any point in their execution. To update functions, it rewrites the prelude of a target function to redirect to a new function, chaining these redirections over multiple versions. This avoids steady-state overhead in functions that have not been updated.[20] Katana is a research system that provides limited dynamic updating (similar to Ksplice and its forks) for user-mode ELF binaries. The Katana patching model operates on the level of ELF objects, and thus has the capacity to be language-agnostic as long as the compilation target is ELF. Ekiden and Kitsune are two variants of a single DSU system that implements the state-transfer style of DSU for programs written in C. Rather than updating functions within a single program, Ekiden and Kitsune perform updates over whole programs, transferring necessary state between the two executions. While Ekiden accomplishes this by starting a new program using the UNIX idiom of fork-exec, serializing the target program's state, and transferring it, Kitsune uses dynamic linking to perform "in-place" state transfer. Kitsune is derived from Ekiden's codebase, and can be considered a later version of Ekiden. Ekiden and Kitsune are also notable in that they are implemented primarily as application-level libraries, rather than specialized runtimes or compilers. As such, to use Ekiden or Kitsune, an application developer must manually mark state that is to be transferred, and manually select points in the program where an update can occur. To ease this process, Kitsune includes a specialized compiler that implements a domain-specific language for writing state transformers.[6][7] Erlang supports dynamic software updating, though this is commonly referred to as "hot code loading". Erlang requires no safety guarantees on updates, but Erlang culture suggests that developers write in a defensive style that will gracefully handle type errors generated by updating.[citation needed] Pymoult is a prototyping platform for dynamic update written in Python. It gathers many techniques from other systems, allowing their combination and configuration. The objective of this platform is to let developers choose the update techniques they find most appropriate for their needs. For example, one can combine lazy update of the state as in Ginseng while changing the whole code of the application as in Kitsune or Ekiden.[21][22] Microsoft uses internal patching technology for Microsoft Visual C++ that supports patching individual C++ functions while maintaining functional correctness of patches. A currently known application is SQL Server in Azure SQL Database.[23]
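For the dynamic-loading route taken by Kitsune-style systems, the general mechanism in C looks roughly like the sketch below; the shared-object path, the exported symbol name, and the state hand-off are hypothetical and do not reflect any particular system's real API.

    /* Rough sketch of handing control to a new program version loaded
     * at runtime as a shared object. The path, symbol name and state
     * hand-off are hypothetical. Link with -ldl where required. */
    #include <dlfcn.h>
    #include <stdio.h>

    struct app_state;                        /* opaque, already transformed */
    typedef int (*resume_fn)(struct app_state *);

    int hand_off_to_new_version(const char *so_path, struct app_state *state)
    {
        void *handle = dlopen(so_path, RTLD_NOW);
        if (handle == NULL) {
            fprintf(stderr, "update failed: %s\n", dlerror());
            return -1;
        }

        /* The new version is assumed to export an entry point that
         * accepts the transferred state and continues execution. */
        resume_fn resume = (resume_fn)dlsym(handle, "resume_from_state");
        if (resume == NULL) {
            fprintf(stderr, "update failed: %s\n", dlerror());
            dlclose(handle);
            return -1;
        }
        return resume(state);                /* control continues in P' */
    }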
https://en.wikipedia.org/wiki/Dynamic_software_updating
In software engineering, coupling is the degree of interdependence between software modules, a measure of how closely connected two routines or modules are,[1] and the strength of the relationships between modules.[2] Coupling is not binary but multi-dimensional.[3] Coupling is usually contrasted with cohesion. Low coupling often correlates with high cohesion, and vice versa. Low coupling is often thought to be a sign of a well-structured computer system and a good design, and when combined with high cohesion, supports the general goals of high readability and maintainability.[citation needed] The software quality metrics of coupling and cohesion were invented by Larry Constantine in the late 1960s as part of structured design, based on characteristics of "good" programming practices that reduced maintenance and modification costs. Structured design, including cohesion and coupling, was published in the article Stevens, Myers & Constantine (1974)[4] and the book Yourdon & Constantine (1979),[5] and the terms subsequently became standard. Coupling can be "low" (also "loose" and "weak") or "high" (also "tight" and "strong"). Some types of coupling, in order of highest to lowest coupling, are content coupling, common coupling, external coupling, control coupling, stamp coupling, and data coupling. A module here refers to a subroutine of any kind, i.e. a set of one or more statements having a name and preferably its own set of variable names. In recent work, various other coupling concepts have been investigated and used as indicators for different modularization principles used in practice.[7] Dynamic coupling aims to provide a run-time evaluation of a software system. It has been argued that static coupling metrics lose precision when dealing with an intensive use of dynamic binding or inheritance;[8] in the attempt to solve this issue, dynamic coupling measures have been taken into account. Semantic coupling metrics, by contrast, consider the conceptual similarities between software entities using, for example, comments and identifiers, and relying on techniques such as latent semantic indexing (LSI). Logical coupling (or evolutionary coupling or change coupling) analysis exploits the release history of a software system to find change patterns among modules or classes: e.g., entities that are likely to be changed together or sequences of changes (a change in a class A is always followed by a change in a class B). According to Gregor Hohpe, coupling is multi-dimensional.[3] Tightly coupled systems tend to exhibit developmental characteristics that are often seen as disadvantages. Whether loosely or tightly coupled, a system's performance is often reduced by message and parameter creation, transmission, translation (e.g. marshaling) and message interpretation; interpreting a simple message (which might be no more than a reference to a string, array or data structure) requires less overhead than creating and parsing a complicated message such as a SOAP message. Longer messages require more CPU and memory to produce. To optimize runtime performance, message length must be minimized and message meaning must be maximized. One approach to decreasing coupling is functional design, which seeks to limit the responsibilities of modules along functionality. Coupling between two classes A and B increases as A depends more directly on B, for example when A has an attribute or method signature that refers to B, calls services of B, or is a subclass of B. Low coupling refers to a relationship in which one module interacts with another module through a simple and stable interface and does not need to be concerned with the other module's internal implementation (see information hiding).
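The difference between tighter and looser forms of coupling can be shown with a short illustrative C fragment; the module and function names are invented.

    /* Common (global) coupling: both sides depend on a shared mutable
     * global, so each module is tied to how the other uses it. */
    double g_exchange_rate;                  /* shared global state */

    double price_in_eur_tight(double usd)
    {
        return usd * g_exchange_rate;        /* hidden dependency */
    }

    /* Data coupling: the dependency is reduced to a simple parameter
     * passed through a small, stable interface. */
    double price_in_eur_loose(double usd, double exchange_rate)
    {
        return usd * exchange_rate;          /* explicit, minimal contract */
    }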
Systems such asCORBAorCOMallow objects to communicate with each other without having to know anything about the other object's implementation. Both of these systems even allow for objects to communicate with objects written in other languages. Coupling describes the degree and nature of dependency between software components, focusing on what they share (e.g., data, control flow, technology) and how tightly they are bound. It evaluates two key dimensions: strength, which measures how difficult it is to change the dependency, and scope (or visibility), which indicates how widely the dependency is exposed across modules or boundaries. Traditional coupling types typically include content coupling, common coupling, control coupling, stamp coupling, external coupling, and data coupling.[9][10][11] Connascence, introduced by Meilir Page-Jones, provides a systematic framework for analyzing and measuring coupling dependencies. It evaluates dependencies based on three dimensions: strength, which measures the effort required to refactor or modify the dependency; locality, which considers how physically or logically close dependent components are in the codebase; and degree, which measures how many components are affected by the dependency. Connascence can be categorized into static (detectable at compile-time) and dynamic (detectable at runtime) forms. Static connascence refers to compile-time dependencies, such as method signatures, while dynamic connascence refers to runtime dependencies, which can manifest in forms like connascence of timing, values, or algorithm.[9][10][11] Each coupling flavor can exhibit multiple types of connascence, a specific type, or, in rare cases, none at all, depending on how the dependency is implemented. Common types of connascence include connascence of name, type, position, and meaning. Certain coupling types naturally align with specific connascence types; for example, data coupling often involves connascence of name or type. However, not every combination of coupling and connascence is practically meaningful. Dependencies relying on parameter order in a method signature demonstrate connascence of position, which is fragile and difficult to refactor because reordering parameters breaks the interface. In contrast, connascence of name, which relies on field or parameter names, is generally more resilient to change. Connascence types themselves exhibit a natural hierarchy of strength, with connascence of name typically considered weaker than connascence of meaning.[9][10][11] Dependencies spanning module boundaries or distributed systems typically have higher coordination costs, increasing the difficulty of refactoring and propagating changes across distant boundaries. Modern practices, such as dependency injection and interface-based programming, are often employed to reduce coupling strength and improve the maintainability of dependencies.[9][10][11] While coupling identifies what is shared between components, connascence evaluates how those dependencies behave, how changes propagate, and how difficult they are to refactor. Strength, locality, and degree are interrelated; dependencies with high strength, wide scope, and spanning distant boundaries are significantly harder to refactor and maintain. 
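Two of the connascence types mentioned above can be contrasted with a small illustrative C example (the functions are invented): connascence of position, where callers depend on parameter order, versus connascence of name, where they depend only on field names.

    /* Connascence of position: every caller must remember that width
     * comes first and height second; reordering the parameters breaks
     * all call sites without any compiler diagnostic. */
    double area_positional(double width, double height)
    {
        return width * height;
    }

    /* Connascence of name: callers depend only on the field names,
     * which survive reordering and are easier to refactor. */
    struct rectangle {
        double width;
        double height;
    };

    double area_named(struct rectangle r)
    {
        return r.width * r.height;
    }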
Together, coupling provides a high-level overview of dependency relationships, while connascence offers a granular framework for analyzing dependency strength, locality, degree, and resilience to change, supporting the design of maintainable and robust systems.[9][10][11] Coupling and cohesion are terms which occur together very frequently. Coupling refers to the interdependencies between modules, while cohesion describes how related the functions within a single module are. Low cohesion implies that a given module performs tasks which are not very related to each other and hence can create problems as the module becomes large. Coupling in Software Engineering[12] describes a version of metrics associated with this concept. For data and control flow coupling, these count d_i (input data parameters), c_i (input control parameters), d_o (output data parameters), and c_o (output control parameters). For global coupling, they count g_d (global variables used as data) and g_c (global variables used as control). For environmental coupling, they count w and r, the numbers of modules called by and calling the module under consideration (fan-out and fan-in). Coupling(C) = 1 − 1 / (d_i + 2×c_i + d_o + 2×c_o + g_d + 2×g_c + w + r) The more coupled the module is, the larger the value of C. This number ranges from approximately 0.67 (low coupling) to 1.0 (highly coupled). For example, if a module has only a single input and output data parameter: C = 1 − 1/(1 + 0 + 1 + 0 + 0 + 0 + 1 + 0) = 1 − 1/3 = 0.67 If a module has 5 input and output data parameters, an equal number of control parameters, and accesses 10 items of global data, with a fan-in of 3 and a fan-out of 4: C = 1 − 1/(5 + 2×5 + 5 + 2×5 + 10 + 0 + 3 + 4) = 1 − 1/47 ≈ 0.98
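The metric above is simple enough to transcribe directly into code; the following C sketch reproduces the two worked examples (the trailing underscore in do_ only avoids the C keyword).

    /* Direct transcription of the coupling metric defined above. */
    #include <stdio.h>

    double coupling_metric(int di, int ci, int do_, int co,
                           int gd, int gc, int w, int r)
    {
        double denom = di + 2.0 * ci + do_ + 2.0 * co
                     + gd + 2.0 * gc + w + r;
        return 1.0 - 1.0 / denom;
    }

    int main(void)
    {
        /* First worked example: C = 1 - 1/3 = 0.67 */
        printf("%.2f\n", coupling_metric(1, 0, 1, 0, 0, 0, 1, 0));

        /* Second worked example: C = 1 - 1/47 = 0.98 */
        printf("%.2f\n", coupling_metric(5, 5, 5, 5, 10, 0, 3, 4));
        return 0;
    }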
https://en.wikipedia.org/wiki/Dynamic_coupling_(computing)
Incomputer programming, aself-relocatingprogram is a program thatrelocatesits own address-dependent instructions and data when run, and is therefore capable of being loaded into memory at any address.[1][2]In many cases, self-relocating code is also a form ofself-modifying code. Self-relocation is similar to therelocationprocess employed by thelinker-loaderwhen a program is copied from external storage into main memory; the difference is that it is the loaded program itself rather than the loader in theoperating systemorshellthat performs the relocation. One form of self-relocation occurs when a program copies the code of its instructions from one sequence of locations to another sequence of locations within the main memory of a single computer, and then transfers processor control from the instructions found at the source locations of memory to the instructions found at the destination locations of memory. As such, the data operated upon by the algorithm of the program is the sequence of bytes which define the program. Static self-relocation typically happens atload-time(after the operating system has loaded the software and passed control to it, but still before its initialization has finished), sometimes also when changing the program's configuration at a later stage duringruntime.[3][4] As an example, self-relocation is often employed in the early stages of bootstrapping operating systems on architectures likeIBM PC compatibles, where lower-level chainboot loaders(like themaster boot record(MBR),volume boot record(VBR) and initial boot stages of operating systems such asDOS) move themselves out of place in order to load the next stage into memory. UnderCP/M, the debuggerDynamic Debugging Tool(DDT)dynamicallyrelocateditselfto the top of available memory throughpage boundary relocationin order to maximize theTransient Program Area(TPA) for programs to run in.[5][6] In 1988, the alternative command line processorZCPR3.4 for theZ-Systemintroduced so calledtype-4programs which were self-relocatable through an embedded stub as well.[7][8][9][10][11] UnderDOS, self-relocation is sometimes also used by more advanceddriversandresident system extensions(RSXs) orterminate-and-stay-resident programs(TSRs) loading themselves "high" intoupper memorymore effectively than possible for externally provided "high"-loaders (likeLOADHIGH/HILOAD,INSTALLHIGH/HIINSTALLorDEVICEHIGH/HIDEVICEetc.[12]since DOS 5) in order to maximize the memory available for applications. This is down to the fact that the operating system has no knowledge of the inner workings of a driver to be loaded and thus has to load it into a free memory area large enough to hold the whole driver as a block including its initialization code, even if that would be freed after the initialization. For TSRs, the operating system also has to allocate aProgram Segment Prefix(PSP) and anenvironment segment.[13]This might cause the driver not to be loaded into the most suitable free memory area or even prevent it from being loaded high at all. In contrast to this, a self-relocating driver can be loaded anywhere (including intoconventional memory) and then relocate only its (typically much smaller) resident portion into a suitable free memory area in upper memory. 
In addition, advanced self-relocating TSRs (even if already loaded into upper memory by the operating system) can relocate over most of their own PSP segment and command line buffer and free their environment segment in order to further reduce the resultingmemory footprintand avoidfragmentation.[14]Some self-relocating TSRs can also dynamically change their "nature" and morph into device drivers even if originally loaded as TSRs, thereby typically also freeing some memory.[4]Finally, it is technically impossible for an external loader to relocate drivers intoexpanded memory(EMS), thehigh memory area(HMA) orextended memory(viaDPMSorCLOAKING), because these methods require small driver-specificstubsto remain in conventional or upper memory in order to coordinate the access to the relocation target area,[15][nb 1][nb 2]and in the case of device drivers also because the driver's header must always remain in the first megabyte.[15][13]In order to achieve this, the drivers must be specially designed to support self-relocation into these areas.[15] Some advanced DOS drivers also contain both a device driver (which would be loaded at offset +0000h by the operating system) and TSR (loaded at offset +0100h) sharing a common code portion internally asfat binary.[13]If the shared code is not designed to beposition-independent, it requires some form of internal address fix-up similar to what would otherwise have been carried out by arelocating loaderalready; this is similar to the fix-up stage of self-relocation but with the code already being loaded at the target location by the operating system's loader (instead of done by the driver itself). IBMDOS/360did not have the ability to relocate programs during loading. Sometimes multiple versions of a program were maintained, each built for a different load address (partition). A special class of programs, called self-relocating programs, were coded to relocate themselves after loading.[16]IBMOS/360relocated executable programs when they were loaded into memory. Only one copy of the program was required, but once loaded the program could not be moved (so calledone-time position-independent code). As an extreme example of (many-time) self-relocation, also called dynamic self-relocation, it is possible to construct a computer program so that it does not stay at a fixed address in memory, even as it executes, as for example used inworm memory tests.[17][18][19][20]TheApple Wormis a dynamic self-relocator as well.[21]
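The internal address fix-up mentioned above can be sketched in C in a machine-independent way; the image layout, relocation table, and addresses are invented, and a real self-relocating driver performs the same arithmetic directly on its own machine-code image.

    /* Conceptual sketch of self-relocation fix-up: for every offset in
     * the image that holds an absolute address, add the difference
     * between the address the code was built for and the address it
     * actually runs at. All names and layouts are illustrative. */
    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    void fix_up_image(uint8_t *image,            /* loaded image          */
                      const size_t *reloc_table, /* offsets of absolute   */
                      size_t reloc_count,        /*   addresses in image  */
                      uint32_t built_for_base,   /* assumed load address  */
                      uint32_t loaded_base)      /* actual load address   */
    {
        uint32_t delta = loaded_base - built_for_base;

        for (size_t i = 0; i < reloc_count; i++) {
            uint32_t value;
            /* memcpy avoids unaligned access while patching in place. */
            memcpy(&value, image + reloc_table[i], sizeof value);
            value += delta;
            memcpy(image + reloc_table[i], &value, sizeof value);
        }
    }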
https://en.wikipedia.org/wiki/Self-relocation
Cruft is a jargon word for anything that is left over, redundant and getting in the way. It is used particularly for defective, superseded, useless, superfluous, or dysfunctional elements in computer software. Around 1958, the term was used in the sense of "garbage" by students frequenting the Tech Model Railroad Club (TMRC) at the Massachusetts Institute of Technology (MIT).[1] In the 1959 edition of the club's dictionary, it was defined as "that which magically amounts in the Clubroom just before you walk in to clean up. In other words, rubbage".[2] Its author Peter Samson later explained that this was meant in the sense of "detritus, that which needs to be swept up and thrown out. The dictionary has no definition for 'crufty,' a word I didn't hear until some years later".[2] In 2008 it was also used to refer to alumni who remain socially active at MIT.[3] The origin of the term is uncertain, but it may be derived from Harvard University's Cruft Laboratory. Built in 1915 as a gift from a donor named Harriet Otis Cruft,[4] it housed the Harvard Physics Department's radar lab during World War II. The FreeBSD handbook uses the term to refer to leftover or superseded object code that accumulates in a folder or directory when software is recompiled and new executables and data files are produced.[5] Such cruft, if required for the new executables to work properly, can cause the BSD equivalent of dependency hell.[6] The word is also used to describe instances of unnecessary, leftover or just poorly written source code in a computer program that is then uselessly, or even harmfully, compiled into object code.[7] Cruft accumulation may result in technical debt, which can subsequently make adding new features or modifying existing features (even to improve performance) more difficult and time-consuming. In the context of Internet or Web addresses (Uniform Resource Locators or "URLs"), cruft refers to the characters that are relevant or meaningful only to the people who created the site, such as implementation details of the computer system which serves the page. Examples of URL cruft could include filename extensions such as .php or .html, and internal organizational details such as /public/ or /Users/john/work/drafts/.[8] Cruft may also refer to unused and out-of-date computer paraphernalia, collected through upgrading, inheritance, or simple acquisition, both deliberate and through circumstance.[9] This accumulated hardware, however, often has benefit when IT systems administrators, technicians, and the like have need for critical replacement parts. An unused machine or component similar to a production unit could allow near-immediate restoration of the failed unit, as opposed to waiting for a shipped replacement.
https://en.wikipedia.org/wiki/Software_cruft
Incomputing,tree shakingis adead code eliminationtechnique that is applied when optimizing code.[1]Often contrasted with traditional single-library dead code elimination techniques common to minifiers, tree shaking eliminates unused functions from across the bundle by starting at the entry point and only including functions that may be executed.[2][3]It is succinctly described as "live code inclusion". Dead code elimination indynamic languagesis a much harder problem than in static languages. The idea of a "treeshaker" originated inLISP[4]in the 1990s. The idea is that all possible execution flows of a program can be represented as a tree of function calls, so that functions that are never called can be eliminated. The algorithm was applied toJavaScriptinGoogle Closure Toolsand then toDartin the dart2js compiler also written byGoogle, presented by Bob Nystrom in 2012[5][3]and described by the bookDart in Actionby author Chris Buckett in 2013: When code is converted from Dart to JavaScript the compiler does 'tree shaking'. In JavaScript you have to add an entire library even if you only need it for one function, but thanks to tree shaking the Dart-derived JavaScript only includes the individual functions that you need from a library The next wave of popularity of the term is attributed to Rich Harris's Rollup project[6]developed in 2015. The popularity of tree shaking in JavaScript is based on the fact that in contrast to CommonJS modules, ECMAScript 6 module loading is static and thus the whole dependency tree can be deduced by statically parsing the syntax tree. Thus tree shaking becomes an easy problem. However, tree shaking does not only apply at the import/export level: it can also work at the statement level, depending on the implementation.[citation needed]
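Tree shaking tools operate on JavaScript or Dart module graphs, but the underlying reachability idea can be sketched in a few lines of C over an invented static call graph: mark everything reachable from the entry point and drop the rest.

    /* Reachability sketch behind tree shaking, over an invented call
     * graph: calls[i][j] != 0 means function i may call function j. */
    #include <stdio.h>

    #define NFUNCS 6

    static const char *names[NFUNCS] = {
        "main", "render", "formatDate", "parseDate", "legacyExport", "debugDump"
    };

    static const int calls[NFUNCS][NFUNCS] = {
        {0, 1, 0, 0, 0, 0},   /* main         -> render              */
        {0, 0, 1, 0, 0, 0},   /* render       -> formatDate          */
        {0, 0, 0, 1, 0, 0},   /* formatDate   -> parseDate           */
        {0, 0, 0, 0, 0, 0},   /* parseDate                           */
        {0, 0, 0, 1, 0, 0},   /* legacyExport -> parseDate (unused)  */
        {0, 0, 0, 0, 0, 0},   /* debugDump                           */
    };

    static int live[NFUNCS];

    static void mark(int f)
    {
        if (live[f])
            return;
        live[f] = 1;
        for (int j = 0; j < NFUNCS; j++)
            if (calls[f][j])
                mark(j);
    }

    int main(void)
    {
        mark(0);   /* index 0 is the entry point */
        for (int i = 0; i < NFUNCS; i++)
            printf("%-12s %s\n", names[i], live[i] ? "kept" : "shaken out");
        return 0;
    }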
https://en.wikipedia.org/wiki/Tree_shaking
An object code optimizer, sometimes also known as a post-pass optimizer or, for small sections of code, a peephole optimizer, forms part of a software compiler. It takes the output from the source language compile step (the object code or binary file) and tries to replace identifiable sections of the code with replacement code that is more algorithmically efficient (usually improved speed). The main advantage of re-optimizing existing programs was that the stock of already compiled customer programs (object code) could be improved almost instantly with minimal effort, reducing CPU resources at a fixed cost (the price of the proprietary software). A disadvantage was that new releases of COBOL, for example, would require (charged) maintenance to the optimizer to cater for possibly changed internal COBOL algorithms. However, since new releases of COBOL compilers frequently coincided with hardware upgrades, the faster hardware would usually more than compensate for the application programs reverting to their pre-optimized versions (until a supporting optimizer was released). Some binary optimizers do executable compression, which reduces the size of binary files using generic data compression techniques, reducing storage requirements and transfer and loading times, but not improving run-time performance. Actual consolidation of duplicate library modules would also reduce memory requirements. Some binary optimizers utilize run-time metrics (profiling) to introspectively improve performance using techniques similar to JIT compilers. More recently developed "binary optimizers" have appeared for various platforms, some claiming novelty but essentially using the same (or similar) techniques described above.
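A peephole pass is easiest to see on a toy instruction stream; the following C sketch uses an invented three-field "ISA" and removes the classic redundant store/load pair, standing in for the kind of localized rewrite a real object code optimizer performs on machine instructions.

    /* Toy peephole pass: delete a LOAD that immediately follows a
     * STORE of the same register to the same location. The "ISA" is
     * invented purely for illustration. */
    #include <stdio.h>
    #include <string.h>

    struct insn {
        char op[8];     /* "STORE" or "LOAD"           */
        char reg[4];    /* register operand, e.g. "r1" */
        char mem[8];    /* memory operand, e.g. "x"    */
    };

    /* Rewrites the list in place and returns the new length. */
    size_t peephole(struct insn *code, size_t n)
    {
        size_t out = 0;
        for (size_t i = 0; i < n; i++) {
            if (i + 1 < n &&
                strcmp(code[i].op, "STORE") == 0 &&
                strcmp(code[i + 1].op, "LOAD") == 0 &&
                strcmp(code[i].reg, code[i + 1].reg) == 0 &&
                strcmp(code[i].mem, code[i + 1].mem) == 0) {
                code[out++] = code[i];   /* keep the store          */
                i++;                     /* drop the redundant load */
                continue;
            }
            code[out++] = code[i];
        }
        return out;
    }

    int main(void)
    {
        struct insn prog[] = {
            {"STORE", "r1", "x"},
            {"LOAD",  "r1", "x"},        /* redundant: r1 already holds x */
            {"LOAD",  "r2", "y"},
        };
        size_t n = peephole(prog, 3);
        for (size_t i = 0; i < n; i++)
            printf("%s %s,%s\n", prog[i].op, prog[i].reg, prog[i].mem);
        return 0;
    }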
https://en.wikipedia.org/wiki/Post-pass_optimization
Incomputer programming,profile-guided optimization(PGO, sometimes pronounced aspogo[1]), also known asprofile-directed feedback(PDF)[2]orfeedback-directed optimization(FDO),[3]is thecompiler optimizationtechnique of using prior analyses of software artifacts or behaviors ("profiling") to improve the expectedruntime performanceof the program. Optimization techniques based onstatic program analysisof thesource codeconsider code performance improvements without actually executing the program. Nodynamic program analysisis performed. For example, inferring or placing formal constraints on the number of iterations aloopis likely to execute is fundamentally useful when considering whether tounrollit or not, but such facts typically rely on complex runtime factors that are difficult to conclusively establish. Usually, static analysis will have incomplete information and only be able to approximate estimates of the eventual runtime conditions. The first high-level compiler, introduced as the Fortran Automatic Coding System in 1957, broke the code into blocks and devised a table of the frequency each block is executed via a simulated execution of the code in aMonte Carlofashion in which the outcome of conditional transfers (as viaIF-type statements) is determined by arandom number generatorsuitably weighted by whateverFREQUENCYstatements were provided by the programmer.[4] Rather than programmer-supplied frequency information, profile-guided optimization uses the results of profiling test runs of theinstrumented programto optimize the finalgenerated code.[5][6][7]The compiler accesses profile data from a sample run of the program across a representative input set. The results indicate which areas of the program are executed more frequently, and which areas are executed less frequently. All optimizations benefit from profile-guided feedback because they are less reliant onheuristicswhen making compilation decisions. The caveat, however, is that the sample of data fed to the program during the profiling stage must be statistically representative of the typical usage scenarios; otherwise, profile-guided feedback has the potential to harm the overall performance of the final build instead of improving it. Just-in-time compilationcan make use ofruntimeinformation to dynamically recompile parts of the executed code to generate more efficient native code. If the dynamic profile changes during execution, it can deoptimize the previous native code, and generate a new code optimized with the information from the new profile. There is support for buildingFirefoxusing PGO.[8]Even though PGO is effective, it has not been widely adopted by software projects, due to its tedious dual-compilation model.[9]It is also possible to perform PGO without instrumentation by collecting a profile usinghardware performance counters.[9]This sampling-based approach has a much lower overhead and does not require a special compilation. TheHotSpotJava virtual machine(JVM) uses profile-guided optimization to dynamically generate native code. As a consequence, a software binary is optimized for the actualloadit is receiving. If the load changes,adaptive optimizationcandynamically recompilethe running software to optimize it for the new load. This means that all software executed on the HotSpot JVM effectively make use of profile-guided optimization.[10] PGO has been adopted in theMicrosoft Windowsversion ofGoogle Chrome. 
PGO was enabled in the 64-bit edition of Chrome starting with version 53, and in the 32-bit edition starting with version 54.[11] Google published a paper[12] describing a tool in use that applies production profiles to guide builds, resulting in up to a 10% performance improvement. Compilers that implement PGO include GCC, Clang/LLVM, and Microsoft Visual C++.
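What an "instrumented program" means in practice can be sketched by hand in C: counters are attached to a branch and dumped at exit so that a later build can be optimized from the observed frequencies. Real PGO instrumentation is inserted by the compiler, and the file name and counter layout below are invented.

    /* Hand-rolled sketch of branch-frequency instrumentation. */
    #include <stdio.h>
    #include <stdlib.h>

    static unsigned long long counters[2];   /* [0]=taken, [1]=not taken */

    static void dump_profile(void)
    {
        FILE *f = fopen("toy.profile", "w");
        if (f != NULL) {
            fprintf(f, "branch_taken %llu\nbranch_not_taken %llu\n",
                    counters[0], counters[1]);
            fclose(f);
        }
    }

    static int is_small(int x)
    {
        if (x < 100) {            /* the branch being profiled */
            counters[0]++;
            return 1;
        }
        counters[1]++;
        return 0;
    }

    int main(void)
    {
        atexit(dump_profile);     /* write the counts when the run ends */
        int small = 0;
        for (int i = 0; i < 1000; i++)
            small += is_small(i);
        printf("%d small values\n", small);
        return 0;
    }

With GCC or Clang, the equivalent instrumentation is requested with -fprofile-generate and the resulting profile is consumed in a second, optimized build with -fprofile-use.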
https://en.wikipedia.org/wiki/Profile-guided_optimization
Superoptimization is the process where a compiler automatically finds the optimal sequence of instructions for a loop-free fragment of code. Real-world compilers generally cannot produce genuinely optimal code, and while most standard compiler optimizations only partly improve code, a superoptimizer's goal is to find the optimal sequence, the canonical form. Superoptimizers can be used to improve conventional optimizers by highlighting missed opportunities so a human can write additional rules. The term superoptimization was first coined by Alexia Massalin in the 1987 paper Superoptimizer: A Look at the Smallest Program.[1] The label "program optimization" has been given to a field that does not aspire to optimize but only to improve. This misnomer forced Massalin to call her system a superoptimizer, which is actually an optimizer to find an optimal program.[2] In 1992, the GNU Superoptimizer (GSO) was developed to integrate into the GNU Compiler Collection (GCC).[3][4] Later work further developed and extended these ideas. Traditionally, superoptimizing is performed via exhaustive brute-force search in the space of valid instruction sequences. This is a costly method, and largely impractical for general-purpose compilers. Yet, it has been shown to be useful in optimizing performance-critical inner loops. It is also possible to use an SMT solver to approach the problem, vastly improving the search efficiency (although inputs more complex than a basic block remain out of reach).[5] In 2001, goal-directed superoptimizing was demonstrated in the Denali project by Compaq research.[6] In 2006, answer set declarative programming was applied to superoptimization in the Total Optimisation using Answer Set Technology (TOAST) project[7] at the University of Bath.[8][9] Superoptimization can be used to automatically generate general-purpose peephole optimizers.[10] Several superoptimizers are available for free download.
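The brute-force flavour of superoptimization can be demonstrated with a toy accumulator "ISA" in C: the program enumerates instruction sequences, shortest first, and reports the first one that matches a reference function on a set of test inputs. The operation set and goal function are invented, and real superoptimizers search over actual machine instructions with far more aggressive pruning and verification.

    /* Tiny brute-force superoptimizer over an invented accumulator ISA. */
    #include <stdio.h>

    enum op { ADD_X, DOUBLE, NEG, INC, NOPS };
    static const char *op_names[NOPS] = { "ADD_X", "DOUBLE", "NEG", "INC" };

    static long reference(long x) { return 4 * x; }    /* goal function */

    static long run(const int *seq, int len, long x)
    {
        long acc = x;                                   /* acc starts as x */
        for (int i = 0; i < len; i++) {
            switch (seq[i]) {
            case ADD_X:  acc += x;   break;
            case DOUBLE: acc += acc; break;
            case NEG:    acc = -acc; break;
            case INC:    acc += 1;   break;
            }
        }
        return acc;
    }

    int main(void)
    {
        const long tests[] = { 0, 1, 2, 3, 5, 7, -4 };
        const int ntests = (int)(sizeof tests / sizeof tests[0]);
        int seq[4];

        for (int len = 1; len <= 4; len++) {
            long total = 1;
            for (int i = 0; i < len; i++)
                total *= NOPS;

            for (long code = 0; code < total; code++) {   /* enumerate */
                long c = code;
                for (int i = 0; i < len; i++) {
                    seq[i] = (int)(c % NOPS);
                    c /= NOPS;
                }
                int ok = 1;
                for (int t = 0; t < ntests && ok; t++)
                    ok = (run(seq, len, tests[t]) == reference(tests[t]));
                if (ok) {                  /* first hit is a shortest one */
                    printf("found length-%d sequence:", len);
                    for (int i = 0; i < len; i++)
                        printf(" %s", op_names[seq[i]]);
                    printf("\n");
                    return 0;
                }
            }
        }
        printf("no sequence of length <= 4 found\n");
        return 0;
    }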
https://en.wikipedia.org/wiki/Superoptimizer
Afat binary(ormultiarchitecture binary) is a computerexecutable programorlibrarywhich has been expanded (or "fattened") with code native to multipleinstruction setswhich can consequently be run on multiple processor types.[1]This results in a file larger than a normal one-architecture binary file, thus the name. The usual method of implementation is to include a version of themachine codefor each instruction set, preceded by a singleentry pointwith code compatible with all operating systems, which executes a jump to the appropriate section. Alternative implementations store different executables in differentforks, each with its own entry point that is directly used by the operating system. The use of fat binaries is not common inoperating systemsoftware; there are several alternatives to solve the same problem, such as the use of aninstallerprogram to choose an architecture-specific binary at install time (such as withAndroidmultipleAPKs), selecting an architecture-specific binary at runtime (such as withPlan 9'sunion directoriesandGNUstep's fat bundles),[2][3]distributing software insource codeform andcompilingit in-place, or the use of avirtual machine(such as withJava) andjust-in-time compilation. In 1988,Apollo Computer'sDomain/OSSR10.1 introduced a new file type, "cmpexe" (compound executable), that bundled binaries forMotorola 680x0andApollo PRISMexecutables.[4] A fat-binary scheme smoothed theApple Macintosh's transition, beginning in 1994, from68kmicroprocessors toPowerPCmicroprocessors. Many applications for the old platform ran transparently on the new platform under an evolvingemulation scheme, but emulated code generally runs slower than native code. Applications released as "fat binaries" took up more storage space, but they ran at full speed on either platform. This was achieved by packaging both a68000-compiled version and a PowerPC-compiled version of the same program into their executable files.[5][6]The older 68K code (CFM-68K or classic 68K) continued to be stored in theresource fork, while the newer PowerPC code was contained in thedata fork, inPEFformat.[7][8][9] Fat binaries were larger than programs supporting only the PowerPC or 68k, which led to the creation of a number of utilities that would strip out the unneeded version.[5][6]In the era of smallhard drives, when 80 MB hard drives were a common size, these utilities were sometimes useful, as program code was generally a large percentage of overall drive usage, and stripping the unneeded members of a fat binary would free up a significant amount of space on a hard drive. Fat binaries were a feature ofNeXT'sNeXTSTEP/OPENSTEPoperating system, starting with NeXTSTEP 3.1. In NeXTSTEP, they were called "Multi-Architecture Binaries". Multi-Architecture Binaries were originally intended to allow software to be compiled to run both on NeXT's Motorola 68k-based hardware and on IntelIA-32-basedPCsrunning NeXTSTEP, with a single binary file for both platforms.[10]It was later used to allow OPENSTEP applications to run on PCs and the variousRISCplatforms OPENSTEP supported. Multi-Architecture Binary files are in a special archive format, in which a single file stores one or moreMach-Osubfiles for each architecture supported by the Multi-Architecture Binary. Every Multi-Architecture Binary starts with a structure (struct fat_header) containing two unsigned integers. The first integer ("magic") is used as amagic numberto identify this file as a Fat Binary. 
The second integer (nfat_arch) defines how many Mach-O Files the archive contains (how many instances of the same program for different architectures). After this header, there arenfat_archnumber of fat_arch structures (struct fat_arch). This structure defines the offset (from the start of the file) at which to find the file, the alignment, the size and the CPU type and subtype which the Mach-O binary (within the archive) is targeted at. The version of theGNU Compiler Collectionshipped with the Developer Tools was able tocross-compilesource code for the different architectures on whichNeXTStepwas able to run. For example, it was possible to choose the target architectures with multiple '-arch' options (with the architecture as argument). This was a convenient way to distribute a program for NeXTStep running on different architectures. It was also possible to create libraries (e.g. using NeXTStep'slibtool) with different targeted object files. Apple Computer acquired NeXT in 1996 and continued to work with the OPENSTEP code. Mach-O became the native object file format in Apple's freeDarwin operating system(2000) and Apple'sMac OS X(2001), and NeXT's Multi-Architecture Binaries continued to be supported by the operating system. Under Mac OS X, Multi-Architecture Binaries can be used to support multiple variants of an architecture, for instance to have different versions of32-bitcode optimized for thePowerPC G3,PowerPC G4, andPowerPC 970generations of processors. It can also be used to support multiple architectures, such as 32-bit and64-bitPowerPC, or PowerPC andx86, orx86-64andARM64.[11] In 2005, Apple announced anothertransition, from PowerPC processors to Intel x86 processors. Apple promoted the distribution of new applications that support both PowerPC and x86 natively by using executable files in Multi-Architecture Binary format.[12]Apple calls such programs "Universal applications" and calls the file format "Universal binary" as perhaps a way to distinguish this new transition from the previous transition, or other uses of Multi-Architecture Binary format. Universal binary format was not necessary for forward migration of pre-existing native PowerPC applications; from 2006 to 2011, Apple suppliedRosetta, a PowerPC (PPC)-to-x86dynamic binary translator, to play this role. However, Rosetta had a fairly steep performance overhead, so developers were encouraged to offer both PPC and Intel binaries, using Universal binaries. The obvious cost of Universal binary is that every installed executable file is larger, but in the years since the release of the PPC, hard-drive space has greatly outstripped executable size; while a Universal binary might be double the size of a single-platform version of the same application, free-space resources generally dwarf the code size, which becomes a minor issue. In fact, often a Universal-binary application will be smaller than two single-architecture applications because program resources can be shared rather than duplicated. If not all of the architectures are required, thelipoanddittocommand-line applications can be used to remove versions from the Multi-Architecture Binary image, thereby creating what is sometimes called athin binary. In addition, Multi-Architecture Binary executables can contain code for both 32-bit and 64-bit versions of PowerPC and x86, allowing applications to be shipped in a form that supports 32-bit processors but that makes use of the larger address space and wider data paths when run on 64-bit processors. 
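A reader for the layout just described fits in a few lines of C. The field order below mirrors the fat_header and fat_arch structures described above, and the values are stored big-endian in the file, but the sketch uses plain uint32_t fields and a hand-rolled byte-order conversion, so it should be read as an illustration rather than a replacement for the real <mach-o/fat.h> declarations.

    /* Sketch of walking a Multi-Architecture ("fat") binary header. */
    #include <stdint.h>
    #include <stdio.h>

    #define FAT_MAGIC 0xcafebabeu            /* fat binary magic number */

    static uint32_t read_be32(FILE *f)
    {
        unsigned char b[4];
        if (fread(b, 1, 4, f) != 4)
            return 0;
        return ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16) |
               ((uint32_t)b[2] << 8)  |  (uint32_t)b[3];
    }

    int list_architectures(const char *path)
    {
        FILE *f = fopen(path, "rb");
        if (f == NULL)
            return -1;

        uint32_t magic     = read_be32(f);   /* struct fat_header */
        uint32_t nfat_arch = read_be32(f);
        if (magic != FAT_MAGIC) {
            fclose(f);
            return -1;                       /* not a fat binary  */
        }

        for (uint32_t i = 0; i < nfat_arch; i++) {   /* struct fat_arch */
            uint32_t cputype    = read_be32(f);
            uint32_t cpusubtype = read_be32(f);
            uint32_t offset     = read_be32(f);
            uint32_t size       = read_be32(f);
            uint32_t align      = read_be32(f);
            printf("slice %u: cputype=%u subtype=%u offset=%u size=%u align=2^%u\n",
                   i, cputype, cpusubtype, offset, size, align);
        }
        fclose(f);
        return 0;
    }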
In versions of theXcodedevelopment environment from 2.1 through 3.2 (running onMac OS X 10.4throughMac OS X 10.6), Apple included utilities which allowed applications to be targeted for both Intel and PowerPC architecture; universal binaries could eventually contain up to four versions of the executable code (32-bit PowerPC, 32-bit x86, 64-bit PowerPC, and64-bit x86). However, PowerPC support was removed from Xcode 4.0 and is therefore not available to developers runningMac OS X 10.7or greater. In 2020, Apple announced anothertransition, this time from Intel x86 processors toApple silicon(ARM64 architecture). To smooth the transition Apple added support for theUniversal 2 binaryformat; Universal 2 binary files are Multi-Architecture Binary files containing both x86-64 and ARM64 executable code, allowing the binary to run natively on both 64-bit Intel and 64-bit Apple silicon. Additionally, Apple introducedRosetta 2dynamic binary translation for x86 to Arm64 instruction set to allow users to run applications that do not have Universal binary variants. In 2006, Apple switched fromPowerPCtoIntelCPUs, and replacedOpen FirmwarewithEFI. However, by 2008, some of their Macs used 32-bit EFI and some used 64-bit EFI. For this reason, Apple extended the EFI specification with "fat" binaries that contained both 32-bit and 64-bit EFI binaries.[13] CP/M-80,MP/M-80,Concurrent CP/M,CP/M Plus,Personal CP/M-80,SCPandMSX-DOSexecutables for theIntel 8080(andZilogZ80) processor families use the same.COMfile extensionasDOS-compatible operating systems forIntel 8086binaries.[nb 1]In both cases programs are loaded at offset +100h and executed by jumping to the first byte in the file.[14][15]As theopcodesof the two processor families are not compatible, attempting to start a program under the wrong operating system leads to incorrect and unpredictable behaviour. In order to avoid this, some methods have been devised to build fat binaries which contain both a CP/M-80 and a DOS program, preceded by initial code which is interpreted correctly on both platforms.[15]The methods either combine two fully functional programs each built for their corresponding environment, or addstubswhich cause the program toexit gracefullyif started on the wrong processor. For this to work, the first few instructions (sometimes also calledgadget headers[16]) in the .COM file have to be valid code for both 8086 and 8080 processors, which would cause the processors to branch into different locations within the code.[16]For example, the utilities in Simeon Cran's emulator MyZ80 start with the opcode sequenceEBh, 52h, EBh.[17][18]An 8086 sees this as a jump and reads its next instruction from offset +154h whereas an 8080 or compatible processor goes straight through and reads its next instruction from +103h. A similar sequence used for this purpose isEBh, 03h, C3h.[19][20]John C. Elliott's FATBIN[21][22][23]is a utility to combine a CP/M-80 and a DOS .COM file into one executable.[17][24]His derivative of the originalPMsfxmodifies archives created by Yoshihiko Mino'sPMarcto beself-extractableunderboth, CP/M-80 and DOS, starting withEBh, 18h, 2Dh, 70h, 6Dh, 73h, 2Dhto also include the "-pms-" signature for self-extractingPMAarchives,[25][17][24][18]thereby also representing a form ofexecutable ASCII code. 
Another method to keep a DOS-compatible operating system from erroneously executing .COM programs for CP/M-80 and MSX-DOS machines[15] is to start the 8080 code with C3h, 03h, 01h, which is decoded as a "RET" instruction by x86 processors, thereby gracefully exiting the program,[nb 2] while it will be decoded as a "JP 103h" instruction by 8080 processors and simply jump to the next instruction in the program. Similarly, the CP/M assembler Z80ASM+ by SLR Systems would display an error message when erroneously run on DOS.[17] Some CP/M-80 3.0 .COM files may have one or more RSX overlays attached to them by GENCOM.[26] If so, they start with an extra 256-byte header (one page). In order to indicate this, the first byte in the header is set to the magic byte C9h, which works both as a signature identifying this type of COM file to the CP/M 3.0 executable loader, and as a "RET" instruction for 8080-compatible processors which leads to a graceful exit if the file is executed under older versions of CP/M-80.[nb 2] C9h is never appropriate as the first byte of a program for any x86 processor (it has different meanings for different generations,[nb 3] but is never a meaningful first byte); the executable loader in some versions of DOS rejects COM files that start with C9h, avoiding incorrect operation. Similar overlapping code sequences have also been devised for combined Z80/6502,[17] 8086/68000[17] or x86/MIPS/ARM binaries.[16] CP/M-86 and DOS do not share a common file extension for executables.[nb 1] Thus, it is not normally possible to confuse executables. However, early versions of DOS had so much in common with CP/M in terms of architecture that some early DOS programs were developed to share binaries containing executable code. One program known to do this was WordStar 3.2x, which used identical overlay files in its ports for CP/M-86 and MS-DOS,[27] and used dynamically fixed-up code to adapt to the differing calling conventions of these operating systems at runtime.[27] Digital Research's GSX for CP/M-86 and DOS also shares binary-identical 16-bit drivers.[28] DOS device drivers (typically with file extension .SYS) start with a file header whose first four bytes are FFFFFFFFh by convention, although this is not a requirement.[29] This is fixed up dynamically by the operating system when the driver loads (typically in the DOS BIOS when it executes DEVICE statements in CONFIG.SYS). Since DOS does not reject files with a .COM extension from being loaded per DEVICE and does not test for FFFFFFFFh, it is possible to combine a COM program and a device driver into the same file[30][29] by placing a jump instruction to the entry point of the embedded COM program within the first four bytes of the file (three bytes are usually sufficient).[29] If the embedded program and the device driver sections share a common portion of code or data, it is necessary for the code to deal with being loaded at offset +0100h as a .COM-style program, and at +0000h as a device driver.[30] For shared code loaded at the "wrong" offset but not designed to be position-independent, this requires an internal address fix-up[30] similar to what would otherwise have been carried out by a relocating loader, except that in this case it has to be done by the loaded program itself; this is similar to the situation with self-relocating drivers but with the program already loaded at the target location by the operating system's loader.
Under DOS, some files, by convention, have file extensions which do not reflect their actual file type.[nb 4]For example,COUNTRY.SYS[31]is not a DOS device driver,[nb 5]but a binaryNLSdatabase file for use with the CONFIG.SYSCOUNTRY directiveand theNLSFUNCdriver.[31]Likewise, thePC DOSandDR-DOSsystem filesIBMBIO.COMandIBMDOS.COMare special binary images loaded bybootstrap loaders, not COM-style programs.[nb 5]Trying to load COUNTRY.SYS with a DEVICE statement or executing IBMBIO.COM or IBMDOS.COM at the command prompt will cause unpredictable results.[nb 4][nb 6] It is sometimes possible to avoid this by utilizing techniques similar to those described above. For example,DR-DOS 7.02and higher incorporate a safety feature developed by Matthias R. Paul:[32]If these files are called inappropriately, tiny embedded stubs will just display some file version information and exit gracefully.[33][32][34][31]Additionally, the message is specifically crafted to follow certain"magic" patternsrecognized by the externalNetWare& DR-DOSVERSIONfile identification utility.[31][32][nb 7] A similar protection feature was the 8080 instructionC7h("RST 0") at the very start of Jay Sage's and Joe Wright'sZ-Systemtype-3 and type-4 "Z3ENV" programs[35][36]as well as "Z3TXT" language overlay files,[37]which would result in awarm boot(instead of a crash) under CP/M-80 if loaded inappropriately.[35][36][37][nb 2] In a distantly similar fashion, many (binary)file formatsby convention include a1Ahbyte (ASCII^Z) near the beginning of the file. Thiscontrol characterwill be interpreted as "soft"end-of-file(EOF) marker when a file is opened in non-binary mode, and thus, under many operating systems (including thePDP-6monitor[38]andRT-11,VMS,TOPS-10,[39]CP/M,[40][41]DOS,[42]and Windows[43]), it prevents "binary garbage" from being displayed when a file is accidentally printed at the console. FatELF[44]was a fat binary implementation forLinuxand otherUnix-likeoperating systems. Technically, a FatELF binary was a concatenation ofELFbinaries with some meta data indicating which binary to use on what architecture.[45]Additionally to the CPU architecture abstraction (byte order,word size,CPUinstruction set, etc.), there is the advantage of binaries with support for multiple kernelABIsand versions. FatELF has several use-cases, according to developers:[44] A proof-of-conceptUbuntu 9.04image is available.[47]As of 2021[update], FatELF has not been integrated into the mainline Linux kernel.[citation needed][48][49] Although thePortable Executableformat used by Windows does not allow assigning code to platforms, it is still possible to make a loader program that dispatches based on architecture. This is because desktop versions of Windows on ARM have support for 32-bitx86emulation, making it a useful "universal" machine code target. Fatpack is a loader that demonstrates the concept: it includes a 32-bit x86 program that tries to run the executables packed into its resource sections one by one.[50] When developing Windows 11 ARM64, Microsoft introduced a new way to extend thePortable Executableformat called Arm64X.[51]An Arm64X binary contains all the content that would be in separate x64/Arm64EC and Arm64 binaries, but merged into one more efficient file on disk. Visual C++ toolset has been upgraded to support producing such binaries. 
When building Arm64X binaries is technically difficult, developers can build Arm64X pure forwarder DLLs instead.[52] The following approaches are similar to fat binaries in that multiple versions of machine code of the same purpose are provided in the same file. Since 2007, some specialized compilers for heterogeneous platforms produce code files for parallel execution on multiple types of processors, e.g. the CHI (C for Heterogeneous Integration) compiler from the Intel EXOCHI (Exoskeleton Sequencer) development suite, which extends the OpenMP pragma concept for multithreading to produce fat binaries containing code sections for different instruction set architectures (ISAs) from which the runtime loader can dynamically initiate the parallel execution on multiple available CPU and GPU cores in a heterogeneous system environment.[53][54] Introduced in 2006, Nvidia's parallel computing platform CUDA (Compute Unified Device Architecture) is a software platform for general-purpose computing on GPUs (GPGPU). Its LLVM-based compiler NVCC can create ELF-based fat binaries containing so-called PTX virtual assembly (as text), which the CUDA runtime driver can later just-in-time compile into SASS (Streaming Assembler) binary executable code for the target GPU actually present. The executables can also include so-called CUDA binaries (aka cubin files) containing dedicated executable code sections for one or more specific GPU architectures, from which the CUDA runtime can choose at load time.[55][56][57][58][59][60] Fat binaries are also supported by GPGPU-Sim, a GPU simulator introduced in 2007 as well.[61][62] Multi2Sim (M2S), an OpenCL heterogeneous system simulator framework (originally only for either MIPS or x86 CPUs, but later extended to also support ARM CPUs and GPUs like the AMD/ATI Evergreen & Southern Islands as well as Nvidia Fermi & Kepler families),[63] supports ELF-based fat binaries as well.[64][63] GNU Compiler Collection (GCC) and LLVM do not have a fat binary format, but they do have fat object files for link-time optimization (LTO). Since LTO involves delaying the compilation to link-time, the object files must store the intermediate representation (IR), but on the other hand machine code may need to be stored too (for speed or compatibility). An LTO object containing both IR and machine code is known as a fat object.[65] Even in a program or library intended for the same instruction set architecture, a programmer may wish to make use of some newer instruction set extensions while keeping compatibility with an older CPU. This can be achieved with function multi-versioning (FMV): versions of the same function are written into the program, and a piece of code decides which one to use by detecting the CPU's capabilities (such as through CPUID). Intel C++ Compiler, GCC, and LLVM all have the ability to automatically generate multi-versioned functions.[66] This is a form of dynamic dispatch without any semantic effects. Many math libraries feature hand-written assembly routines that are automatically chosen according to CPU capability. Examples include glibc, Intel MKL, and OpenBLAS. In addition, the library loader in glibc supports loading from alternative paths for specific CPU features.[67] A similar, but byte-level granular approach originally devised by Matthias R. Paul and Axel C.
Frinke is to let a small self-discarding,relaxingandrelocating loaderembedded into the executable file alongside any number of alternative binary code snippets conditionally build a size- or speed-optimized runtime image of a program or driver necessary to perform (or not perform) a particular function in a particular target environment atload-timethrough a form ofdynamic dead code elimination(DDCE).[68][69][70][71]
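As an illustration of the function multi-versioning approach described above, the following sketch uses GCC's target_clones attribute. It assumes a recent GCC on x86-64 with glibc's ifunc mechanism; the particular target list shown ("avx2", "sse4.2", "default") is only an example. The compiler emits one clone of the function per listed target plus a resolver that selects the best match for the CPU the program is actually running on.

    /* Function multi-versioning sketch (assumes recent GCC on x86-64 with
     * glibc ifunc support; the target list is only an example).  GCC emits
     * one clone of dot() per listed target and a resolver that picks the
     * best match for the running CPU. */
    #include <stdio.h>

    __attribute__((target_clones("avx2", "sse4.2", "default")))
    double dot(const double *a, const double *b, int n)
    {
        double sum = 0.0;
        for (int i = 0; i < n; i++)    /* the AVX2 clone can vectorize this loop */
            sum += a[i] * b[i];
        return sum;
    }

    int main(void)
    {
        double x[4] = {1, 2, 3, 4}, y[4] = {4, 3, 2, 1};
        printf("%f\n", dot(x, y, 4)); /* prints 20.000000 whichever clone is chosen */
        return 0;
    }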
https://en.wikipedia.org/wiki/Function_multi-versioning
Acatch-22is aparadoxicalsituation from which an individual cannot escape because of contradictory rules or limitations.[1]The term was first used byJoseph Hellerin his 1961 novelCatch-22. Catch-22s often result from rules, regulations, or procedures that an individual is subject to, but has no control over, because to fight the rule is to accept it. Another example is a situation in which someone is in need of something that can only be had by not being in need of it (e.g. the only way to qualify for a loan is to prove to the bank that you do not need a loan). One connotation of the term is that the creators of the "catch-22" situation have created arbitrary rules in order to justify and conceal their ownabuse of power. Joseph Hellercoined the term in his 1961 novelCatch-22, which describes absurd bureaucratic constraints on soldiers inWorld War II. The term is introduced by the character Doc Daneeka, an army surgeon who invokes "Catch-22" to explain why any pilot requesting mental evaluation for insanity—hoping to be found not sane enough to fly and thereby escape dangerous missions—demonstrates his own sanity in creating the request and thus cannot be declared insane. This phrase also means a dilemma or difficult circumstance from which there is no escape because of mutually conflicting or dependent conditions.[2] "You mean there's a catch?" "Sure there's a catch,"Doc Daneekareplied. "Catch-22. Anyone who wants to get out of combat duty isn't really crazy." There was only one catch and that was Catch-22, which specified that a concern for one's own safety in the face of dangers that were real and immediate was the process of a rational mind.Orrwas crazy and could be grounded. All he had to do was ask; and as soon as he did, he would no longer be crazy and would have to fly more missions. Orr would be crazy to fly more missions and sane if he didn't, but if he was sane, he had to fly them. If he flew them, he was crazy and didn't have to; but if he didn't want to, he was sane and had to.Yossarianwas moved very deeply by the absolute simplicity of this clause of Catch-22 and let out a respectful whistle. Different formulations of "Catch-22" appear throughout the novel. The term is applied to various loopholes and quirks of the military system, always with the implication that rules are inaccessible to and slanted against those lower in the hierarchy. In chapter 6, Yossarian (the protagonist) is told that Catch-22 requires him to do anything hiscommanding officertells him to do, regardless of whether these orders contradict orders from the officer's superiors.[3] In a final episode, Catch-22 is described to Yossarian by an old woman recounting an act of violence by soldiers:[4][5] "Catch-22 says they have a right to do anything we can't stop them from doing." "What the hell are you talking about?" Yossarian shouted at her in bewildered, furious protest. "How did you know it was Catch-22? Who the hell told you it was Catch-22?" "The soldiers with the hard white hats and clubs. The girls were crying. 'Did we do anything wrong?' they said. The men said no and pushed them away out the door with the ends of their clubs. 'Then why are you chasing us out?' the girls said. 'Catch-22,' the men said. All they kept saying was 'Catch-22, Catch-22.' What does it mean, Catch-22? What is Catch-22?" "Didn't they show it to you?" Yossarian demanded, stamping about in anger and distress. "Didn't you even make them read it?" "They don't have to show us Catch-22," the old woman answered. 
"The law says they don't have to." "What law says they don't have to?" "Catch-22." According to literature professor Ian Gregson, the old woman's narrative defines "Catch-22" more directly as the "brutal operation of power", stripping away the "bogus sophistication" of the earlier scenarios.[6] Besides referring to an unsolvable logicaldilemma, Catch-22 is invoked to explain or justify the military bureaucracy. For example, in the first chapter, it requires Yossarian to sign his name to letters he censors while he is confined to a hospital bed. One clause mentioned in chapter 10 closes a loophole in promotions, which one private had been exploiting to reattain the attractive rank ofprivate first classafter any promotion. Throughcourts-martialfor goingAWOL, he would be busted in rank back to private, but Catch-22 limited the number of times he could do this before being sent to the stockade. At another point in the book, a prostitute explains to Yossarian that she cannot marry him because he is crazy, and she will never marry a crazy man. She considers any man crazy who would marry a woman who is not a virgin. This closed logic loop clearly illustrated Catch-22 because by her logic, all men who refuse to marry her are sane and thus she would consider marriage; but as soon as a man agrees to marry her, he becomes crazy for wanting to marry a non-virgin, and is instantly rejected. At one point, Captain Black attempts to press Milo into deprivingMajor Majorof food as a consequence of not signing a loyalty oath that Major Major was never given an opportunity to sign in the first place. Captain Black asks Milo, "You're not against Catch-22, are you?" In chapter 40, Catch-22 forces Colonels Korn and Cathcart to promote Yossarian to Major and ground him rather than simply sending him home. They fear that if they do not, others will refuse to fly, just as Yossarian did. Heller originally wanted to call the phrase (and hence, the book) by other numbers, but he and his publishers eventually settled on 22. The number has no particular significance; it was chosen more or less foreuphony. The title was originallyCatch-18, but Heller changed it after the popularMila 18was published a short time beforehand.[7][8] The term "catch-22" has filtered into common usage in the English language. In a 1975 interview, Heller said the term would not translate well into other languages.[8] James E. Combs and Dan D. Nimmo suggest that the idea of a "catch-22" has gained popular currency because so many people in modern society are exposed to frustrating bureaucratic logic. They write of the rules of high school and colleges that: This bogus democracy that can be overruled by arbitrary fiat is perhaps a citizen's first encounter with organizations that may profess 'open' and libertarian values, but in fact are closed and hierarchical systems. Catch-22 is an organizational assumption, an unwritten law of informal power that exempts the organization from responsibility and accountability, and puts the individual in the absurd position of being excepted for the convenience or unknown purposes of the organization.[5] Along with George Orwell's "doublethink", "catch-22" has become one of the best-recognized ways to describe the predicament of being trapped by contradictory rules.[9] A significant type of definition ofalternative medicinehas been termed a catch-22. 
In a 1998 editorial co-authored byMarcia Angell, a former editor of theNew England Journal of Medicine, argued that: It is time for the scientific community to stop giving alternative medicine a free ride. There cannot be two kinds of medicine—conventional and alternative. There is only medicine that has been adequately tested and medicine that has not, medicine that works and medicine that may or may not work. Once a treatment has been tested rigorously, it no longer matters whether it was considered alternative at the outset. If it is found to be reasonably safe and effective, it will be accepted. But assertions, speculation, and testimonials do not substitute for evidence. Alternative treatments should be subjected to scientific testing no less rigorous than that required for conventional treatments.[10] This definition has been described byRobert L. Parkas a logical catch-22 which ensures that anycomplementary and alternative medicine(CAM) method which is proven to work "would no longer be CAM, it would simply be medicine."[11] U.S. Circuit JudgeDon Willettreferred toqualified immunity, which requires a violation of constitutional rights to have been previously established in order for a victim to claim damages, as a catch-22: "Section 1983 meets Catch-22. Important constitutional questions go unanswered precisely because those questions are yet unanswered. Courts then rely on that judicial silence to conclude there's no equivalent case on the books. No precedent = no clearly established law = no liability. An Escherian Stairwell. Heads government wins, tails plaintiff loses."[12][13] Thearchetypalcatch-22, as formulated byJoseph Heller, involves the case ofJohn Yossarian, aU.S. Army Air Forcesbombardier, who wishes to be grounded from combat flight. This will only happen if he is evaluated by the squadron'sflight surgeonand found "unfit to fly". "Unfit" would be any pilot who is willing to fly such dangerous missions, as one would have to bemadto volunteer for possible death. However, to be evaluated, he mustrequestthe evaluation, an act that is considered sufficient proof for being declared sane. These conditions make it impossible to be declared "unfit". The "Catch-22" is that "anyone who wants to get out of combat duty isn't really crazy".[14]Hence, pilots who request a mental fitness evaluationaresane, and therefore must fly in combat. At the same time, if an evaluation is not requested by the pilot, he will never receive one and thus can never be found insane, meaning he must also fly in combat. Therefore, Catch-22 ensures that no pilot can ever be grounded for being insane even if he is. A logical formulation of this situation is: The philosopher Laurence Goldstein argues that the "airman's dilemma" is logically not even a condition that is true under no circumstances; it is a "vacuousbiconditional" that is ultimately meaningless. Goldstein writes:[15] The catch is this: what looks like a statement of the conditions under which an airman can be excused flying dangerous missions reduces not to the statement (which could be a mean way of disguising an unpleasant truth), but to the worthlessly empty announcement If the catch were (i), that would not be so bad—an airman would at least be able to discover that under no circumstances could he avoid combat duty. But Catch-22 is worse—a welter of words that amounts to nothing; it is without content, it conveys no information at all.
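One plausible way to render the logical formulation alluded to above, in illustrative notation that is neither Heller's nor Goldstein's own, is the following: an airman is excused from combat duty only if he is insane and asks to be evaluated, while asking is itself taken as evidence of sanity, so the condition for being excused can never be met.

    % Illustrative formalization (not Heller's or Goldstein's own notation):
    % E(x): airman x is excused from combat duty
    % I(x): x is insane          R(x): x requests a mental evaluation
    \begin{align*}
      E(x) &\leftrightarrow I(x) \land R(x)       && \text{(the stated rule)} \\
      R(x) &\rightarrow \lnot I(x)                && \text{(requesting shows sanity)} \\
      \therefore\quad E(x) &\leftrightarrow \bot  && \text{(no airman can ever be excused)}
    \end{align*}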
https://en.wikipedia.org/wiki/Catch-22_(logic)
Configuration management(CM) is a management process for establishing and maintaining consistency of a product's performance, functional, and physical attributes with its requirements, design, and operational information throughout its life.[1][2]The CM process is widely used by military engineering organizations to manage changes throughout thesystem lifecycleofcomplex systems, such asweaponsystems,military vehicles, andinformation systems. Outside the military, the CM process is also used with IT service management as defined byITIL, and with otherdomain modelsin the civil engineering and otherindustrial engineeringsegments such as roads, bridges,canals, dams, and buildings.[3][4][5] CM applied over the life cycle of a system provides visibility and control of its performance, functional, and physical attributes. CM verifies that a system performs as intended, and is identified and documented in sufficient detail to support its projected life cycle. The CM process facilitates orderly management of system information and system changes for such beneficial purposes as to revise capability; improve performance, reliability, or maintainability; extend life; reduce cost; reduce risk and liability; or correct defects. The relatively minimal cost of implementing CM is returned manyfold in cost avoidance. The lack of CM, or its ineffectual implementation, can be very expensive and sometimes can have such catastrophic consequences such as failure of equipment or loss of life. CM emphasizes the functional relation between parts, subsystems, and systems for effectively controlling system change. It helps to verify that proposed changes are systematically considered to minimize adverse effects. Changes to the system are proposed, evaluated, and implemented using a standardized, systematic approach that ensures consistency, and proposed changes are evaluated in terms of their anticipated impact on the entire system. CM verifies that changes are carried out as prescribed and that documentation of items and systems reflects their true configuration. A complete CM program includes provisions for the storing, tracking, and updating of all system information on a component, subsystem, and system basis.[6] A structured CM program ensures that documentation (e.g., requirements, design, test, and acceptance documentation) for items is accurate and consistent with the actual physical design of the item. In many cases, without CM, the documentation exists but is not consistent with the item itself. For this reason, engineers, contractors, and management are frequently forced to develop documentation reflecting the actual status of the item before they can proceed with a change. Thisreverse engineeringprocess is wasteful in terms of human and other resources and can be minimized or eliminated using CM. Configuration Management originated in theUnited States Department of Defensein the 1950s as a technical management discipline for hardware material items—and it is now a standard practice in virtually every industry. The CM process became its own technical discipline sometime in the late 1960s when the DoD developed a series ofmilitary standardscalled the "480 series" (i.e., MIL-STD-480, MIL-STD-481 and MIL-STD-483) that were subsequently issued in the 1970s. 
In 1991, the "480 series" was consolidated into a single standard known as the MIL–STD–973 that was then replaced by MIL–HDBK–61 pursuant to a general DoD goal that reduced the number of military standards in favor of industrytechnical standardssupported bystandards developing organizations(SDO).[7]This marked the beginning of what has now evolved into the most widely distributed and accepted standard on CM,ANSI–EIA–649–1998.[8]Now widely adopted by numerous organizations and agencies, the CM discipline's concepts includesystems engineering(SE),Integrated Logistics Support(ILS),Capability Maturity Model Integration(CMMI),ISO 9000,Prince2project management method,COBIT,ITIL,product lifecycle management, andApplication Lifecycle Management. Many of these functions and models have redefined CM from its traditional holistic approach to technical management. Some treat CM as being similar to a librarian activity, and break out change control or change management as a separate or stand alone discipline. CM is the practice of handling changes systematically so that asystemmaintains itsintegrityover time. CM implements the policies, procedures, techniques, and tools that manage, evaluate proposed changes, track the status of changes, and maintain an inventory of system and support documents as the system changes. CM programs and plans provide technical and administrative direction to the development and implementation of the procedures, functions, services, tools, processes, and resources required to successfully develop and support a complex system. During system development, CM allowsprogram managementto track requirements throughout the life-cycle through acceptance and operations and maintenance. As changes inevitably occur in the requirements and design, they must be approved and documented, creating an accurate record of the system status. Ideally the CM process is applied throughout thesystem lifecycle. Most professionals mix up or get confused withAsset management(AM, see alsoISO/IEC 19770), where it inventories the assets on hand. The key difference between CM and AM is that the former does not manage the financial accounting aspect but on service that the system supports or in other words, that the later (AM) is trying to realize value from an IT asset.[9][10][11] The CM process for both hardware- and software-configuration items comprises five distinct disciplines as established in the MIL–HDBK–61A[12]and in ANSI/EIA-649. Members of an organization interested in applying a standardchange-managementprocess will employ these disciplines as policies and procedures for establishingbaselines, manage and control change, and monitor and assess the effectiveness and correctness of progress. TheIEEE 12207process IEEE 12207.2 also has these activities and adds "Release management and delivery".The five disciplines are: The software configuration management (SCM) process is looked upon by practitioners as the best solution to handling changes in software projects. It identifies the functional and physical attributes of software at various points in time, and performs systematic control of changes to the identified attributes for the purpose of maintaining software integrity and traceability throughout the software development life cycle. The SCM process further defines the need to trace changes, and the ability to verify that the final delivered software has all of the planned enhancements that are supposed to be included in the release. 
It identifies four procedures that must be defined for each software project to ensure that a sound SCM process is implemented. They are: These terms and definitions change from standard to standard, but are essentially the same. ITILspecifies the use of a configuration management system (CMS) orconfiguration management database(CMDB) as a means of achieving industry best practices for Configuration Management. CMDBs are used to track Configuration Items (CIs) and the dependencies between them, where CIs represent the things in an enterprise that are worth tracking and managing, such as but not limited to computers, software, software licenses, racks, network devices, storage, and even the components within such items. CMS helps manage afederatedcollection of CMDBs. The benefits of a CMS/CMDB includes being able to perform functions like root cause analysis, impact analysis, change management, and current state assessment for future state strategy development. Configuration Management(CM) is an ITIL-specific ITSM process that tracks all of the individual CIs in anIT systemwhich may be as simple as a single server, or as complex as the entire IT department. In large organizations a configuration manager may be appointed to oversee and manage the CM process. In ITIL version 3, this process has been renamed asService Asset and Configuration Management. Forinformation assurance, CM can be defined as the management of security features and assurances through control of changes made to hardware, software, firmware, documentation, test, test fixtures, and test documentation throughout the life cycle of an information system.[13][better source needed]CM for information assurance, sometimes referred to assecure configuration management(SCM), relies upon performance, functional, and physical attributes of IT platforms and products and their environments to determine the appropriate security features and assurances that are used to measure a system configuration state. For example, configuration requirements may be different for anetwork firewallthat functions as part of an organization's Internet boundary versus one that functions as an internal local network firewall. Configuration management is used to maintain an understanding of the status of complex assets with a view to maintaining the highest level of serviceability for the lowest cost. Specifically, it aims to ensure that operations are not disrupted due to the asset (or parts of the asset) overrunning limits of planned lifespan or below quality levels. In the military, this type of activity is often classed as "mission readiness", and seeks to define which assets are available and for which type of mission; a classic example is whether aircraft on board an aircraft carrier are equipped with bombs for ground support or missiles for defense. Configuration management can be used to maintainOSconfiguration files.[14]Many of these systems utilizeInfrastructure as Codeto define and maintain configuration.[15] ThePromise theoryof configuration maintenance was developed byMark Burgess,[16][17][18]with a practical implementation on present day computer systems in the software CFEngine able to perform real time repair as well as preventive maintenance. Understanding the "as is" state of an asset and its major components is an essential element in preventive maintenance as used in maintenance, repair, and overhaul andenterprise asset managementsystems. Complex assets such as aircraft, ships, industrial machinery etc. 
depend on many different components being serviceable. This serviceability is often defined in terms of the amount of usage the component has had since it was new, since fitted, since repaired, the amount of use it has had over its life and several other limiting factors. Understanding how near the end of their life each of these components is has been a major undertaking involving labor-intensive record keeping until recent developments in software. Many types of component use electronic sensors to capture data which provides livecondition monitoring. This data is analyzed on board or at a remote location by computer to evaluate its current serviceability and increasingly its likely future state using algorithms which predict potential future failures based on previous examples of failure through field experience and modeling. This is the basis for "predictive maintenance". Availability of accurate and timely data is essential in order for CM to provide operational value and a lack of this can often be a limiting factor. Capturing and disseminating the operating data to the various support organizations is becoming an industry in itself. The consumers of this data have grown more numerous and complex with the growth of programs offered by original equipment manufacturers (OEMs). These are designed to offer operators guaranteed availability and make the picture more complex with the operator managing the asset but the OEM taking on the liability to ensure its serviceability. A number of standards support or include configuration management,[19]including: More recently[when?]configuration management has been applied to large construction projects which can often be very complex and have a huge number of details and changes that need to be documented. Construction agencies such as the Federal Highway Administration have used configuration management for their infrastructure projects.[32]There are construction-based configuration management tools that aim to document change orders and RFIs in order to ensure a project stays on schedule and on budget. These programs can also store information to aid in the maintenance and modification of the infrastructure when it is completed. One such application, CCSNet, was tested in a case study funded by the Federal Transportation Administration (FTA) in which the efficacy of configuration management was measured through comparing the approximately 80% complete construction of the Los Angeles County Metropolitan Transit Agency (LACMTA) first and second segments of the Red Line, a $5.3 billion rail construction project. This study yielded results indicating a benefit to using configuration management on projects of this nature.[33]
https://en.wikipedia.org/wiki/Configuration_management
Insoftware engineering,couplingis the degree of interdependence between softwaremodules, a measure of how closely connected two routines or modules are,[1]and the strength of the relationships between modules.[2]Coupling is not binary but multi-dimensional.[3] Coupling is usually contrasted withcohesion.Low couplingoften correlates with high cohesion, and vice versa. Low coupling is often thought to be a sign of a well-structuredcomputer systemand a good design, and when combined with high cohesion, supports the general goals of highreadabilityandmaintainability.[citation needed] Thesoftware quality metricsof coupling and cohesion were invented byLarry Constantinein the late 1960s as part of astructured design, based on characteristics of “good” programming practices that reduced maintenance and modification costs. Structured design, including cohesion and coupling, were published in the articleStevens, Myers & Constantine(1974)[4]and the bookYourdon & Constantine(1979),[5]and the latter subsequently became standard terms. Coupling can be "low" (also "loose" and "weak") or "high" (also "tight" and "strong"). Some types of coupling, in order of highest to lowest coupling, are as follows: A module here refers to a subroutine of any kind, i.e. a set of one or more statements having a name and preferably its own set of variable names. In recent work various other coupling concepts have been investigated and used as indicators for different modularization principles used in practice.[7] The goal of defining and measuring this type of coupling is to provide a run-time evaluation of a software system. It has been argued that static coupling metrics lose precision when dealing with an intensive use of dynamic binding or inheritance.[8]In the attempt to solve this issue, dynamic coupling measures have been taken into account. This kind of a coupling metric considers the conceptual similarities between software entities using, for example, comments and identifiers and relying on techniques such aslatent semantic indexing(LSI). Logical coupling (or evolutionary coupling or change coupling) analysis exploits the release history of a software system to find change patterns among modules or classes: e.g., entities that are likely to be changed together or sequences of changes (a change in a class A is always followed by a change in a class B). According to Gregor Hohpe, coupling is multi-dimensional:[3] Tightly coupled systems tend to exhibit the following developmental characteristics, which are often seen as disadvantages: Whether loosely or tightly coupled, a system's performance is often reduced by message and parameter creation, transmission, translation (e.g. marshaling) and message interpretation (which might be a reference to a string, array or data structure), which require less overhead than creating a complicated message such as aSOAPmessage. Longer messages require more CPU and memory to produce. To optimize runtime performance, message length must be minimized and message meaning must be maximized. One approach to decreasing coupling isfunctional design, which seeks to limit the responsibilities of modules along functionality. Coupling increases between two classesAandBif: Low coupling refers to a relationship in which one module interacts with another module through a simple and stable interface and does not need to be concerned with the other module's internal implementation (seeInformation Hiding). 
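A small, purely illustrative C sketch of two of the coupling types named above: control coupling, where the caller passes a flag that steers the callee's internal logic, and data coupling, where the callee receives only the data it operates on through a narrow, stable interface. The function and parameter names are invented for illustration.

    #include <stdio.h>

    /* Control coupling: the caller passes a flag that selects behaviour inside
     * the callee, so the caller must know about the callee's internal modes. */
    double price_with_mode(double amount, int apply_discount_flag)
    {
        if (apply_discount_flag)
            return amount * 0.9;
        return amount;
    }

    /* Data coupling: the callee receives only the data it needs; the caller
     * composes such functions without knowing anything about their internals. */
    double apply_discount(double amount, double rate)
    {
        return amount * (1.0 - rate);
    }

    int main(void)
    {
        printf("%.2f\n", price_with_mode(100.0, 1));    /* 90.00, control-coupled */
        printf("%.2f\n", apply_discount(100.0, 0.10));  /* 90.00, data-coupled    */
        return 0;
    }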
Systems such asCORBAorCOMallow objects to communicate with each other without having to know anything about the other object's implementation. Both of these systems even allow for objects to communicate with objects written in other languages. Coupling describes the degree and nature of dependency between software components, focusing on what they share (e.g., data, control flow, technology) and how tightly they are bound. It evaluates two key dimensions: strength, which measures how difficult it is to change the dependency, and scope (or visibility), which indicates how widely the dependency is exposed across modules or boundaries. Traditional coupling types typically include content coupling, common coupling, control coupling, stamp coupling, external coupling, and data coupling.[9][10][11] Connascence, introduced by Meilir Page-Jones, provides a systematic framework for analyzing and measuring coupling dependencies. It evaluates dependencies based on three dimensions: strength, which measures the effort required to refactor or modify the dependency; locality, which considers how physically or logically close dependent components are in the codebase; and degree, which measures how many components are affected by the dependency. Connascence can be categorized into static (detectable at compile-time) and dynamic (detectable at runtime) forms. Static connascence refers to compile-time dependencies, such as method signatures, while dynamic connascence refers to runtime dependencies, which can manifest in forms like connascence of timing, values, or algorithm.[9][10][11] Each coupling flavor can exhibit multiple types of connascence, a specific type, or, in rare cases, none at all, depending on how the dependency is implemented. Common types of connascence include connascence of name, type, position, and meaning. Certain coupling types naturally align with specific connascence types; for example, data coupling often involves connascence of name or type. However, not every combination of coupling and connascence is practically meaningful. Dependencies relying on parameter order in a method signature demonstrate connascence of position, which is fragile and difficult to refactor because reordering parameters breaks the interface. In contrast, connascence of name, which relies on field or parameter names, is generally more resilient to change. Connascence types themselves exhibit a natural hierarchy of strength, with connascence of name typically considered weaker than connascence of meaning.[9][10][11] Dependencies spanning module boundaries or distributed systems typically have higher coordination costs, increasing the difficulty of refactoring and propagating changes across distant boundaries. Modern practices, such as dependency injection and interface-based programming, are often employed to reduce coupling strength and improve the maintainability of dependencies.[9][10][11] While coupling identifies what is shared between components, connascence evaluates how those dependencies behave, how changes propagate, and how difficult they are to refactor. Strength, locality, and degree are interrelated; dependencies with high strength, wide scope, and spanning distant boundaries are significantly harder to refactor and maintain. 
Together, coupling provides a high-level overview of dependency relationships, while connascence offers a granular framework for analyzing dependency strength, locality, degree, and resilience to change, supporting the design of maintainable and robust systems.[9][10][11] Coupling and cohesion are terms which occur together very frequently. Coupling refers to the interdependencies between modules, while cohesion describes how related the functions within a single module are. Low cohesion implies that a given module performs tasks which are not very related to each other and hence can create problems as the module becomes large. Coupling in Software Engineering[12] describes a version of metrics associated with this concept. For data and control flow coupling it counts the input and output data parameters ($d_i$, $d_o$) and the input and output control parameters ($c_i$, $c_o$); for global coupling it counts the global variables used as data ($g_d$) and as control ($g_c$); for environmental coupling it counts the modules called (fan-out, $w$) and the modules calling the module under consideration (fan-in, $r$). The metric is then $\mathrm{Coupling}(C) = 1 - \dfrac{1}{d_i + 2c_i + d_o + 2c_o + g_d + 2g_c + w + r}$. The value of $C$ grows as the module becomes more coupled; it ranges from approximately 0.67 (low coupling) towards 1.0 (highly coupled). For example, if a module has only a single input and a single output data parameter, $C = 1 - \frac{1}{1 + 0 + 1 + 0 + 0 + 0 + 1 + 0} = 1 - \frac{1}{3} \approx 0.67$. If a module has 5 input and 5 output data parameters, an equal number of control parameters, and accesses 10 items of global data, with a fan-in of 3 and a fan-out of 4, then $C = 1 - \frac{1}{5 + 2\times 5 + 5 + 2\times 5 + 10 + 0 + 3 + 4} \approx 0.98$.
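A direct transcription of the formula above into C; the parameter names mirror the symbols (dout stands in for $d_o$ because "do" is a C keyword), and the call in main() reproduces the second worked example from the text.

    #include <stdio.h>

    /* Coupling metric C = 1 - 1/(di + 2*ci + dout + 2*co + gd + 2*gc + w + r).
     * Assumes at least one count is non-zero, so the denominator is positive. */
    double coupling(int di, int ci, int dout, int co, int gd, int gc, int w, int r)
    {
        double denom = di + 2.0 * ci + dout + 2.0 * co + gd + 2.0 * gc + w + r;
        return 1.0 - 1.0 / denom;
    }

    int main(void)
    {
        /* 5 input and 5 output data parameters, the same number of control
           parameters, 10 global data items, fan-out 4, fan-in 3 */
        printf("%.2f\n", coupling(5, 5, 5, 5, 10, 0, 4, 3));  /* prints 0.98 */
        return 0;
    }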
https://en.wikipedia.org/wiki/Coupling_(computer_programming)
Incompiler theory,dead-code elimination(DCE,dead-code removal,dead-code stripping, ordead-code strip) is acompiler optimizationto removedead code(code that does not affect the program results). Removing such code has several benefits: it shrinksprogramsize, an important consideration in some contexts, it reduces resource usage such as the number of bytes to be transferred[1]and it allows the running program to avoid executing irrelevantoperations, which reduces itsrunning time. It can also enable further optimizations by simplifying program structure. Dead code includes code that can never be executed (unreachable code), and code that only affectsdead variables(written to, but never read again), that is, irrelevant to the program. Consider the following example written inC. Simple analysis of the uses of values would show that the value ofbafter the first assignment is not used insidefoo. Furthermore,bis declared as a local variable insidefoo, so its value cannot be used outsidefoo. Thus, the variablebisdeadand an optimizer can reclaim its storage space and eliminate its initialization. Furthermore, because the first return statement is executed unconditionally and there is no label after it which a "goto" could reach, no feasible execution path reaches the second assignment tob. Thus, the assignment isunreachableand can be removed. If the procedure had a more complexcontrol flow, such as a label after the return statement and agotoelsewhere in the procedure, then a feasible execution path might exist to the assignment tob. Also, even though some calculations are performed in the function, their values are not stored in locations accessible outside thescopeof this function. Furthermore, given the function returns a static value (96), it may be simplified to the value it returns (this simplification is calledconstant folding). Most advanced compilers have options to activate dead-code elimination, sometimes at varying levels. A lower level might only remove instructions that cannot be executed. A higher level might also not reserve space for unused variables. A yet higher level might determine instructions or functions that serve no purpose and eliminate them. A common use of dead-code elimination is as an alternative to optional code inclusion via apreprocessor. Consider the following code. Because the expression 0 will always evaluate tofalse, the code inside the if statement can never be executed, and dead-code elimination would remove it entirely from the optimized program. This technique is common indebuggingto optionally activate blocks of code; using an optimizer with dead-code elimination eliminates the need for using apreprocessorto perform the same task. In practice, much of the dead code that an optimizer finds is created by other transformations in the optimizer. For example, the classic techniques for operatorstrength reductioninsert new computations into the code and render the older, more expensive computations dead.[2]Subsequent dead-code elimination removes those calculations and completes the effect (without complicating the strength-reduction algorithm). 
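The C examples referred to above might look like the following sketch, reconstructed from the descriptions given (a dead local variable b, an unreachable second assignment, a constant return value of 96, and an always-false if block standing in for optional debugging code); the exact code is illustrative.

    #include <stdio.h>

    /* Sketch of the kind of function described above: an optimizer can delete
     * the dead store to b, the unreachable code after the first return, and can
     * fold the whole function down to its constant result, 96. */
    int foo(void)
    {
        int a = 24;
        int b = 25;   /* dead store: this value of b is never read */
        int c;

        c = a * 4;    /* 24 * 4 = 96 */
        return c;     /* executed unconditionally */
        b = 24;       /* unreachable code, and another dead store */
        return 0;     /* unreachable */
    }

    /* Sketch of the preprocessor-replacement idiom described above: the
     * condition is always false, so dead-code elimination removes the block. */
    void debug_trace(void)
    {
        if (0) {
            printf("debug build only\n");   /* never executed */
        }
    }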
Historically, dead-code elimination was performed using information derived from data-flow analysis.[3] An algorithm based on static single-assignment form (SSA) appears in the original journal article on SSA form by Ron Cytron et al.[4] Robert Shillingsburg (aka Shillner) improved on the algorithm and developed a companion algorithm for removing useless control-flow operations.[5] Dead code is normally considered dead unconditionally, so it is reasonable to attempt to remove it through dead-code elimination at compile time. However, in practice it is also common for code sections to represent dead or unreachable code only under certain conditions, which may not be known at the time of compilation or assembly. Such conditions may be imposed by different runtime environments (for example, different versions of an operating system, or different sets and combinations of drivers or services loaded in a particular target environment), which may require different sets of special cases in the code, sections that in turn become conditionally dead code for the other cases.[6][7] Also, the software (for example, a driver or resident service) may be configurable to include or exclude certain features depending on user preferences, rendering unused code portions useless in a particular scenario.[6][7] While modular software may be developed to load libraries dynamically and only on demand, in most cases it is not possible to load only the relevant routines from a particular library, and even where this is supported, a routine may still include code sections which can be considered dead code in a given scenario but could not be ruled out at compile time. The techniques used to dynamically detect demand, identify and resolve dependencies, remove such conditionally dead code, and recombine the remaining code at load or run time are called dynamic dead-code elimination[8][9][10] or dynamic dead-instruction elimination.[11] Most programming languages, compilers and operating systems offer little or no support beyond dynamic loading of libraries and late linking, so software utilizing dynamic dead-code elimination is very rare in conjunction with languages compiled ahead-of-time or written in assembly language.[12][13][14] However, language implementations doing just-in-time compilation may dynamically optimize for dead-code elimination.[10][15][16] Although with a rather different focus, similar approaches are sometimes also utilized for dynamic software updating and hot patching.
https://en.wikipedia.org/wiki/Dynamic_dead_code_elimination
Apackage managerorpackage management systemis a collection ofsoftware toolsthat automates the process of installing, upgrading, configuring, and removingcomputer programsfor acomputerin a consistent manner.[1] A package manager deals withpackages, distributions of software and data inarchive files. Packages containmetadata, such as the software's name, description of its purpose, version number, vendor,checksum(preferably acryptographic hash function), and a list ofdependenciesnecessary for the software to run properly. Upon installation, metadata is stored in a local packagedatabase. Package managers typically maintain a database of software dependencies and version information to prevent software mismatches and missing prerequisites. They work closely withsoftware repositories,binary repository managers, andapp stores. Package managers are designed to eliminate the need for manual installs and updates. This can be particularly useful for large enterprises whose operating systems typically consist of hundreds or even tens of thousands of distinct software packages.[2] An early package manager was SMIT (and its backend installp) fromIBM AIX.SMITwas introduced with AIX 3.0 in 1989.[citation needed] Early package managers, from around 1994, had no automatic dependency resolution[3]but could already drastically simplify the process of adding and removing software from a running system.[4] By around 1995, beginning withCPAN, package managers began doing the work of downloading packages from a repository, automatically resolving its dependencies and installing them as needed, making it much easier to install, uninstall and update software from a system.[5] A software package is anarchive filecontaining a computer program as well as necessary metadata for its deployment. The computer program can be insource codethat has to be compiled and built first.[6]Package metadata include package description, package version, and dependencies (other packages that need to be installed beforehand). Package managers are charged with the task of finding, installing, maintaining or uninstalling software packages upon the user's command. Typical functions of a package management system include: Computer systems that rely ondynamic librarylinking, instead ofstatic librarylinking, share executable libraries of machine instructions across packages and applications. In these systems, conflicting relationships between different packages requiring different versions of libraries results in a challenge colloquially known as "dependency hell". OnMicrosoft Windowssystems, this is also called "DLL hell" when working with dynamically linked libraries.[7] Modern package managers have mostly solved these problems, by allowing parallel installation of multiple versions of a library (e.g.OPENSTEP'sFrameworksystem), a dependency of any kind (e.g.slotsin GentooPortage), and even of packages compiled with different compiler versions (e.g. dynamic libraries built by theGlasgow Haskell Compiler, where a stableABIdoes not exist), in order to enable other packages to specify which version they were linked or even installed against. System administratorsmay install and maintain software using tools other than package management software. For example, a local administrator maydownloadunpackaged source code, compile it, and install it. This may cause the state of the local system to fall out ofsynchronizationwith the state of the package manager'sdatabase. 
The local administrator will be required to take additional measures, such as manually managing some dependencies or integrating the changes into the package manager. There are tools available to ensure that locally compiled packages are integrated with the package management. For distributions based on .deb and.rpmfiles as well asSlackware Linux, there isCheckInstall, and for recipe-based systems such asGentoo Linuxand hybrid systems such asArch Linux, it is possible to write a recipe first, which then ensures that the package fits into the local package database.[citation needed] Particularly troublesome with softwareupgradesare upgrades of configuration files. Since package managers, at least on Unix systems, originated as extensions offile archiving utilities, they can usually only either overwrite or retain configuration files, rather than applying rules to them. There are exceptions to this that usually apply to kernel configuration (which, if broken, will render the computer unusable after a restart). Problems can be caused if the format of configuration files changes; for instance, if the old configuration file does not explicitly disable new options that should be disabled. Some package managers, such asDebian'sdpkg, allow configuration during installation. In other situations, it is desirable to install packages with the default configuration and then overwrite this configuration, for instance, inheadlessinstallations to a large number of computers. This kind of pre-configured installation is also supported by dpkg. To give users more control over the kinds of software that they are allowing to be installed on their system (and sometimes due to legal or convenience reasons on the distributors' side), software is often downloaded from a number ofsoftware repositories.[8] When a user interacts with the package management software to bring about an upgrade, it is customary to present the user with the list of actions to be executed (usually the list of packages to be upgraded, and possibly giving the old and new version numbers), and allow the user to either accept the upgrade in bulk, or select individual packages for upgrades. Many package managers can be configured to never upgrade certain packages, or to upgrade them only when critical vulnerabilities or instabilities are found in the previous version, as defined by the packager of the software. This process is sometimes calledversion pinning. For instance: Some of the more advanced package management features offer "cascading package removal",[10]in which all packages that depend on the target package and all packages that only the target package depends on, are also removed. Although the commands are specific for every particular package manager, they are to a large extent translatable, as most package managers offer similar functions. TheArch LinuxPacman/Rosetta wiki offers an extensive overview.[16] Package managers likedpkghave existed as early as 1994.[17] Linux distributionsoriented to binary packages rely heavily on package management systems as their primary means of managing and maintaining software. Mobile operating systems such asAndroid(Linux-based) andiOS(Unix-based) rely almost exclusively on their respective vendors'app storesand thus use their own dedicated package management systems. A package manager is often called an "install manager", which can lead to a confusion between package managers andinstallers. 
The differences include: Mostsoftware configuration managementsystems treat building software and deploying software as separate, independent steps. Abuild automationutility typically takes human-readablesource codefiles already on a computer, and automates the process of converting them into a binary executable package on the same or remote computer. Later a package manager typically running on some other computer downloads those pre-built binary executable packages over the internet and installs them. However, both kinds of tools have many commonalities: A few tools, such asMaakandA-A-P, are designed to handle both building and deployment, and can be used as either a build automation utility or as a package manager or both.[18] App storescan also be considered application-level package managers (without the ability to install all levels of programs[19][20]). Unlike traditional package managers, app stores are designed to enable payment for the software itself (instead of for software development), and may only offer monolithic packages with no dependencies or dependency resolution.[21][20]They are usually extremely limited in their management functionality, due to a strong focus on simplification over power oremergence, and common in commercial operating systems and locked-down “smart” devices. Package managers also often have only human-reviewed code. Many app stores, such as Google Play and Apple's App Store, screen apps mostly using automated tools only; malware withdefeat devicescan pass these tests, by detecting when the software is being automatically tested and delaying malicious activity.[22][23][24]There are, however, exceptions; thenpmpackage database, for instance, relies entirely onpost-publication reviewof its code,[25][26]while theDebianpackage database has an extensive human review process before any package goes into the main stable database. TheXZ Utils backdoorused years of trust-building to insert a backdoor, which was nonetheless caught while in the testing database. Also known asbinary repository manager, it is a software tool designed to optimize the download and storage of binary files, artifacts and packages used and produced in thesoftware development process.[27]These package managers aim to standardize the way enterprises treat all package types. They give users the ability to apply security and compliance metrics across all artifact types. Universal package managers have been referred to as being at the center of aDevOps toolchain.[28] Each package manager relies on the format and metadata of the packages it can manage. That is, package managers need groups of files to be bundled for the specific package manager along with appropriate metadata, such as dependencies. Often, a core set of utilities manages the basic installation from these packages and multiple package managers use these utilities to provide additional functionality. For example,yumrelies onrpmas abackend. Yum extends the functionality of the backend by adding features such as simple configuration for maintaining a network of systems. As another example, theSynaptic Package Managerprovides a graphical user interface by using theAdvanced Packaging Tool (apt)library, which, in turn, relies ondpkgfor core functionality. Alienis a program that converts between differentLinux package formats, supporting conversion betweenLinux Standard Base(LSB) compliant.rpmpackages,.deb, Stampede (.slp),Solaris(.pkg) andSlackware(.tgz,.txz, .tbz, .tlz) packages. 
In mobile operating systems,Google PlayconsumesAndroid application package(APK) package format whileMicrosoft StoreusesAPPXandXAPformats. (Both Google Play and Microsoft Store have eponymous package managers.) By the nature offree and open source software, packages under similar and compatible licenses are available for use on a number of operating systems. These packages can be combined and distributed using configurable and internally complex packaging systems to handle many permutations of software and manage version-specific dependencies and conflicts. Some packaging systems of free and open source software are also themselves released as free and open source software. One typical difference between package management in proprietary operating systems, such as Mac OS X and Windows, and those in free and open source software, such as Linux, is that free and open source software systems permit third-party packages to also be installed and upgraded through the same mechanism, whereas the package managers of Mac OS X and Windows will only upgrade software provided by Apple and Microsoft, respectively (with the exception of some third party drivers in Windows). The ability to continuously upgrade third-party software is typically added by adding theURLof the corresponding repository to the package management's configuration file. Beside the system-level application managers, there are some add-on package managers for operating systems with limited capabilities and forprogramming languagesin which developers need the latestlibraries. Unlike system-level package managers, application-level package managers focus on a small part of the software system. They typically reside within a directory tree that is not maintained by the system-level package manager, such asc:\cygwinor/opt/sw.[29]However, this might not be the case for the package managers that deal with programming libraries, leading to a possible conflict as both package managers may claim to "own" a file and might break upgrades. Ian Murdockhad commented that package management is "the single biggest advancementLinuxhas brought to the industry", that it blurs the boundaries between operating system and applications, and that it makes it "easier to push new innovations [...] into the marketplace and [...] evolve the OS".[30] There is also a conference for package manager developers known as PackagingCon. It was established in 2021 with the aim to understand different approaches to package management.[31]
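The dependency-resolution step described earlier, in which a package manager installs a package's dependencies before the package itself, can be sketched as a small depth-first walk over package metadata. The package names and the in-memory "repository" below are invented for illustration; a real package manager consults an on-disk database and remote repositories, and also handles versions, conflicts, and cycles.

    #include <stdio.h>
    #include <string.h>

    struct package {
        const char *name;
        const char *deps[4];   /* NULL-terminated list of dependency names */
    };

    /* Toy repository metadata, invented for illustration. */
    static const struct package repo[] = {
        { "editor",  { "libgui", "libtext", NULL } },
        { "libgui",  { "libc", NULL } },
        { "libtext", { "libc", NULL } },
        { "libc",    { NULL } },
    };

    static const char *installed[16];
    static int installed_count;

    static int is_installed(const char *name)
    {
        for (int i = 0; i < installed_count; i++)
            if (strcmp(installed[i], name) == 0)
                return 1;
        return 0;
    }

    static const struct package *find(const char *name)
    {
        for (size_t i = 0; i < sizeof repo / sizeof repo[0]; i++)
            if (strcmp(repo[i].name, name) == 0)
                return &repo[i];
        return NULL;
    }

    /* Depth-first install: dependencies before the package itself. */
    static void install(const char *name)
    {
        if (is_installed(name))
            return;
        const struct package *p = find(name);
        if (!p) {
            fprintf(stderr, "unknown package: %s\n", name);
            return;
        }
        for (int i = 0; p->deps[i] != NULL; i++)
            install(p->deps[i]);
        installed[installed_count++] = name;
        printf("installing %s\n", name);
    }

    int main(void)
    {
        install("editor");   /* prints libc, libgui, libtext, editor in that order */
        return 0;
    }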
https://en.wikipedia.org/wiki/Package_manager
TrueOS(formerlyPC-BSDorPCBSD) is a discontinued[3]Unix-like, server-orientedoperating systembuilt upon the most recent releases ofFreeBSD-CURRENT.[4] Up to 2018 it aimed to be easy to install by using a graphical installation program, and easy and ready-to-use immediately by providingKDE SC,Lumina,LXDE,MATE, orXfce[5]as thedesktop environment. In June 2018 the developers announced that since TrueOS had become the core OS to provide a basis for other projects, the graphical installer had been removed.[6]Graphical end-user-oriented OSes formerly based on TrueOS wereGhostBSDand Trident.[7]TrueOS provided official binaryNvidiaandInteldrivers for hardware acceleration and an optional 3D desktop interface throughKWin, andWineis ready-to-use for runningMicrosoft Windowssoftware. TrueOS was also able to run Linux software[8]in addition toFreeBSD Ports collectionand it had its own.txzpackage manager. TrueOS supportedOpenZFSand the installer offereddisk encryptionwithgeli. Development of TrueOS ended in 2020.[3] TrueOS was founded by FreeBSD professional Kris Moore in early 2005 as PC-BSD. In August 2006 it was voted the most beginner-friendly operating system by OSWeekly.com.[9] The first beta of the PC-BSD consisted of only a GUI installer to get the user up and running with a FreeBSD 6 system with KDE3 pre-configured. This was a major innovation for the time as anyone wishing to install FreeBSD would have to manually tweak and run through a text installer. Kris Moore's goal was to make FreeBSD easy for everyone to use on the desktop and has since diverged even more in the direction of usability by including additional GUI administration tools and .pbi application installers. PC-BSD's application installer management involved a different approach to installing software than many otherUnix-likeoperating systems, up to and including version 8.2, by means of the pbiDIR website.[10]Instead of using theFreeBSD Portstree directly (although it remained available), PC-BSD used files with the.pbifilename extension(Push Button Installer) which, when double-clicked, brought up an installationwizardprogram. An autobuild system tracked the FreeBSD ports collection and generated new .pbi files daily. All software packages and dependencies were installed from inside of the .pbi files into their own self-contained directories in/Programs. This convention was aimed to decrease confusion about where binary programs reside, and to remove the possibility of a package breaking if system libraries are upgraded or changed, and to preventdependency hell. On October 10, 2006, PC-BSD was acquired by enterprise hardware provideriXsystems.[11][12]iXsystems employed Kris Moore as a full-time developer and leader of the project. In November 2007, iXsystems entered into a distribution agreement withFry's Electronicswhereby Fry's Electronics stores nationwide carry boxed copies of PC-BSD version 1.4 (Da Vinci Edition).[13]In January 2008, iXsystems entered into a similar agreement withMicro Center.[14] On September 1, 2016, the PC-BSD team announced that the name of the operating system would change to TrueOS.[4]Along with the rebranding, the project also became a rolling release distribution, based on the FreeBSD-CURRENT branch.[15] On November 15, 2016, TrueOS began the transition from FreeBSD's rc.d toOpenRCas the default init system. Apart fromGentoo/Alt, where OpenRC was initially developed, this is the only other major BSD based operating system using OpenRC. 
In July 2018, TrueOS announced that they would spin off the desktop edition into a separate project namedProject Trident.[16][17] Development of TrueOS ended in 2020 and the developers recommended users move to other BSD-based operating systems.[3] Since version 7, PC-BSD began following the same numbering system asFreeBSD. Since version 9.0, theKDE SC, customized to support tighter application integration and the .txz package management system, was no longer the onlydesktop environmentsupported by PC-BSD. While manual installation of other desktops such asXfceandGNOMEhad been technically possible in earlier releases, none of these were supported in the earlier versions, and major functionality was lost when not using PC-BSD's special build of KDE SC.[48]Starting with version 9.0, PC-BSD added other desktop environments, including GNOME, Xfce,LXDE, andMATE. PC-BSD used to support bothamd64andi686architectures. Support for i686 was dropped in version 9.2.[49][50] Starting in September 2016 with the rebranding of PC-BSD, TrueOS became a rolling release distribution based on FreeBSD's current branch.[4][15] TrueOS'spackage managertakes a similar approach to installing software to many otherUnix-likeoperating systems. Instead of using theFreeBSD Portstree directly (although it remains available), TrueOS uses files with the.txzfilename extensionpackages which contain compiled ports. An autobuild system tracked the FreeBSD ports collection and generated new .txz files daily. The TrueOS package management system aims to be visually similar to that of major operating systems such asMicrosoft WindowsandApplemacOS, where applications are installed from a single download link with graphical prompts, while maintaining internally the traditional .txz package management systems that many Unix-like systems use.[51]The TrueOS package manager also takes care of creating categorized links in the KDE menu and on the KDE SC desktop. In 2014, the PC-BSD project announced its development of a newdesktop environment, from scratch, namedLumina. Ken Moore is the main developer of Lumina, which is based on theQttoolkit.[52] As of July 2016, Lumina has its own web site.[53] The desktop environment is not an application development toolkit, and aims to be a graphical interface that only uses plugins for customization.[54] TrueOS was originally licensed under theGNU General Public License(GPL) because the developers were under the impression that applications using theQt, which TrueOS uses for its interface development, must be licensed under the GPL or theQ Public License. Upon discovering that there was, in fact, no such restriction, the TrueOS developers laterrelicensedthe code under a BSD-like 3-clause license.[55] TrueOSand the TrueOS logo are registeredtrademarksof iXsystems Inc.[56] The New York City *BSD User Group runs a service named dmesgd,[57]which provides user-submitteddmesginformation for different computer hardware (laptops,workstations,single-board computers,embedded systems,virtual machines, etc.) capable of running TrueOS. According to the TrueOS wiki,[58]TrueOS has the following hardware requirements: UEFIsupport (foramd64only) has been added to the installer and the boot manager since version 10.1 with the default EFI boot manager to berEFInd.[59]This includesACPIdetection and setup of Root System Description Pointer (RSDP),[60]eXtended System Descriptor Table (XSDT),[61]and Root System Description Table (RSDT)[62]pass-through values to thekernel. 
A fresh installation is required to enable UEFI support, because it depends on the creation of a small FAT partition (see the sketch below). The current UEFI implementation does not support Secure Boot.
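The following is only an illustrative sketch of how such a small FAT-formatted EFI system partition is typically created by hand on FreeBSD-derived systems; the disk name ada0 and the partition size are assumptions for the example and are not taken from the TrueOS installer.

# Add a small EFI system partition to a GPT-partitioned disk (illustrative values)
gpart add -t efi -s 260M ada0
# Format the new partition (assumed here to be ada0p1) as FAT32
newfs_msdos -F 32 /dev/ada0p1
# The EFI boot manager (rEFInd, per the text above) is then installed onto this
# partition so that the firmware can locate and load it.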
https://en.wikipedia.org/wiki/TrueOS#History
A software appliance is a software application combined with just enough operating system (JeOS) to run optimally on industry-standard hardware (typically a server) or in a virtual machine.[1] It is a software distribution or firmware that implements a computer appliance.[2][3]

Virtual appliances are a subset of software appliances. The main distinction is the packaging format and the specificity of the target platform. A virtual appliance is a virtual machine image designed to run on a specific virtualization platform, while a software appliance is often packaged in a more generally applicable image format (e.g., Live CD) that supports installation on physical machines as well as on multiple types of virtual machines.[4][5][6] Installing a software appliance in a virtual machine and packaging the result as an image creates a virtual appliance. Software appliances have several benefits over traditional software applications that are installed on top of an operating system.

A software appliance can be packaged in a virtual machine format as a virtual appliance, allowing it to be run within a virtual machine container. A virtual appliance can be built using either a standard virtual machine format such as the Open Virtualization Format (OVF) or a format specific to a particular virtual machine container (for example, VMware, VirtualBox, or Amazon EC2). Containers and their images (such as those provided by Docker and Docker Hub) can be seen as an implementation of software appliances. Alternatively, a software appliance can be packaged as a Live CD image, allowing it to run on real hardware in addition to most types of virtual machines. This allows developers to avoid the complexities involved in supporting multiple incompatible virtual machine image formats and to focus on the lowest common denominator instead (ISO images are supported by most virtual machine platforms).

Commercial software appliances are typically sold as a subscription service (pay-as-you-go) and are an alternative approach to software as a service. Customers can receive all service and maintenance from the application vendor, eliminating the need to manage multiple maintenance streams, licenses, and service contracts. In some cases, the application vendor may install the software appliance on a piece of hardware prior to delivery to the customer, thereby creating a computer appliance. In both cases, the primary value to the customer remains the simplicity of purchase, deployment, and maintenance.
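As a purely illustrative sketch of the OVF packaging route mentioned above, assuming VirtualBox's command-line tooling and a hypothetical virtual machine named appliance-vm (neither of which is specified in the text):

# Export an existing virtual machine as an OVF/OVA virtual appliance
VBoxManage export appliance-vm -o appliance.ova
# Import the resulting appliance on another VirtualBox host
VBoxManage import appliance.ova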
https://en.wikipedia.org/wiki/Software_appliance
A static library or statically linked library contains functions and data that can be included in a consuming computer program at build-time such that the library does not need to be accessible in a separate file at run-time.[1] If all libraries are statically linked, then the resulting executable is stand-alone, a.k.a. a static build. A static library is either merged with other static libraries and object files at build-time to form a single executable, or loaded at run-time into the address space of its corresponding executable at a static memory offset determined at compile-time/link-time.

Historically, all library linking was static, but today dynamic linking is an alternative that entails inherent trade-offs. An advantage of static over dynamic linking is that the application is guaranteed to have the library routines it requires available at run-time, as the code for those routines is embedded in the executable file. With dynamic linking, not only might the library file be missing, but even if found, it could be an incompatible version. Static linking avoids DLL Hell, or more generally dependency hell, and therefore can simplify development, distribution, and installation. Another trade-off is the memory used to load the library: with static linking, a smart linker includes only the code that is actually used, but for a dynamic library, the entire library is loaded into memory. Another trade-off is that the executable is larger with static linking than with dynamic. However, if the size of an application is measured as the sum of the executable and its dynamic libraries, then the overall size is generally less for static. Then again, if the same dynamic library is used by multiple applications, the overall size of the combined applications plus DLLs might be less with dynamic linking.

A common practice on Windows is to install a program's dynamic libraries with the program file.[2] On Unix-like systems this is less common, as package management systems can be used to ensure the correct library files are available in a shared, system location. This allows library files to be shared between applications, leading to space savings. It also allows the library to be updated to fix bugs and security flaws without updating the applications that use the library. Shared, dynamic libraries, however, lead to the risk of dependency problems. In practice, many executables use both static and dynamic libraries.

Any static library function can call a function or procedure in another static library. The linker and loader handle this the same way as for other kinds of object files. Static library files may be linked at run time by a linking loader (e.g., the X11 module loader); however, whether such a process can be called static linking is controversial.

Static libraries can be easily created in C or in C++. These two languages provide storage-class specifiers for indicating external or internal linkage, in addition to other features. To create such a library, the exported functions/procedures and other exported objects (variables) must be given external linkage (i.e., by not using the C static keyword). Static library filenames usually have the ".a" extension on Unix-like systems[1] and the ".lib" extension on Microsoft Windows. For example, on a Unix-like system, an archive named libclass.a can be created from the files class1.o, class2.o, and class3.o, and a program that depends on those object files can then be linked against the archive directly, against a copy placed in a standard library path such as /usr/local/lib, or by naming the library during linking, instead of listing each object file on the command line; a sketch of the typical commands follows below.[1]
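A minimal sketch of the commands referred to above, assuming a conventional Unix cc/ar toolchain and the file names used in the example:

cc -c class1.c class2.c class3.c                     # compile sources into class1.o, class2.o, class3.o
ar rcs libclass.a class1.o class2.o class3.o         # create the archive libclass.a with a symbol index
cc program.c libclass.a -o program                   # link a dependent program against the archive directly
cc program.c -L/usr/local/lib -lclass -o program     # or let the linker search a library path for libclass.a
cc program.c class1.o class2.o class3.o -o program   # the equivalent without the library: list every object file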
https://en.wikipedia.org/wiki/Static_library
Asupply chain attackis acyber-attackthat seeks to damage an organization by targeting less secure elements in thesupply chain.[1]A supply chain attack can occur in any industry, from the financial sector, oil industry, to a government sector.[2]A supply chain attack can happen in software or hardware.[3]Cybercriminals typically tamper with the manufacturing or distribution of a product by installingmalwareor hardware-based spying components.[4]Symantec's 2019 Internet Security Threat Report states that supply chain attacks increased by 78 percent in 2018.[5] A supply chain is a system of activities involved in handling, distributing, manufacturing, and processing goods in order to move resources from a vendor into the hands of the final consumer. A supply chain is a complex network of interconnected players governed bysupply and demand.[6] Although supply chain attack is a broad term without a universally agreed upon definition,[7][8]in reference to cyber-security, a supply chain attack can involve physically tampering with electronics (computers, ATMs, power systems, factory data networks) in order to install undetectable malware for the purpose of bringing harm to a player further down the supply chain network.[2][4][9]Alternatively, the term can be used to describe attacks exploiting thesoftware supply chain, in which an apparently low-level or unimportant software component used by other software can be used to inject malicious code into the larger software that depends on the component.[10] In a more general sense, a supply chain attack may not necessarily involve electronics. In 2010 when burglars gained access to the pharmaceutical giantEli Lilly'ssupply warehouse, by drilling a hole in the roof and loading $80 million worth of prescription drugs into a truck, they could also have been said to carry out a supply chain attack.[11][12]However, this article will discuss cyber attacks on physical supply networks that rely on technology; hence, a supply chain attack is a method used bycyber-criminals.[13] Generally, supply chain attacks on information systems begin with anadvanced persistent threat(APT)[14]that determines a member of the supply network with the weakest cyber security in order to affect the target organization.[13]Hackers don't usually directly target a larger entity, such as the United States Government, but instead target the entity's software. 
The third-party software is often less protected, leading to an easier target.[15]According to an investigation produced by Verizon Enterprise, 92% of the cyber security incidents analyzed in their survey occurred among small firms.[16]Supply chain networks are considered to be particularly vulnerable due to their multiple interconnected components.[15] APT's can often gain access to sensitive information by physically tampering with the production of the product.[17]In October 2008, European law-enforcement officials "uncovered a highly sophisticated credit-card fraud ring" that stole customer's account details by using untraceable devices inserted into credit-card readers made in China to gain access to account information and make repeated bank withdrawals and Internet purchases, amounting to an estimated $100 million in losses.[18] The threat of a supply chain attack poses a significant risk to modern day organizations and attacks are not solely limited to the information technology sector; supply chain attacks affect the oil industry, large retailers, the pharmaceutical sector and virtually any industry with a complex supply network.[2][9] The Information Security Forum explains that the risk derived from supply chain attacks is due to information sharing with suppliers, it states that "sharing information with suppliers is essential for the supply chain to function, yet it also creates risk... information compromised in the supply chain can be just as damaging as that compromised from within the organization".[19] While Muhammad Ali Nasir of theNational University of Computer and Emerging Sciences, associates the above-mentioned risk with the wider trend of globalization stating "…due to globalization, decentralization, and outsourcing of supply chains, numbers of exposure points have also increased because of the greater number of entities involved and that too are scattered all around the globe… [a] cyber-attack on [a] supply chain is the most destructive way to damage many linked entities at once due to its ripple effect."[20] Poorly managedsupply chain managementsystems can become significant hazards for cyber attacks, which can lead to a loss of sensitive customer information, disruption of the manufacturing process, and could damage a company's reputation.[21] Wiredreported a connecting thread in recent software supply chain attacks, as of 3 May 2019.[22]These have been surmised to have spread from infected, pirated, popular compilers posted on pirate websites. That is, corrupted versions of Apple's XCode and Microsoft Visual Studio.[23](In theory, alternating compilers[24]might detect compiler attacks, when the compiler is the trusted root.) At the end of 2013,Target, a US retailer, was hit by one of the largest data breaches in the history of the retail industry.[25] Between 27 November and 15 December 2013, Target's American brick-and-mortar stores experienced a data hack. Around 40 million customers' credit and debit cards became susceptible to fraud after malware was introduced into thePOSsystem in over 1,800 stores.[25]The data breach of Target's customer information saw a direct impact on the company's profit, which fell 46 percent in the fourth quarter of 2013.[26] Six months prior the company began installing a $1.6 million cyber security system. Target had a team of security specialists to monitor its computers constantly. 
Nonetheless, the supply chain attack circumvented these security measures.[27] It is believed that cyber criminals infiltrated a third party supplier to gain access to Target's main data network.[28]Although not officially confirmed,[29]investigation officials suspect that the hackers first broke into Target's network on 15 November 2013 using passcode credentials stolen from Fazio Mechanical Services, a Pennsylvania-based provider ofHVACsystems.[30] Ninety lawsuits have been filed against Target by customers for carelessness and compensatory damages. Target spent around $61 million responding to the breach, according to its fourth-quarter report to investors.[31] Stuxnet is acomputer wormthat is widely believed to be a joint U.S.-Israeli cyber operation, though neither government has officially confirmed involvement. The worm specifically targetsindustrial control systems, particularly those that automate electromechanical processes, such as factory machinery andnuclear enrichmentequipment. Stuxnet was designed to manipulateprogrammable logic controllers(PLCs), disrupting industrial equipment by issuing unauthorized commands while simultaneously feeding falsified operations data to monitoring systems to conceal its activity.[32][33] Stuxnet is widely believed to have been developed to disrupt Iran'suranium enrichmentprograms. Kevin Hogan, Senior Director of Security Response atSymantec, stated that most infections occurred in Iran.[34]Analysts suggest that its primary target was theNatanzuranium enrichment facility.[32] Stuxnet was initially introduced into Iran'sNatanzfacility via infectedUSB flash drives, requiring physical access to the target network. According to reports, engineers or maintenance workers, either knowingly or unknowingly, facilitated its entry into the plant. Once inside, the worm spread autonomously, exploiting multiplezero-dayvulnerabilities in Windows systems to propagate across networked machines running Siemens industrial control software.[32][35][36] In recent years malware known as Suceful, Plotus, Tyupkin and GreenDispenser have affectedautomated teller machinesglobally, especially inRussiaandUkraine.[37]GreenDispenser specifically gives attackers the ability to walk up to an infected ATM system and remove its cash vault. When installed, GreenDispenser may display an 'out of service' message on the ATM, but attackers with the right access credentials can drain the ATM's cash vault and remove the malware from the system using an untraceable delete process.[38] The other types of malware usually behave in a similar fashion, capturing magnetic stripe data from the machine's memory storage and instructing the machines to withdraw cash. The attacks require a person with insider access, such as an ATM technician or anyone else with a key to the machine, to place the malware on the ATM.[39] The Tyupkin malware active in March 2014 on more than 50 ATMs at banking institutions in Eastern Europe, is believed to have also spread at the time to the U.S., India, and China. The malware affects ATMs from major manufacturers running Microsoft Windows 32-bit operating systems. 
The malware displays information on how much money is available in every machine and allows an attacker to withdraw 40 notes from the selected cassette of each ATM.[40] In June 2017, the financial software M.E.Doc, widely used in Ukraine, was identified by security researchers as a likely initial vector for the spread of theNotPetyamalware.Security researchers, including those fromMicrosoft, indicated that NotPetya infections may have originated from a compromised update issued through M.E.Doc. Some analysts described this as a supply chain attack, though the exact method of compromise was not definitively identified. The software's developers denied the claim but later deleted their statement and stated that they were cooperating with investigators.[41][42][43] NotPetya was initially identified asransomwarebecause it encrypted hard drives and displayed a ransom demand inbitcoin. However, the email account used to provide decryption keys was shut down, leaving victims without a way to recover their files. UnlikeWannaCry, NotPetya had no built-inkill switch, making it harder to stop. The attack affected multiple industries in Ukraine, including banks, an airport, the Kyiv metro, pharmaceutical companies, andChernobyl's radiation detection systems. It also spread globally, impacting organizations in Russia, the United Kingdom, India, and the United States.[44] NotPetya spread usingEternalBlue, a vulnerability originally developed by theU.S. National Security Agency (NSA)and later leaked. EternalBlue had previously been used in theWannaCrycyberattack in May 2017. This exploit enabled NotPetya to spread through the WindowsServer Message Block(SMB) protocol. The malware also used PsExec and theWindows Management Instrumentation(WMI) to spread within networks. Due to these exploits, once a device on a network was infected, the malware could rapidly spread to other connected systems.[44] Ukrainian police stated that M.E.Doc employees could face criminal liability for negligence, citing repeated warnings from antivirus firms about security vulnerabilities in the company'scybersecurityinfrastructure. The head of Ukraine's CyberPolice, Colonel Serhiy Demydiuk, alleged that M.E.Doc had been repeatedly warned by security firms about weaknesses in its systems but failed to act, stating, "They knew about it." Authorities later reported that M.E.Doc cooperated with investigators.[43] From August 21st until September 5th in 2018British Airways was under attack. TheBritish Airwayswebsite payment section contained a code that harvested customer payment data. The injected code was written specifically to route credit card information to a domain baways.com, which could erroneously be thought to belong to British Airways.[45] Magecart is the entity believed to be behind the attack. Magecart is a name attributed to multiple hacker groups that useskimmingpractices in order to steal customer information through online payment processes.[46]Approximately 380,000 customers had their personal and financial data compromised as a result of the attack. British Airways later reported in October, 2018 that an additional 185,000 customers may have had their personal information stolen as well.[47] The 2020 SolarWinds cyberattack was linked to a supply chain compromise targeting theITinfrastructure companySolarWinds, which provided software used by multiple U.S. 
federal institutions,[48][49] including networks within the National Nuclear Security Administration (NNSA).[50][51] Russian hackers compromised Orion, a widely used network management software platform developed by SolarWinds, by injecting malicious code into software updates. This allowed them to gain unauthorized access to numerous organizations, including multiple U.S. government agencies that relied on Orion for IT monitoring and management.[52]

On December 13, 2020, the U.S. Department of Homeland Security issued Emergency Directive 21-01, "Mitigate SolarWinds Orion Code Compromise", requiring affected federal agencies to disconnect compromised Windows host OS instances from their enterprise domain and rebuild those hosts using trusted sources. These compromised systems had been running SolarWinds Orion.[53]

In December 2020, FireEye identified a cyber breach involving the SolarWinds Orion software, which had been compromised prior to its discovery. Microsoft was among the organizations affected, detecting and removing malicious files linked to the breach.[54][55] Microsoft has since collaborated with FireEye as part of an ongoing investigation into the incident. The cyberattack targeted supply chain software used across various industries, including the government, consulting, technology, telecommunications, and extractive sectors in North America, Europe, Asia, and the Middle East.[55]

On January 5, 2021, a joint statement from the Federal Bureau of Investigation (FBI), the Cybersecurity and Infrastructure Security Agency (CISA), the Office of the Director of National Intelligence (ODNI), and the National Security Agency (NSA) indicated that, while approximately 18,000 public and private sector entities were affected by the SolarWinds breach, fewer than ten U.S. government agencies were confirmed to have been compromised.[56]

In February 2021, Microsoft determined that the attackers had downloaded a few files ("subsets of service, security, identity" components) apiece from a small number of its source code repositories.[57] None of the Microsoft repositories contained production credentials.[57] The repositories were secured in December, and those attacks ceased in January.[57] However, in March 2021 more than 20,000 US organizations were compromised through a back door that was installed via flaws in Exchange Server.[58] The affected organizations, such as credit unions, town governments, and small businesses, use self-hosted e-mail (on-site rather than cloud-based).
The flaws were patched on 2 March 2021, but by 5 March 2021 only 10% of the compromised organizations had implemented the patch; the back door remains open.[59]The US officials are attempting to notify the affected organizations which are smaller than the organizations that were affected in December 2020.[60] Microsoft has updated its Indicators of Compromise tool and has released emergency mitigation measures for its Exchange Server flaws.[61]The attacks on SolarWinds and Microsoft software are currently thought to be independent, as of March 2021.[61]The Indicators of Compromise tool allows customers to scan their Exchange Server log files for compromise.[61][62][63]At least 10 attacking groups are using the Exchange Server flaws.[64][65][1]Web shellscan remain on a patched server; this still allows cyberattacks based on the affected servers.[66]As of 12 March 2021 exploit attempts are doubling every few hours, according to Check Point Research,[67]some in the name of security researchers themselves.[68] By 14 April 2021 theFBIhad completed a covert cyber operation to remove the web shells from afflicted servers and was informing the servers' owners of what had been done.[69] In May 2021 Microsoft identified 3000 malicious emails to 150 organizations in 24 countries, that were launched by a group that Microsoft has denoted 'Nobelium'. Many of those emails were blocked before delivery. 'Nobelium' gained access to a Constant Contact "email marketing account used by the US Agency for International Development (USAID)".[70]Security researchers assert that 'Nobelium' crafts spear-phishing email messages which get clicked on by unsuspecting users; the links then direct installation of malicious 'Nobelium' code to infect the users' systems, making them subject to ransom, espionage, disinformation, etc.[71]The US government has identified 'Nobelium' as stemming from Russia's Federal Security Service.[72]By July 2021 the US government is expected to name the initiator of the Exchange Server attacks:[73]"China's Ministry of State Security has been using criminal contract hackers".[74][75] In September 2021 the Securities and Exchange Commission (SEC) enforcement staff have requested that any companies which have downloaded any compromised SolarWinds updates, voluntarily turn over data to the SEC if they have installed the compromised updates on their servers.[76] In July 2022 SessionManager, a malicious module hosted by IIS (installed by default on Exchange Servers), was discovered to have infected Exchange Servers since March 2021; SessionManager searches memory for passwords, and downloads new modules, to hijack the server.[77] Mandiant, a security firm, has shown that nation-state-sponsored groups, once they have gained access to corporate clouds, can now exploit Security assertion markup language (SAML), to gain federated authentication to Active Directory and similar services, at will.[a]Once the attackers gain access, they are able to infiltrate any information or assets belonging to the organization. This is because this technique allows attackers to pose as any member of the targeted organization.[79]These attacks are progressively becoming more desirable to malicious actors as companies and agencies continue to move assets to cloud services.[80] In 2020,SolarWindswas subject to what is described as the first documented Golden SAML attack, often referred to as "Solorigate". 
A malicious actor infected the source code of a software update with a backdoor code made to look legitimate.[81]Customers began installing the faulty update to their systems, ultimately affecting over 18,000 individuals globally.[79]The attack affected a number of United States government agencies and private sector agencies as well.[80] In May 2021, a ransomware attack onColonial Pipelineforced a temporary shutdown of a major fuel distribution network, disrupting the supply of gasoline, diesel, and jet fuel to the U.S. East Coast. TheBiden administrationinvoked emergency powers to prevent shortages, while experts described the incident as the worst-ever cyberattack on U.S. infrastructure. The attack, attributed to the Russian-linked cybercriminal groupDarkSide, raised concerns about vulnerabilities in critical energy systems, as fuel traders sought alternative supply routes and fears of price spikes emerged.[82] On June 16, 2021, President Biden stated to President Putin that cyberattacks on 16 critical infrastructure sectors were off-limits and said that the U.S. would respond to future cyber threats.[83]The 16 critical infrastructure sectors, as designated by the U.S.Cybersecurity and Infrastructure Security Agency(CISA), include energy, food and agriculture, emergency services, healthcare, and other essential industries such as financial services, communications, and transportation systems.[84] In March, 2023, the voice and video chat app3CX Phone Systemwas thought to have been subject to a supply chain attack due to detection of malicious activity on the software. The app is used in a wide variety of industries from food to automotive and an attack has the potential to impact hundreds of thousands of users worldwide.[85]The malware infects the host device through the installation process, acting as aTrojan horse virusspread through bothMac OSandMicrosoftinstallers. They employed aninfostealerthrough a maliciouspayloadthat connected to aC2 servercontrolled by the threat actor.[86] The attack utilized the Gopuram backdoor, originally discovered by the Russian cybersecurity companyKasperskyin 2020. The use of this backdoor suggested that the attack was executed by the North Korean cybercrime group known asLazarusdue to their use of this same backdoor in a 2020 attack against a South Asian cryptocurrency company.[86]The Gopuram backdoor has been utilized in other past attacks against cryptocurrency agencies, which Lazarus has been known to target.[85] In July 2023, Chinese state-sponsored hackers targeted theUnited States Department of State, hacking several government employees' Microsoft email accounts, which gave them access to classified information. They stole information from about 60,000 emails from several Department of State employees.[87]Department of State officials have stated that the information stolen includes "victims' travel itineraries and diplomatic deliberations".[88]If used in a malicious manner, this information could be used to monitor important government officials and track United States communications that are meant to be confidential. The Department of State hack occurred due to vulnerabilities inMicrosoft Exchange Server, classifying it as a supply-chain attack.[87] In March 2024, a backdoor in xz/liblzma inXZ Utilswas suspected,[89]with malicious code known to be in version 5.6.0 and 5.6.1. 
While theexploitremained dormant unless a specific third-party patch of the SSH server is used, under the right circumstances this interference could potentially enable a malicious actor to break sshdauthenticationand gain unauthorized access to the entire systemremotely.[90] The list of affected Linux distributions includesDebian unstable,[91]Fedora Rawhide,[92]Kali Linux,[93]andOpenSUSE Tumbleweed.[94]Most Linux distributions that followed a stable release update model were not affected, since they were carrying older versions of xz.[95]Arch Linuxissued an advisory for users to update immediately, although it also noted that Arch's OpenSSH package does not include the common third-party patch necessary for the backdoor.[96]FreeBSDis not affected by this attack, as all supported FreeBSD releases include versions of xz that predate the affected releases and the attack targets Linux's glibc.[97] On October 31st, 2024, cybersecurity researchers from several security firms such as Phylum, Socket, and Checkmarx detected an attack on users of the open-source Node Package Manager (NPM) library. Unidentified attackers published more than 287 packages in an attempt to trick users of the platform into downloading malicious code.[98]The attack used a technique called typosquatting, which copies the names of legitimate packages closely, tricking unsuspecting developers into accidentally downloading the wrong one. For the package Fetch-mock-jest, the attacker rearranged the order of the words and misspelled the word fetch creating the name "jest-fet-mock". Based on the kind of packages mimicked, researchers believe this attack widely targeted software developers using NPM. Packages targeted are mostly mock HTTP requests and cryptocurrency-related, including Puppeteer, Bignum.js, and Fetch-mock-jest, which are mainly used in development environments.[99][unreliable source?] Phylum researchers noted that these typosquatted packages seemed normal at first glance, but upon closer inspection, they contained obfuscated code that could not be understood. After de-obfuscating the code, researchers found that after the malicious package is mistakenly downloaded it automatically runs a script that interacts with an Ethereum smart contract to retrieve the IP addresses of the command and control server (C2) used by the attackers. The script then identifies the operating system used by the victim machine and downloads compatible malware from the IP address it received from the contract. This malware maintains persistent communication with the attacker's C2 server, periodically leaking the user's system information such as the operating system version, GPU, CPU, the amount of memory on the machine, and username.[98] Checkmarkx researcher Yahud Gelb explains that if researchers attempt to take down a C2 server at a specific IP address, the attacker can just update the Ethereum contract so that it returns a different address. When describing the mechanism behind the contract he wrote: "Think of a smart contract on the Ethereum blockchain as a public bulletin board – anyone can read what's posted, but only the owner has the ability to update it". This complicates the issue because the malware can always query the smart contract to update the stored address of the C2 server in case the current one has been taken down by authorities. Researchers worried that several companies' software development supply chains can be put at risk when attackers typosquat them. 
They elaborate that the untraceable nature of the attack combined with its precisely engineered methods of persistence only adds to the looming threat. Furthermore, company employees usually have elevated system privileges and access to CI/CD pipelines when using development environments, further endangering the company's and their customer's data. They warned that developers who use npm packages like the ones above at any stage of the software development lifecycle must take caution and implement robust dependency scanning before performing any installations.[100] There is little to no information on the attackers' identity or their motive. However, researchers did find error messages written in Russian within the de-obfuscated code of the malicious packages, but they speculate that this could be a misdirect set up by the real culprits trying to throw off any suspicions.[100]Phylum, Checkmarx, and Socket researchers brought to attention the ever-evolving nature of supply chain attacks, and how threat actors have had to continuously come up with creative ways to subvert detection of the servers under their control, highlighting the importance of double-checking any dependencies downloaded during the development phase of a project. On 12 May 2021, Executive order 14028 (the EO),Improving the nation's cybersecurity, taskedNISTas well as other US government agencies with enhancing the cybersecurity of the United States.[101]On 11 July 2021 (day 60 of the EO timeline) NIST, in consultation with theCybersecurity and Infrastructure Security Agency(CISA) and theOffice of Management and Budget(OMB), delivered '4i': guidance for users of critical software, as well as '4r': for minimum vendor testing of the security and integrity of the software supply chain.[101] TheComprehensive National Cybersecurity Initiativeand the Cyberspace Policy Review passed by the Bush and Obama administrations respectively, direct U.S. federal funding for development of multi-pronged approaches for global supply chain risk management.[104][105]According to Adrian Davis of the Technology Innovation Management Review, securing organizations from supply chain attacks begins with building cyber-resilient systems.[106]Supply chain resilienceis, according to supply chain risk management expert Donal Walters, "the ability of the supply chain to cope with unexpected disturbances" and one of its characteristics is a company-wide recognition of where the supply chain is most susceptible to infiltration. 
Supply chain management plays a crucial role in creating effective supply chain resilience.[107]

In March 2015, under the Conservative and Liberal Democrat coalition government, the UK Department for Business outlined new efforts to protect SMEs from cyber attacks, which included measures to improve supply chain resilience.[108] The UK government has also produced the Cyber Essentials Scheme, which trains firms in good practices for protecting their supply chain and their overall cyber security.[109][110]

The Depository Trust and Clearing Group, an American post-trade company, has implemented governance for vulnerability management throughout its supply chain and looks at IT security along the entire development lifecycle, including where software was coded and where hardware was manufactured.[111]

In a 2014 report titled "Threat Smart: Building a Cyber Resilient Financial Institution", the financial services firm PwC recommends the following approach to mitigating a cyber attack: "To avoid potential damage to a financial institution's bottom line, reputation, brand, and intellectual property, the executive team needs to take ownership of cyber risk. Specifically, they should collaborate up front to understand how the institution will defend against and respond to cyber risks, and what it will take to make their organization cyber resilient."[112]

FireEye, a US network security company that provides automated threat forensics and dynamic malware protection against advanced cyber threats such as advanced persistent threats and spear phishing,[113] recommends that firms put certain principles in place to create resilience in their supply chain.[114]

On 27 April 2015, Sergey Lozhkin, a Senior Security Researcher with GReAT at Kaspersky Lab, spoke about the importance of managing risk from targeted attacks and cyber-espionage campaigns during a conference on cyber security. He stated: "Mitigation strategies for advanced threats should include security policies and education, network security, comprehensive system administration and specialized security solutions, like... software patching features, application control, whitelisting and a default deny mode."[116]
https://en.wikipedia.org/wiki/Supply_chain_attack