Flat memory model or linear memory model refers to a memory addressing paradigm in which "memory appears to the program as a single contiguous address space."[1] The CPU can directly (and linearly) address all of the available memory locations without having to resort to any sort of bank switching, memory segmentation or paging schemes.

Memory management and address translation can still be implemented on top of a flat memory model in order to facilitate the operating system's functionality, resource protection, multitasking, or to increase the memory capacity beyond the limits imposed by the processor's physical address space, but the key feature of a flat memory model is that the entire memory space is linear, sequential and contiguous.

In a simple controller, or in a single-tasking embedded application, where memory management is neither needed nor desirable, the flat memory model is the most appropriate, because it provides the simplest interface from the programmer's point of view, with direct access to all memory locations and minimum design complexity. In a general-purpose computer system, which requires multitasking, resource allocation, and protection, the flat memory system must be augmented by some memory management scheme, which is typically implemented through a combination of dedicated hardware (inside or outside the CPU) and software built into the operating system. The flat memory model (at the physical addressing level) still provides the greatest flexibility for implementing this type of memory management.

Most modern memory models fall into one of three categories:

Within the x86 architectures, when operating in the real mode (or emulation), the physical address is computed as:[2]

PhysicalAddress = Segment × 16 + Offset

(I.e., the 16-bit segment register is shifted left by 4 bits and added to a 16-bit offset, resulting in a 20-bit address.)
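To illustrate the shift-and-add rule above, here is a minimal Python sketch; the function name real_mode_physical_address and the sample segment:offset values are illustrative assumptions, not taken from the article.

```python
def real_mode_physical_address(segment: int, offset: int) -> int:
    """Apply the x86 real-mode rule: shift the 16-bit segment left by 4 bits
    (i.e. multiply by 16) and add the 16-bit offset, giving a 20-bit address."""
    return ((segment & 0xFFFF) << 4) + (offset & 0xFFFF)

# Hypothetical example: segment 0x1234 with offset 0x0005 maps to 0x12345.
print(hex(real_mode_physical_address(0x1234, 0x0005)))  # -> 0x12345
```

Note that different segment:offset pairs can map to the same physical address (for example, 0x1000:0x2345 also yields 0x12345), which is one consequence of this addressing scheme.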
https://en.wikipedia.org/wiki/Linear_address_space
A fitness function is a particular type of objective or cost function that is used to summarize, as a single figure of merit, how close a given candidate solution is to achieving the set aims. It is an important component of evolutionary algorithms (EA), such as genetic programming, evolution strategies or genetic algorithms. An EA is a metaheuristic that reproduces the basic principles of biological evolution as a computer algorithm in order to solve challenging optimization or planning tasks, at least approximately. For this purpose, many candidate solutions are generated, which are evaluated using a fitness function in order to guide the evolutionary development towards the desired goal.[1] Similar quality functions are also used in other metaheuristics, such as ant colony optimization or particle swarm optimization.

In the field of EAs, each candidate solution, also called an individual, is commonly represented as a string of numbers (referred to as a chromosome). After each round of testing or simulation, the idea is to delete the n worst individuals and to breed n new ones from the best solutions. Each individual must therefore be assigned a quality number indicating how close it has come to the overall specification, and this is generated by applying the fitness function to the test or simulation results obtained from that candidate solution.[2]

Two main classes of fitness functions exist: one where the fitness function does not change, as in optimizing a fixed function or testing with a fixed set of test cases; and one where the fitness function is mutable, as in niche differentiation or co-evolving the set of test cases.[3][4] Another way of looking at fitness functions is in terms of a fitness landscape, which shows the fitness for each possible chromosome. In the following, it is assumed that the fitness is determined based on an evaluation that remains unchanged during an optimization run.

A fitness function does not necessarily have to be able to calculate an absolute value, as it is sometimes sufficient to compare candidates in order to select the better one. A relative indication of fitness (candidate a is better than b) is sufficient in some cases,[5] such as tournament selection or Pareto optimization.

The quality of the evaluation and calculation of a fitness function is fundamental to the success of an EA optimisation. It implements Darwin's principle of "survival of the fittest". Without fitness-based selection mechanisms for mate selection and offspring acceptance, EA search would be blind and hardly distinguishable from the Monte Carlo method. When setting up a fitness function, one must always be aware that it is about more than just describing the desired target state. Rather, the evolutionary search on the way to the optimum should also be supported as much as possible (see also the section on auxiliary objectives), if and insofar as this is not already done by the fitness function alone. If the fitness function is designed badly, the algorithm will either converge on an inappropriate solution or will have difficulty converging at all.

Defining the fitness function is not straightforward in many cases and is often performed iteratively if the fittest solutions produced by an EA are not what is desired. Interactive genetic algorithms address this difficulty by outsourcing evaluation to external agents, which are normally humans. The fitness function should not only closely align with the designer's goal, but also be computationally efficient.
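To make the role of the fitness function concrete, the following is a minimal, hypothetical Python sketch of the "delete the n worst, breed from the best" scheme described above; the toy chromosome, target vector and operators are illustrative assumptions, not taken from the cited sources. Note how often the fitness function is evaluated inside the loop, which is one reason its execution speed matters.

```python
import random

TARGET = [1.0, 2.0, 3.0]  # assumed toy optimum, for illustration only

def fitness(chromosome):
    # Single figure of merit: negative squared distance to the target (higher is better).
    return -sum((g - t) ** 2 for g, t in zip(chromosome, TARGET))

def breed(a, b, mutation_rate=0.2):
    # Uniform crossover followed by occasional Gaussian mutation.
    child = [random.choice(pair) for pair in zip(a, b)]
    return [g + random.gauss(0, 0.5) if random.random() < mutation_rate else g for g in child]

def evolve(pop_size=20, generations=200, n_replace=5):
    population = [[random.uniform(-5, 5) for _ in range(len(TARGET))] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)     # rank the individuals by fitness
        survivors = population[:pop_size - n_replace]  # delete the n worst individuals
        parents = survivors[:5]                        # breed replacements from the best
        offspring = [breed(random.choice(parents), random.choice(parents))
                     for _ in range(n_replace)]
        population = survivors + offspring
    return max(population, key=fitness)

print(evolve())  # should approach [1.0, 2.0, 3.0]
```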
Execution speed is crucial, as a typical evolutionary algorithm must be iterated many times in order to produce a usable result for a non-trivial problem. Fitness approximation[6][7] may be appropriate, especially in the following cases: Alternatively, or in addition to the fitness approximation, the fitness calculations can also be distributed to a parallel computer in order to reduce the execution times. Depending on the population model of the EA used, both the EA itself and the fitness calculations of all offspring of one generation can be executed in parallel.[9][10][11]

Practical applications usually aim at optimizing multiple and at least partially conflicting objectives. Two fundamentally different approaches are often used for this purpose: Pareto optimization and optimization based on fitness calculated using the weighted sum.[12]

When optimizing with the weighted sum, the single values of the $O$ objectives are first normalized so that they can be compared. This can be done with the help of costs or by specifying target values and determining the current value as the degree of fulfillment. Costs or degrees of fulfillment can then be compared with each other and, if required, can also be mapped to a uniform fitness scale. Without loss of generality, fitness is assumed to represent a value to be maximized. Each objective $o_i$ is assigned a weight $w_i$ in the form of a percentage value so that the overall raw fitness $f_{raw}$ can be calculated as a weighted sum:

$$f_{raw}=\sum _{i=1}^{O}{o_{i}\cdot w_{i}}\quad {\text{with}}\quad \sum _{i=1}^{O}{w_{i}}=1$$

A violation of $R$ restrictions $r_j$ can be included in the fitness determined in this way in the form of penalty functions. For this purpose, a function $pf_j(r_j)$ can be defined for each restriction which returns a value between 0 and 1 depending on the degree of violation, with the result being 1 if there is no violation. The previously determined raw fitness is multiplied by the penalty function(s) and the result is then the final fitness $f_{final}$:[13]

$$f_{final}=f_{raw}\cdot \prod _{j=1}^{R}{pf_{j}(r_{j})}=\sum _{i=1}^{O}{(o_{i}\cdot w_{i})}\cdot \prod _{j=1}^{R}{pf_{j}(r_{j})}$$

This approach is simple and has the advantage of being able to combine any number of objectives and restrictions. The disadvantage is that different objectives can compensate for each other and that the weights have to be defined before the optimization. This means that the compromise lines must be defined before optimization, which is why optimization with the weighted sum is also referred to as the a priori method.[12] In addition, certain solutions may not be obtained; see the section on the comparison of both types of optimization.
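The following is a small Python sketch of the weighted-sum fitness with multiplicative penalty functions as defined by the formulas above; the concrete objective values, weights and penalty factor are made-up illustrative numbers.

```python
def weighted_sum_fitness(objectives, weights, penalty_factors=()):
    """f_final = (sum_i o_i * w_i) * (prod_j pf_j), with weights summing to 1
    and each penalty factor in [0, 1] (1 means no constraint violation)."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    f_raw = sum(o * w for o, w in zip(objectives, weights))
    f_final = f_raw
    for pf in penalty_factors:
        f_final *= pf
    return f_final

# Two normalized objectives (degrees of fulfillment) weighted 70/30,
# with one restriction that is mildly violated (penalty factor 0.8).
print(weighted_sum_fitness([0.9, 0.5], [0.7, 0.3], [0.8]))  # 0.624
```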
A solution is called Pareto-optimal if the improvement of one objective is only possible with a deterioration of at least one other objective. The set of all Pareto-optimal solutions, also called the Pareto set, represents the set of all optimal compromises between the objectives. The figure below on the right shows an example of the Pareto set of two objectives $f_1$ and $f_2$ to be maximized. The elements of the set form the Pareto front (green line). From this set, a human decision maker must subsequently select the desired compromise solution.[12] Constraints are included in Pareto optimization in that solutions without constraint violations are per se better than those with violations. If two solutions to be compared each have constraint violations, the respective extent of the violations decides.[14]

It was recognized early on that EAs, with their simultaneously considered solution set, are well suited to finding solutions in one run that cover the Pareto front sufficiently well.[14][15] They are therefore well suited as a posteriori methods for multi-objective optimization, in which the final decision is made by a human decision maker after optimization and determination of the Pareto front.[12] Besides the SPEA2,[16] the NSGA-II[17] and NSGA-III[18][19] have established themselves as standard methods.

The advantage of Pareto optimization is that, in contrast to the weighted sum, it provides all alternatives that are equivalent in terms of the objectives as an overall solution. The disadvantage is that a visualization of the alternatives becomes problematic or even impossible from four objectives on. Furthermore, the effort increases exponentially with the number of objectives.[13] If there are more than three or four objectives, some have to be combined using the weighted sum or other aggregation methods.[12]

With the help of the weighted sum, the total Pareto front can be obtained by a suitable choice of weights, provided that it is convex.[20] This is illustrated by the adjacent picture on the left. The point $P$ on the green Pareto front is reached by the weights $w_1$ and $w_2$, provided that the EA converges to the optimum. The direction with the largest fitness gain in the solution set $Z$ is shown by the drawn arrows. In the case of a non-convex front, however, non-convex front sections are not reachable by the weighted sum. In the adjacent image on the right, this is the section between points $A$ and $B$. This can be remedied to a limited extent by using an extension of the weighted sum, the cascaded weighted sum.[13]

Comparing both assessment approaches, the use of Pareto optimization is certainly advantageous when little is known about the possible solutions of a task and when the number of optimization objectives can be narrowed down to three, at most four. However, in the case of repeated optimization of variations of one and the same task, the desired lines of compromise are usually known and the effort to determine the entire Pareto front is no longer justified. This is also true when no human decision is desired or possible after optimization, such as in automated decision processes.[13]
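As an illustration of the concepts above, here is a minimal Python sketch of the Pareto dominance test and the extraction of the non-dominated set (an approximation of the Pareto front) for objectives to be maximized; the example points are invented for illustration.

```python
def dominates(a, b):
    """a Pareto-dominates b if it is at least as good in every objective
    and strictly better in at least one (all objectives maximized)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(solutions):
    """Return the non-dominated subset of a list of objective vectors."""
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions if other is not s)]

points = [(1.0, 5.0), (2.0, 4.0), (3.0, 1.0), (2.5, 3.9), (0.5, 0.5)]
print(pareto_front(points))  # (0.5, 0.5) is dominated and is dropped
```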
In addition to the primary objectives resulting from the task itself, it may be necessary to include auxiliary objectives in the assessment to support the achievement of one or more primary objectives. An example of a scheduling task is used for illustration purposes. The optimization goals include not only a generally fast processing of all orders but also compliance with a latest completion time. The latter is especially necessary for the scheduling of rush orders. The second goal is not achieved by the exemplary initial schedule, as shown in the adjacent figure. A subsequent mutation does not change this, but it schedules the work step d earlier, which is a necessary intermediate step for an earlier start of the last work step e of the order. As long as only the latest completion time is evaluated, however, the fitness of the mutated schedule remains unchanged, even though it represents a relevant step towards the objective of a timely completion of the order.

This can be remedied, for example, by an additional evaluation of the delay of work steps. The new objective is an auxiliary one, since it was introduced in addition to the actual optimization objectives to support their achievement. A more detailed description of this approach and another example can be found in [21].
https://en.wikipedia.org/wiki/Fitness_function
A standards organization, standards body, standards developing organization (SDO), or standards setting organization (SSO) is an organization whose primary function is developing, coordinating, promulgating, revising, amending, reissuing, interpreting, or otherwise contributing to the usefulness of technical standards[1] to those who employ them. Such an organization works to create uniformity across producers, consumers, government agencies, and other relevant parties regarding terminology, product specifications (e.g. size, including units of measure), protocols, and more. Its goals could include ensuring that Company A's external hard drive works on Company B's computer, an individual's blood pressure measures the same with Company C's sphygmomanometer as it does with Company D's, or that all shirts that should not be ironed have the same icon (a clothes iron crossed out with an X) on the label.[2]

Most standards are voluntary in the sense that they are offered for adoption by people or industry without being mandated in law. Some standards become mandatory when they are adopted by regulators as legal requirements in particular domains, often for the purpose of safety or for consumer protection from deceitful practices. The term formal standard refers specifically to a specification that has been approved by a standards setting organization. The term de jure standard refers to a standard mandated by legal requirements, or refers generally to any formal standard. In contrast, the term de facto standard refers to a specification (or protocol or technology) that has achieved widespread use and acceptance – often without being approved by any standards organization (or receiving such approval only after it already has achieved widespread use). Examples of de facto standards that were not approved by any standards organizations (or at least not approved until after they were in widespread de facto use) include the Hayes command set developed by Hayes, Apple's TrueType font design and the PCL protocol used by Hewlett-Packard in the computer printers they produced.

Normally, the term standards organization is not used to refer to the individual parties participating within the standards developing organization in the capacity of founders, benefactors, stakeholders, members or contributors, who themselves may function as or lead the standards organizations.

The implementation of standards in industry and commerce became highly important with the onset of the Industrial Revolution and the need for high-precision machine tools and interchangeable parts. Henry Maudslay developed the first industrially practical screw-cutting lathe in 1800, which allowed for the standardization of screw thread sizes for the first time.[1] Maudslay's work, as well as the contributions of other engineers, accomplished a modest amount of industry standardization; some companies' in-house standards also began to spread more widely within their industries. Joseph Whitworth's screw thread measurements were adopted as the first (unofficial) national standard by companies around Britain in 1841. It came to be known as the British Standard Whitworth, and was widely adopted in other countries.[3][4]

By the end of the 19th century, differences in standards between companies were making trade increasingly difficult and strained.
For instance, in 1895 an iron and steel dealer recorded his displeasure in The Times: "Architects and engineers generally specify such unnecessarily diverse types of sectional material or given work that anything like economical and continuous manufacture becomes impossible. In this country no two professional men are agreed upon the size and weight of a girder to employ for given work".[5]

The Engineering Standards Committee was established in London in 1901 as the world's first national standards body.[6][7] It subsequently extended its standardization work and became the British Engineering Standards Association in 1918, adopting the name British Standards Institution in 1931 after receiving its Royal Charter in 1929. The national standards were adopted universally throughout the country, and enabled the markets to act more rationally and efficiently, with an increased level of cooperation.

After the First World War, similar national bodies were established in other countries. The Deutsches Institut für Normung was set up in Germany in 1917, followed by its counterparts, the American National Standards Institute and the French Commission Permanente de Standardisation, both in 1918.[1]

Several international organizations create international standards, such as Codex Alimentarius in food, the World Health Organization Guidelines in health, or ITU Recommendations in ICT,[8] and, being publicly funded, these are freely available for consideration and use worldwide.

In 1904, Crompton represented Britain at the Louisiana Purchase Exposition in St. Louis, Missouri, as part of a delegation by the Institute of Electrical Engineers. He presented a paper on standardization, which was so well received that he was asked to look into the formation of a commission to oversee the process. By 1906, his work was complete and he drew up permanent terms for the International Electrotechnical Commission.[9] The body held its first meeting that year in London, with representatives from 14 countries. In honour of his contribution to electrical standardization, Lord Kelvin was elected as the body's first President.[10]

The International Federation of the National Standardizing Associations (ISA) was founded in 1926 with a broader remit to enhance international cooperation for all technical standards and specifications. The body was suspended in 1942 during World War II. After the war, ISA was approached by the recently formed United Nations Standards Coordinating Committee (UNSCC) with a proposal to form a new global standards body. In October 1946, ISA and UNSCC delegates from 25 countries met in London and agreed to join forces to create the new International Organization for Standardization; the new organization officially began operations in February 1947.[11]

Standards organizations can be classified by their role, position, and the extent of their influence on the local, national, regional, and global standardization arena. By geographic designation, there are international, regional, and national standards bodies (the latter often referred to as NSBs). By technology or industry designation, there are standards developing organizations (SDOs) and also standards setting organizations (SSOs), also known as consortia. Standards organizations may be governmental, quasi-governmental or non-governmental entities. Quasi- and non-governmental standards organizations are often non-profit organizations.
Broadly, an international standards organization develops international standards (this does not necessarily restrict the use of other published standards internationally). There are many international standards organizations. The three largest and most well-established such organizations are the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC), and the International Telecommunication Union (ITU), which have each existed for more than 50 years (founded in 1947, 1906, and 1865, respectively) and are all based in Geneva, Switzerland. They have established tens of thousands of standards covering almost every conceivable topic. Many of these are then adopted worldwide, replacing various incompatible "homegrown" standards. Many of these standards are naturally evolved from those designed in-house within an industry, or by a particular country, while others have been built from scratch by groups of experts who sit on various technical committees (TCs). These three organizations together comprise the World Standards Cooperation (WSC) alliance.

ISO is composed of the national standards bodies (NSBs), one per member economy. The IEC is similarly composed of national committees, one per member economy. In some cases, the national committee to the IEC of an economy may also be the ISO member from that country or economy. ISO and IEC are private international organizations that are not established by any international treaty. Their members may be non-governmental organizations or governmental agencies, as selected by ISO and IEC (which are privately established organizations). The ITU is a treaty-based organization established as a permanent agency of the United Nations, in which governments are the primary members,[citation needed] although other organizations (such as non-governmental organizations and individual companies) can also hold a form of direct membership status in the ITU as well. Another example of a treaty-based international standards organization with government membership is the Codex Alimentarius Commission.

In addition to these, a large variety of independent international standards organizations such as the ASME, ASTM International, the International Commission on Illumination (CIE), the IEEE, the Internet Engineering Task Force (IETF), SAE International, TAPPI, the World Wide Web Consortium (W3C), and the Universal Postal Union (UPU) develop and publish standards for a variety of international uses. In many such cases, these international standards organizations are not based on the principle of one member per country. Rather, membership in such organizations is open to those interested in joining and willing to agree to the organization's by-laws – having either organizational/corporate or individual technical experts as members.

The Airlines Electronic Engineering Committee (AEEC) was formed in 1949 to prepare avionics system engineering standards with other aviation organizations RTCA, EUROCAE, and ICAO. The standards are widely known as the ARINC Standards.
Regional standards bodies also exist, such as the European Committee for Standardization (CEN), the European Committee for Electrotechnical Standardization (CENELEC), the European Telecommunications Standards Institute (ETSI), and the Institute for Reference Materials and Measurements (IRMM) in Europe, the Pacific Area Standards Congress (PASC), the Pan American Standards Commission (COPANT), the African Organisation for Standardisation (ARSO), the Arabic industrial development and mining organization (AIDMO), and others.

In the European Union, only standards created by CEN, CENELEC, and ETSI are recognized as European standards (according to Regulation (EU) No 1025/2012[12]), and member states are required to notify the European Commission and each other about all the draft technical regulations concerning ICT products and services before they are adopted in national law.[13] These rules were laid down in Directive 98/34/EC with the goal of providing transparency and control with regard to technical regulations.[13]

Sub-regional standards organizations also exist, such as the MERCOSUR Standardization Association (AMN), the CARICOM Regional Organisation for Standards and Quality (CROSQ), the ASEAN Consultative Committee for Standards and Quality (ACCSQ), the EAC East Africa Standards Committee (www.eac-quality.net), and the GCC Standardization Organization (GSO) for Arab States of the Persian Gulf.

In general, each country or economy has a single recognized national standards body (NSB). A national standards body is likely the sole member from that economy in ISO; ISO currently has 161 members. National standards bodies usually do not prepare the technical content of standards, which instead is developed by national technical societies. NSBs may be either public or private sector organizations, or combinations of the two. For example, the Standards Council of Canada is a Canadian Crown Corporation, Dirección General de Normas is a governmental agency within the Mexican Ministry of Economy, and ANSI is a 501(c)(3) non-profit U.S. organization with members from both the private and public sectors. The National Institute of Standards and Technology (NIST), the U.S. government's standards agency, cooperates with ANSI under a memorandum of understanding to collaborate on the United States Standards Strategy. The determinants of whether an NSB for a particular economy is a public or private sector body may include the historical and traditional roles that the private sector fills in public affairs in that economy or the development stage of that economy.

A national standards body (NSB) generally refers to the one standardization organization that is that country's member of the ISO. A standards developing organization (SDO) is one of the thousands of industry- or sector-based standards organizations that develop and publish industry-specific standards. Some economies feature only an NSB with no other SDOs. Large economies like the United States and Japan have several hundred SDOs, many of which are coordinated by the central NSBs of each country (ANSI and JISC in this case). In some cases, international industry-based SDOs such as the CIE, the IEEE and the Audio Engineering Society (AES) may have direct liaisons with international standards organizations, having input to international standards without going through a national standards body. SDOs are differentiated from standards setting organizations (SSOs) in that SDOs may be accredited to develop standards using open and transparent processes.
Developers of technical standards are generally concerned with interface standards, which detail how products interconnect with each other, and safety standards, which establish characteristics that ensure a product or process is safe for humans, animals, and the environment. The subject of their work can be narrow or broad. Another area of interest is in defining how the behavior and performance of products is measured and described in data sheets.

Overlapping or competing standards bodies tend to cooperate purposefully, by seeking to define boundaries between the scope of their work, and by operating in a hierarchical fashion in terms of national, regional and international scope; international organizations tend to have national organizations as members, and standards emerging at the national level (such as BS 5750) can be adopted at regional levels (BS 5750 was adopted as EN 29000) and at international levels (BS 5750 was adopted as ISO 9000).

Unless adopted by a government, standards carry no force in law. However, most jurisdictions have truth in advertising laws, and ambiguities can be reduced if a company offers a product that is "compliant" with a standard.

When an organization develops standards that may be used openly, it is common to have formal rules published regarding the process. This may include:

Though it can be a tedious and lengthy process, formal standard setting is essential to developing new technologies. For example, since 1865, the telecommunications industry has depended on the ITU to establish the telecommunications standards that have been adopted worldwide. The ITU has created numerous telecommunications standards including telegraph specifications, allocation of telephone numbers, interference protection, and protocols for a variety of communications technologies. The standards created through standards organizations lead to improved product quality, ensure interoperability of competitors' products, and provide a technological baseline for future research and product development. Formal standard setting through standards organizations has numerous benefits for consumers, including increased innovation, multiple market participants, reduced production costs, and the efficiency effects of product interchangeability.

To support the standard development process, ISO published Good Standardization Practices (GSP)[25] and the WTO Technical Barriers to Trade (TBT) Committee published the "Six Principles" guiding members in the development of international standards.[26]

Some standards, such as the SIF Specification in K12 education, are managed by a non-profit organization composed of public entities and private entities working in cooperation, which then publishes the standards under an open license at no charge and requiring no registration.

A technical library at a university may have copies of technical standards on hand. Major libraries in large cities may also have access to many technical standards.

Some users of standards mistakenly assume that all standards are in the public domain. This assumption is correct only for standards produced by central governments whose publications are not amenable to copyright, or for organizations that issue their standards under an open license. Any standards produced by non-governmental entities remain the intellectual property of their developers (unless specifically designated otherwise) and are protected, just like any other publications, by copyright laws and international treaties.
However, the intellectual property extends only to the standard itself and not to its use. For instance, if a company sells a device that is compliant with a given standard, it is not liable for further payment to the standards organization except in the special case when the organization holds patent rights or some other ownership of the intellectual property described in the standard. It is, however, liable for any patent infringement by its implementation, just as with any other implementation of technology. The standards organizations give no guarantees that patents relevant to a given standard have been identified. ISO standards draw attention to this in the foreword with a statement like the following: "Attention is drawn to the possibility that some of the elements of this document may be the subject of patent rights. ISO and IEC shall not be held responsible for identifying any or all such patent rights".[27] If the standards organization is aware that parts of a given standard fall under patent protection, it will often require the patent holder to agree to reasonable and non-discriminatory licensing before including it in the standard. Such an agreement is regarded as a legally binding contract,[28] as in the 2012 case Microsoft v. Motorola.

The ever-quickening pace of technology evolution is now more than ever affecting the way new standards are proposed, developed and implemented. Since traditional, widely respected standards organizations tend to operate at a slower pace than technology evolves, many standards they develop are becoming less relevant because of the inability of their developers to keep abreast of technological innovation. As a result, a new class of standards setters has appeared on the standardization arena: the industry consortia or standards setting organizations (SSOs), whose output is also referred to as private standards.[29] Despite having limited financial resources, some of them enjoy truly international acceptance. One example is the World Wide Web Consortium (W3C), whose standards for HTML, CSS, and XML are used universally. There are also community-driven associations such as the Internet Engineering Task Force (IETF), a worldwide network of volunteers who collaborate to set standards for internet protocols.

Some industry-driven standards development efforts do not even have a formal organizational structure. They are projects funded by large corporations. Among them are OpenOffice.org, an Apache Software Foundation-sponsored international community of volunteers working on open-standard software that aims to compete with Microsoft Office, and two commercial groups competing fiercely with each other to develop an industry-wide standard for high-density optical storage. Another example is the Global Food Safety Initiative, where members of the Consumer Goods Forum define benchmarking requirements for harmonization and recognize scheme owners using private standards for food safety. Also, editors of Wikipedia follow their own self-imposed rules for editing.

In 2024, the 118th U.S. Congress considered a bill to clarify copyright protection of standards incorporated by reference in legislation.[30] The proposed law would require free public online access to standards, where they could be viewed, but not printed or downloaded.
https://en.wikipedia.org/wiki/Standards_organization
The Artin reciprocity law, which was established by Emil Artin in a series of papers (1924; 1927; 1930), is a general theorem in number theory that forms a central part of global class field theory.[1] The term "reciprocity law" refers to a long line of more concrete number theoretic statements which it generalized, from the quadratic reciprocity law and the reciprocity laws of Eisenstein and Kummer to Hilbert's product formula for the norm symbol. Artin's result provided a partial solution to Hilbert's ninth problem.

Let $L/K$ be a Galois extension of global fields and $C_L$ stand for the idèle class group of $L$. One of the statements of the Artin reciprocity law is that there is a canonical isomorphism, called the global symbol map,[2][3]

$$\theta :C_{K}/{N_{L/K}(C_{L})}\;\to\; \operatorname {Gal} (L/K)^{\text{ab}},$$

where $\text{ab}$ denotes the abelianization of a group, and $\operatorname{Gal}(L/K)$ is the Galois group of $L$ over $K$. The map $\theta$ is defined by assembling the maps called the local Artin symbol, the local reciprocity map or the norm residue symbol,[4][5]

$$\theta _{v}:K_{v}^{\times }\;\to\; \operatorname {Gal} (L_{v}/K_{v})^{\text{ab}},$$

for different places $v$ of $K$. More precisely, $\theta$ is given by the local maps $\theta_v$ on the $v$-component of an idèle class. The maps $\theta_v$ are isomorphisms. This is the content of the local reciprocity law, a main theorem of local class field theory.

A cohomological proof of the global reciprocity law can be achieved by first establishing that the Galois groups of extensions of $K$ together with their idèle class groups constitute a class formation in the sense of Artin and Tate.[6] Then one proves that

$${\hat {H}}^{0}(\operatorname {Gal} (L/K),C_{L})\simeq {\hat {H}}^{-2}(\operatorname {Gal} (L/K),\mathbb {Z} ),$$

where ${\hat {H}}^{i}$ denote the Tate cohomology groups. Working out the cohomology groups establishes that $\theta$ is an isomorphism.

Artin's reciprocity law implies a description of the abelianization of the absolute Galois group of a global field $K$ which is based on the Hasse local–global principle and the use of the Frobenius elements. Together with the Takagi existence theorem, it is used to describe the abelian extensions of $K$ in terms of the arithmetic of $K$ and to understand the behavior of the nonarchimedean places in them. Therefore, the Artin reciprocity law can be interpreted as one of the main theorems of global class field theory. It can be used to prove that Artin L-functions are meromorphic, and also to prove the Chebotarev density theorem.[7]

Two years after the publication of his general reciprocity law in 1927, Artin rediscovered the transfer homomorphism of I. Schur and used the reciprocity law to translate the principalization problem for ideal classes of algebraic number fields into the group theoretic task of determining the kernels of transfers of finite non-abelian groups.[8]

The definition of the Artin map for a finite abelian extension $L/K$ of global fields (such as a finite abelian extension of $\mathbb{Q}$) has a concrete description in terms of prime ideals and Frobenius elements. If $\mathfrak{p}$ is a prime of $K$ then the decomposition groups of primes $\mathfrak{P}$ above $\mathfrak{p}$ are equal in $\operatorname{Gal}(L/K)$ since the latter group is abelian.
If $\mathfrak{p}$ is unramified in $L$, then the decomposition group $D_{\mathfrak {p}}$ is canonically isomorphic to the Galois group of the extension of residue fields ${\mathcal {O}}_{L,{\mathfrak {P}}}/{\mathfrak {P}}$ over ${\mathcal {O}}_{K,{\mathfrak {p}}}/{\mathfrak {p}}$. There is therefore a canonically defined Frobenius element in $\operatorname{Gal}(L/K)$ denoted by $\mathrm {Frob} _{\mathfrak {p}}$ or $\left({\frac {L/K}{\mathfrak {p}}}\right)$.

If $\Delta$ denotes the relative discriminant of $L/K$, the Artin symbol (or Artin map, or (global) reciprocity map) of $L/K$ is defined on the group of prime-to-$\Delta$ fractional ideals, $I_{K}^{\Delta }$, by linearity:

$$\left({\frac {L/K}{\cdot }}\right):I_{K}^{\Delta }\to \operatorname {Gal} (L/K),\qquad \prod _{i=1}^{m}{\mathfrak {p}}_{i}^{n_{i}}\mapsto \prod _{i=1}^{m}\left({\frac {L/K}{{\mathfrak {p}}_{i}}}\right)^{n_{i}}.$$

The Artin reciprocity law (or global reciprocity law) states that there is a modulus $\mathbf{c}$ of $K$ such that the Artin map induces an isomorphism

$$I_{K}^{\mathbf {c} }/i(K_{\mathbf {c} ,1})\,\mathrm {N} _{L/K}(I_{L}^{\mathbf {c} })\;{\overset {\sim }{\longrightarrow }}\;\operatorname {Gal} (L/K),$$

where $K_{\mathbf{c},1}$ is the ray modulo $\mathbf{c}$, $\mathrm{N}_{L/K}$ is the norm map associated to $L/K$ and $I_{L}^{\mathbf {c} }$ is the group of fractional ideals of $L$ prime to $\mathbf{c}$. Such a modulus $\mathbf{c}$ is called a defining modulus for $L/K$. The smallest defining modulus is called the conductor of $L/K$ and typically denoted ${\mathfrak {f}}(L/K)$.

If $d\neq 1$ is a squarefree integer, $K=\mathbb {Q}$, and $L=\mathbb {Q} ({\sqrt {d}})$, then $\operatorname {Gal} (L/\mathbb {Q} )$ can be identified with $\{\pm 1\}$. The discriminant $\Delta$ of $L$ over $\mathbb {Q}$ is $d$ or $4d$ depending on whether $d\equiv 1{\pmod {4}}$ or not. The Artin map is then defined on primes $p$ that do not divide $\Delta$ by

$$p\mapsto \left({\frac {\Delta }{p}}\right),$$

where $\left({\frac {\Delta }{p}}\right)$ is the Kronecker symbol.[9] More specifically, the conductor of $L/\mathbb {Q}$ is the principal ideal $(\Delta )$ or $(\Delta )\infty$ according to whether $\Delta$ is positive or negative,[10] and the Artin map on a prime-to-$\Delta$ ideal $(n)$ is given by the Kronecker symbol $\left({\frac {\Delta }{n}}\right)$. This shows that a prime $p$ is split or inert in $L$ according to whether $\left({\frac {\Delta }{p}}\right)$ is $1$ or $-1$.

Let $m>1$ be either an odd integer or a multiple of 4, let $\zeta _{m}$ be a primitive $m$th root of unity, and let $L=\mathbb {Q} (\zeta _{m})$ be the $m$th cyclotomic field. $\operatorname {Gal} (L/\mathbb {Q} )$ can be identified with $(\mathbb {Z} /m\mathbb {Z} )^{\times }$ by sending $\sigma$ to $a_{\sigma }$ given by the rule

$$\sigma (\zeta _{m})=\zeta _{m}^{a_{\sigma }}.$$

The conductor of $L/\mathbb {Q}$ is $(m)\infty$,[11] and the Artin map on a prime-to-$m$ ideal $(n)$ is simply $n{\pmod {m}}$ in $(\mathbb {Z} /m\mathbb {Z} )^{\times }$.[12]
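As a concrete illustration of the quadratic field example above, here is a small Python sketch (not part of the article) that decides whether an odd prime splits or is inert in $\mathbb{Q}(\sqrt{d})$ by computing the Legendre symbol of the discriminant via Euler's criterion; the function names and sample values are illustrative assumptions.

```python
def legendre_symbol(a, p):
    """Legendre symbol (a/p) for an odd prime p, via Euler's criterion."""
    ls = pow(a % p, (p - 1) // 2, p)
    return -1 if ls == p - 1 else ls

def behavior_in_quadratic_field(d, p):
    """Classify an odd prime p in Q(sqrt(d)) for a squarefree integer d != 1."""
    delta = d if d % 4 == 1 else 4 * d   # discriminant of Q(sqrt(d))
    if delta % p == 0:
        return "ramified"
    return "split" if legendre_symbol(delta, p) == 1 else "inert"

# In Q(sqrt(-1)) = Q(i), primes congruent to 1 mod 4 split and those
# congruent to 3 mod 4 stay inert, matching the Kronecker symbol criterion.
for p in [3, 5, 7, 11, 13]:
    print(p, behavior_in_quadratic_field(-1, p))
```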
Let $p$ and $\ell$ be distinct odd primes. For convenience, let $\ell ^{*}=(-1)^{\frac {\ell -1}{2}}\ell$ (which is always $1{\pmod {4}}$). Then quadratic reciprocity states that

$$\left({\frac {\ell ^{*}}{p}}\right)=\left({\frac {p}{\ell }}\right).$$

The relation between the quadratic and Artin reciprocity laws is given by studying the quadratic field $F=\mathbb {Q} ({\sqrt {\ell ^{*}}})$ and the cyclotomic field $L=\mathbb {Q} (\zeta _{\ell })$ as follows.[9] First, $F$ is a subfield of $L$, so if $H=\operatorname{Gal}(L/F)$ and $G=\operatorname {Gal} (L/\mathbb {Q} )$, then $\operatorname {Gal} (F/\mathbb {Q} )=G/H$. Since the latter has order 2, the subgroup $H$ must be the group of squares in $(\mathbb {Z} /\ell \mathbb {Z} )^{\times }$. A basic property of the Artin symbol says that for every prime-to-$\ell$ ideal $(n)$

$$\left({\frac {F/\mathbb {Q} }{(n)}}\right)=\left({\frac {L/\mathbb {Q} }{(n)}}\right){\pmod {H}}.$$

When $n=p$, this shows that $\left({\frac {\ell ^{*}}{p}}\right)=1$ if and only if $p$ modulo $\ell$ is in $H$, i.e. if and only if $p$ is a square modulo $\ell$.

An alternative version of the reciprocity law, leading to the Langlands program, connects Artin L-functions associated to abelian extensions of a number field with Hecke L-functions associated to characters of the idèle class group.[13]

A Hecke character (or Größencharakter) of a number field $K$ is defined to be a quasicharacter of the idèle class group of $K$. Robert Langlands interpreted Hecke characters as automorphic forms on the reductive algebraic group GL(1) over the ring of adeles of $K$.[14]

Let $E/K$ be an abelian Galois extension with Galois group $G$. Then for any character $\sigma :G\to \mathbb {C} ^{\times }$ (i.e. one-dimensional complex representation of the group $G$), there exists a Hecke character $\chi$ of $K$ such that

$$L_{E/K}^{\mathrm {Artin} }(\sigma ,s)=L_{K}^{\mathrm {Hecke} }(\chi ,s),$$

where the left hand side is the Artin L-function associated to the extension with character $\sigma$ and the right hand side is the Hecke L-function associated with $\chi$ (see Section 7.D of [14]).

The formulation of the Artin reciprocity law as an equality of L-functions allows the formulation of a generalisation to $n$-dimensional representations, though a direct correspondence is still lacking.
https://en.wikipedia.org/wiki/Artin_symbol
Working memory is a cognitive system with a limited capacity that can hold information temporarily.[1] It is important for reasoning and the guidance of decision-making and behavior.[2][3] Working memory is often used synonymously with short-term memory, but some theorists consider the two forms of memory distinct, assuming that working memory allows for the manipulation of stored information, whereas short-term memory only refers to the short-term storage of information.[2][4] Working memory is a theoretical concept central to cognitive psychology, neuropsychology, and neuroscience.

The term "working memory" was coined by Miller, Galanter, and Pribram,[5][6] and was used in the 1960s in the context of theories that likened the mind to a computer. In 1968, Atkinson and Shiffrin[7] used the term to describe their "short-term store". The term short-term store was the name previously used for working memory. Other suggested names were short-term memory, primary memory, immediate memory, operant memory, and provisional memory.[8] Short-term memory is the ability to remember information over a brief period (on the order of seconds). Most theorists today use the concept of working memory to replace or include the older concept of short-term memory, marking a stronger emphasis on the notion of manipulating information rather than mere maintenance.[citation needed]

The earliest mention of experiments on the neural basis of working memory can be traced back more than 100 years, when Hitzig and Ferrier described ablation experiments of the prefrontal cortex (PFC); they concluded that the frontal cortex was important for cognitive rather than sensory processes.[9] In 1935 and 1936, Carlyle Jacobsen and colleagues were the first to show the deleterious effect of prefrontal ablation on delayed response.[9][10]

Numerous models have been proposed for how working memory functions, both anatomically and cognitively. Of those, the two that have been most influential are summarized below.

In 1974, Baddeley and Hitch[11] introduced the multicomponent model of working memory. The theory proposed a model containing three components: the central executive, the phonological loop, and the visuospatial sketchpad, with the central executive functioning as a control center of sorts, directing information between the phonological and visuospatial components.[12] The central executive is responsible for, among other things, directing attention to relevant information, suppressing irrelevant information and inappropriate actions, and coordinating cognitive processes when more than one task is performed simultaneously. A "central executive" is responsible for supervising the integration of information and for coordinating subordinate systems responsible for the short-term maintenance of information. One subordinate system, the phonological loop (PL), stores phonological information (that is, the sound of language) and prevents its decay by continuously refreshing it in a rehearsal loop. It can, for example, maintain a seven-digit telephone number for as long as one keeps repeating the number to oneself.[13] The other subordinate system, the visuospatial sketchpad, stores visual and spatial information. It can be used, for example, for constructing and manipulating visual images and for representing mental maps.
The sketchpad can be further broken down into a visual subsystem (dealing with such phenomena as shape, colour, and texture) and a spatial subsystem (dealing with location).[citation needed]

In 2000, Baddeley extended the model by adding a fourth component, the episodic buffer, which holds representations that integrate phonological, visual, and spatial information, and possibly information not covered by the subordinate systems (e.g., semantic information, musical information). The episodic buffer is also the link between working memory and long-term memory.[14] The component is episodic because it is assumed to bind information into a unitary episodic representation. The episodic buffer resembles Tulving's concept of episodic memory, but it differs in that the episodic buffer is a temporary store.[15]

Anders Ericsson and Walter Kintsch[16] have introduced the notion of "long-term working memory", which they define as a set of "retrieval structures" in long-term memory that enable seamless access to the information relevant for everyday tasks. In this way, parts of long-term memory effectively function as working memory.

In a similar vein, Cowan does not regard working memory as a separate system from long-term memory. Representations in working memory are a subset of representations in long-term memory. Working memory is organized into two embedded levels. The first consists of long-term memory representations that are activated. There can be many of these—there is theoretically no limit to the activation of representations in long-term memory. The second level is called the focus of attention. The focus is regarded as having a limited capacity and holds up to four of the activated representations.[17]

Oberauer has extended Cowan's model by adding a third component—a narrower focus of attention that holds only one chunk at a time. The one-element focus is embedded in the four-element focus and serves to select a single chunk for processing. For example, four digits can be held in mind at the same time in Cowan's "focus of attention". When the individual wishes to perform a process on each of these digits—for example, adding the number two to each digit—separate processing is required for each digit, since most individuals cannot perform several mathematical processes in parallel.[18] Oberauer's attentional component selects one of the digits for processing and then shifts the attentional focus to the next digit, continuing until all digits have been processed.[19]

Working memory is widely acknowledged as having limited capacity. An early quantification of the capacity limit associated with short-term memory was the "magical number seven" suggested by Miller in 1956.[20] Miller claimed that the information-processing capacity of young adults is around seven elements, referred to as "chunks", regardless of whether the elements are digits, letters, words, or other units. Later research revealed that this number depends on the category of chunks used (e.g., span may be around seven for digits, six for letters, and five for words), and even on features of the chunks within a category. For instance, memory span is lower for long words than for short words. In general, memory span for verbal contents (digits, letters, words, etc.)
depends on the phonological complexity of the content (i.e., the number of phonemes, the number of syllables)[21] and on the lexical status of the contents (whether the contents are words known to the person or not).[22] Several other factors affect a person's measured span, and therefore it is difficult to pin down the capacity of short-term or working memory to a number of chunks. Nonetheless, Cowan proposed that working memory has a capacity of about four chunks in young adults (and fewer in children and old adults).[23]

In the visual domain, some investigations report no fixed capacity limit with respect to the total number of items that can be held in working memory. Instead, the results argue for a limited resource that can be flexibly shared between items retained in memory (see below in Resource theories), with some items in the focus of attention being allocated more resource and recalled with greater precision.[24][25][26][27]

Whereas most adults can repeat about seven digits in correct order, some individuals have shown impressive enlargements of their digit span—up to 80 digits. This feat is possible by extensive training on an encoding strategy by which the digits in a list are grouped (usually in groups of three to five) and these groups are encoded as a single unit (a chunk). For this to succeed, participants must be able to recognize the groups as some known string of digits. One person studied by Ericsson and his colleagues, for example, used an extensive knowledge of racing times from the history of sports in the process of coding chunks: several such chunks could then be combined into a higher-order chunk, forming a hierarchy of chunks. In this way, only some chunks at the highest level of the hierarchy must be retained in working memory, and for retrieval the chunks are unpacked. That is, the chunks in working memory act as retrieval cues that point to the digits they contain. Practicing memory skills such as these does not expand working memory capacity proper: it is the capacity to transfer (and retrieve) information from long-term memory that is improved, according to Ericsson and Kintsch (1995; see also Gobet & Simon, 2000[28]).

Working memory capacity can be tested by a variety of tasks. A commonly used measure is a dual-task paradigm, combining a memory span measure with a concurrent processing task, sometimes referred to as "complex span". Daneman and Carpenter invented the first version of this kind of task, the "reading span", in 1980.[29] Subjects read a number of sentences (usually between two and six) and tried to remember the last word of each sentence. At the end of the list of sentences, they repeated back the words in their correct order. Other tasks that do not have this dual-task nature have also been shown to be good measures of working memory capacity.[30] Whereas Daneman and Carpenter believed that the combination of "storage" (maintenance) and processing is needed to measure working memory capacity, we know now that the capacity of working memory can be measured with short-term memory tasks that have no additional processing component.[31][32] Conversely, working memory capacity can also be measured with certain processing tasks that do not involve maintenance of information.[33][34] The question of what features a task must have to qualify as a good measure of working memory capacity is a topic of ongoing research.

Recently, several studies of visual working memory have used delayed response tasks.
These use analogue responses in a continuous space, rather than a binary (correct/incorrect) recall method, as often used in visual change detection tasks. Instead of asking participants to report whether a change occurred between the memory and probe array, delayed reproduction tasks require them to reproduce the precise quality of a visual feature, e.g. an object's location, orientation or colour.[24][25][26][27] In addition, combining visual features such as objects and their colors can support memory strategies based on elaboration, thereby reinforcing performance within the capacity of working memory.[35]

Measures of working-memory capacity are strongly related to performance in other complex cognitive tasks, such as reading comprehension and problem solving, and with measures of intelligence quotient.[36]

Some researchers have argued[37] that working-memory capacity reflects the efficiency of executive functions, most notably the ability to maintain multiple task-relevant representations in the face of distracting irrelevant information, and that such tasks seem to reflect individual differences in the ability to focus and maintain attention, particularly when other events are serving to capture attention. Both working memory and executive functions rely strongly, though not exclusively, on frontal brain areas.[38]

Other researchers have argued that the capacity of working memory is better characterized as the ability to mentally form relations between elements, or to grasp relations in given information. This idea has been advanced, among others, by Graeme Halford, who illustrated it by our limited ability to understand statistical interactions between variables.[39] These authors asked people to compare written statements about the relations between several variables to graphs illustrating the same or a different relation, as in the following sentence: "If the cake is from France, then it has more sugar if it is made with chocolate than if it is made with cream, but if the cake is from Italy, then it has more sugar if it is made with cream than if it is made with chocolate". This statement describes a relation between three variables (country, ingredient, and amount of sugar), which is the maximum most individuals can understand. The capacity limit apparent here is obviously not a memory limit (all relevant information can be seen continuously) but a limit to how many relationships are discerned simultaneously.[citation needed]

There are several hypotheses about the nature of the capacity limit. One is that a limited pool of cognitive resources is needed to keep representations active and thereby available for processing, and for carrying out processes.[40] Another hypothesis is that memory traces in working memory decay within a few seconds unless refreshed through rehearsal, and because the speed of rehearsal is limited, we can maintain only a limited amount of information.[41] Yet another idea is that representations held in working memory interfere with each other.[42]

The assumption that the contents of short-term or working memory decay over time, unless decay is prevented by rehearsal, goes back to the early days of experimental research on short-term memory.[43][44] It is also an important assumption in the multi-component theory of working memory.[45] The most elaborate decay-based theory of working memory to date is the "time-based resource sharing model".[46] This theory assumes that representations in working memory decay unless they are refreshed.
Refreshing them requires an attentional mechanism that is also needed for any concurrent processing task. When there are small time intervals in which the processing task does not require attention, this time can be used to refresh memory traces. The theory therefore predicts that the amount of forgetting depends on the temporal density (rate and duration) of attentional demands of the processing task—this density is called cognitive load. The cognitive load depends on two variables: the rate at which the processing task requires individual steps to be carried out, and the duration of each step. For example, if the processing task consists of adding digits, then having to add another digit every half-second places a higher cognitive load on the system than having to add another digit every two seconds. In a series of experiments, Barrouillet and colleagues have shown that memory for lists of letters depends neither on the number of processing steps nor the total time of processing but on cognitive load.[47]
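To make the notion of temporal density concrete, the relationship can be sketched roughly as follows (a schematic formalization consistent with the verbal description above, not a formula quoted from the cited work):

$$\mathrm{CL} \;\approx\; \frac{\sum_{i=1}^{N} a_i}{T} \;\approx\; \frac{a\,N}{T},$$

where $N$ is the number of attentional steps demanded by the processing task, $a_i \approx a$ is the time each step captures attention, and $T$ is the total time available. On this reading, adding a digit every half-second yields roughly four times the cognitive load of adding one every two seconds, matching the example above.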
The resource hypothesis, for example, was meant to explain the trade-off between maintenance and processing: the more information must be maintained in working memory, the slower and more error-prone concurrent processes become, and with a higher demand on concurrent processing, memory suffers. This trade-off has been investigated by tasks like the reading-span task described above. It has been found that the amount of trade-off depends on the similarity of the information to be remembered and the information to be processed. For example, remembering numbers while processing spatial information, or remembering spatial information while processing numbers, impair each other much less than when material of the same kind must be remembered and processed.[52] Also, remembering words and processing digits, or remembering digits and processing words, is easier than remembering and processing materials of the same category.[53] These findings are also difficult to explain for the decay hypothesis, because decay of memory representations should depend only on how long the processing task delays rehearsal or recall, not on the content of the processing task. A further problem for the decay hypothesis comes from experiments in which the recall of a list of letters was delayed, either by instructing participants to recall at a slower pace, or by instructing them to say an irrelevant word once or three times in between recall of each letter. Delaying recall had virtually no effect on recall accuracy.[54][55] The interference theory seems to fare best with explaining why the similarity between memory contents and the contents of concurrent processing tasks affects how much they impair each other. More similar materials are more likely to be confused, leading to retrieval competition. The capacity of working memory increases gradually over childhood[56] and declines gradually in old age.[57] Measures of performance on tests of working memory increase continuously between early childhood and adolescence, while the structure of correlations between different tests remains largely constant.[56] Starting with work in the Neo-Piagetian tradition,[58][59] theorists have argued that the growth of working-memory capacity is a major driving force of cognitive development. This hypothesis has received substantial empirical support from studies showing that the capacity of working memory is a strong predictor of cognitive abilities in childhood.[60] Particularly strong evidence for a role of working memory for development comes from a longitudinal study showing that working-memory capacity at one age predicts reasoning ability at a later age.[61] Studies in the Neo-Piagetian tradition have added to this picture by analyzing the complexity of cognitive tasks in terms of the number of items or relations that have to be considered simultaneously for a solution. Across a broad range of tasks, children manage task versions of the same level of complexity at about the same age, consistent with the view that working memory capacity limits the complexity they can handle at a given age.[62] Related evidence comes from research on language processing: children with language disorders performed below their age-matched peers on measures that tax working-memory capacity.
Such memory-storage deficits may reflect a contribution of these language disorders, or instead be a cause of them, but the findings do not point clearly to a deficit in the ability to rehearse information.[63] Although neuroscience studies support the notion that children rely on the prefrontal cortex for performing various working memory tasks, an fMRI meta-analysis on children compared to adults performing the n-back task revealed a lack of consistent prefrontal cortex activation in children, while posterior regions including the insular cortex and cerebellum remain intact.[64] Working memory is among the cognitive functions most sensitive to decline in old age.[65][66] Several explanations for this decline have been offered. One is the processing speed theory of cognitive aging by Tim Salthouse.[67] Drawing on the finding that cognitive processes generally slow as people grow older, Salthouse argues that slower processing leaves more time for working memory content to decay, thus reducing effective capacity. However, the decline of working memory capacity cannot be entirely attributed to slowing, because capacity declines more in old age than speed.[66][68] Another proposal is the inhibition hypothesis advanced by Lynn Hasher and Rose Zacks.[69] This theory assumes a general deficit in old age in the ability to inhibit irrelevant information. Thus, working memory should tend to be cluttered with irrelevant content that reduces the effective capacity for relevant content. The assumption of an inhibition deficit in old age has received much empirical support,[70] but, so far, it is not clear whether the decline in inhibitory ability fully explains the decline of working memory capacity. An explanation at the neural level of the decline of working memory and other cognitive functions in old age has been proposed by West.[71] She argues that working memory depends to a large degree on the prefrontal cortex, which deteriorates more than other brain regions as we grow old.
Prefrontal cortex hemodynamics also play an important role in the impairment of working memory associated with the sleep disorders that many older adults face, although the prefrontal cortex is not the only region affected: other brain regions have also shown an influence in neuroimaging studies.[72][73] fMRI studies have linked sleep deprivation to reduced prefrontal cortex function and an overall decrease in working memory performance.[74] Age-related decline in working memory can be briefly reversed using low intensity transcranial stimulation to synchronize rhythms in prefrontal and temporal areas.[75] The neurobiological bases for reduced working memory abilities have been studied in aging macaques, which naturally develop impairments in working memory and the executive functions.[76] Research has shown that aged macaques have reduced working memory-related neuronal firing in the dorsolateral prefrontal cortex, which arises in part from excessive cAMP-PKA-calcium signaling; this signaling opens nearby potassium channels that weaken the glutamate synapses on spines needed to maintain persistent firing across the delay period when there is no sensory stimulation.[77] Dysregulation of this process with age likely involves increased inflammation.[78] Sustained weakness leads to loss of dendritic spines, the site of essential glutamate connections.[79] Some studies of the effects of training on working memory, including the first by Torkel Klingberg, suggest that working memory in those with ADHD can improve with training.[80] This study found that a period of working memory training increases a range of cognitive abilities and increases IQ test scores. Another study by the same group[81] has shown that, after training, measured brain activity related to working memory increased in the prefrontal cortex, an area that many researchers have associated with working memory functions. One study has shown that working memory training increases the density of prefrontal and parietal dopamine receptors (specifically, DRD1) in test subjects.[82] However, subsequent experiments with the same training program have shown mixed results, with some successfully replicating, and others failing to replicate, the beneficial effects of training on cognitive performance.[83] In another influential study, training with a working memory task (the dual n-back task) improved performance on a fluid intelligence test in healthy young adults.[84] The improvement of fluid intelligence by training with the n-back task was replicated in 2010,[85] but two studies published in 2012 failed to reproduce the effect.[86][87] The combined evidence from about 30 experimental studies on the effectiveness of working-memory training has been evaluated by several meta-analyses.[88][89] The authors of these meta-analyses disagree in their conclusions as to whether or not working-memory training improves intelligence. Yet these meta-analyses agree that the more distant the outcome measure, the weaker the causal link: training working memory almost always yields increases in working memory, often in attention, and sometimes in academic performance, but it remains an open question which circumstances distinguish cases of successful from unsuccessful transfer of effects.[90][83] The first insights into the neuronal and neurotransmitter basis of working memory came from animal research.
The work of Jacobsen[91]and Fulton in the 1930s first showed that lesions to the PFC impaired spatial working memory performance in monkeys. The later work ofJoaquin Fuster[92]recorded the electrical activity of neurons in the PFC of monkeys while they were doing a delayed matching task. In that task, the monkey sees how the experimenter places a bit of food under one of two identical-looking cups. A shutter is then lowered for a variable delay period, screening off the cups from the monkey's view. After the delay, the shutter opens and the monkey is allowed to retrieve the food from under the cups. Successful retrieval in the first attempt – something the animal can achieve after some training on the task – requires holding the location of the food in memory over the delay period. Fuster found neurons in the PFC that fired mostly during the delay period, suggesting that they were involved in representing the food location while it was invisible. Later research has shown similar delay-active neurons also in the posteriorparietal cortex, thethalamus, thecaudate, and theglobus pallidus.[93]The work ofGoldman-Rakicand others showed that principal sulcal, dorsolateral PFC interconnects with all of these brain regions, and that neuronal microcircuits within PFC are able to maintain information in working memory through recurrent excitatory glutamate networks of pyramidal cells that continue to fire throughout the delay period.[94]These circuits are tuned by lateral inhibition from GABAergic interneurons.[95]The neuromodulatory arousal systems markedly alter PFC working memory function; for example, either too little or too much dopamine or norepinephrine impairs PFC network firing[96]and working memory performance.[97]A brain network analysis demonstrates that the FPC network requires less induced energy during working memory tasks than other functional brain networks. This finding underscores the efficient processing of the FPC network and highlights its crucial role in supporting working memory processes.[98] The research described above on persistent firing of certain neurons in the delay period of working memory tasks shows that the brain has a mechanism of keeping representations active without external input. Keeping representations active, however, is not enough if the task demands maintaining more than one chunk of information. In addition, the components and features of each chunk must be bound together to prevent them from being mixed up. For example, if a red triangle and a green square must be remembered at the same time, one must make sure that "red" is bound to "triangle" and "green" is bound to "square". One way of establishing such bindings is by having the neurons that represent features of the same chunk fire in synchrony, and those that represent features belonging to different chunks fire out of sync.[99]In the example, neurons representing redness would fire in synchrony with neurons representing the triangular shape, but out of sync with those representing the square shape. So far, there is no direct evidence that working memory uses this binding mechanism, and other mechanisms have been proposed as well.[100]It has been speculated that synchronous firing of neurons involved in working memory oscillate with frequencies in thethetaband (4 to 8 Hz). 
Indeed, the power of theta frequency in the EEG increases with working memory load,[101]and oscillations in the theta band measured over different parts of the skull become more coordinated when the person tries to remember the binding between two components of information.[102] Localization of brain functions in humans has become much easier with the advent ofbrain imagingmethods (PETandfMRI). This research has confirmed that areas in the PFC are involved in working memory functions. During the 1990s much debate had centered on the different functions of the ventrolateral (i.e., lower areas) and thedorsolateral (higher) areas of the PFC. A human lesion study provides additional evidence for the role of thedorsolateral prefrontal cortexin working memory.[103]One view was that the dorsolateral areas are responsible for spatial working memory and the ventrolateral areas for non-spatial working memory. Another view proposed a functional distinction, arguing that ventrolateral areas are mostly involved in pure maintenance of information, whereas dorsolateral areas are more involved in tasks requiring some processing of the memorized material. The debate is not entirely resolved but most of the evidence supports the functional distinction.[104] Brain imaging has revealed that working memory functions are not limited to the PFC. A review of numerous studies[105]shows areas of activation during working memory tasks scattered over a large part of the cortex. There is a tendency for spatial tasks to recruit more right-hemisphere areas, and for verbal and object working memory to recruit more left-hemisphere areas. The activation during verbal working memory tasks can be broken down into one component reflecting maintenance, in the left posterior parietal cortex, and a component reflecting subvocal rehearsal, in the left frontal cortex (Broca's area, known to be involved in speech production).[106] There is an emerging consensus that most working memory tasks recruit a network of PFC and parietal areas. A study has shown that during a working memory task the connectivity between these areas increases.[107]Another study has demonstrated that these areas are necessary for working memory, and not simply activated accidentally during working memory tasks, by temporarily blocking them throughtranscranial magnetic stimulation(TMS), thereby producing an impairment in task performance.[108] A current debate concerns the function of these brain areas. The PFC has been found to be active in a variety of tasks that require executive functions.[38]This has led some researchers to argue that the role of PFC in working memory is in controlling attention, selecting strategies, and manipulating information in working memory, but not in maintenance of information. 
The maintenance function is attributed to more posterior areas of the brain, including the parietal cortex.[109][110] Other authors interpret the activity in parietal cortex as reflecting executive functions, because the same area is also activated in other tasks requiring attention but not memory.[111] Evidence from decoding studies employing multi-voxel pattern analysis of fMRI data showed that the content of visual working memory can be decoded from activity patterns in visual cortex, but not prefrontal cortex.[112] This led to the suggestion that the maintenance function of visual working memory is performed by visual cortex, while the role of the prefrontal cortex is in executive control over working memory,[112] though it has been pointed out that such comparisons do not take into account the base rate of decoding across different regions.[113] A 2003 meta-analysis of 60 neuroimaging studies found that the left frontal cortex was involved in verbal working memory under low task demands and the right frontal cortex in spatial working memory. Brodmann's areas (BAs) 6, 8, and 9, in the superior frontal cortex, were involved when working memory must be continuously updated and when memory for temporal order had to be maintained. Right Brodmann areas 10 and 47 in the ventral frontal cortex were involved more frequently with demand for manipulation such as dual-task requirements or mental operations, and Brodmann area 7 in the posterior parietal cortex was also involved in all types of executive function.[114] Updating information in visual working memory is also influenced by the functional neural network connecting different brain regions.[115] The dorsolateral PFC plays a crucial role in this process. In particular, the middle frontal gyrus may be involved in the maintenance, and the frontal operculum in the controlled processing, of materials in working memory.[115] Studies have also shown the role of attentional switching in working memory updating, mediated by the superior parietal lobule.[115] Working memory updating also involves a repetition mechanism mediated by the temporal cortex.[115] In addition, working memory updating involves the sensory cortex in encoding and storing certain visual stimuli, such as geometric shapes (inferior occipital gyrus) and faces (fusiform gyrus).[115] Working memory has been suggested to involve two processes with different neuroanatomical locations in the frontal and parietal lobes:[116] first, a selection operation that retrieves the most relevant item, and second, an updating operation that changes the focus of attention directed upon it. Updating the attentional focus has been found to involve transient activation in the caudal superior frontal sulcus and posterior parietal cortex, while increasing demands on selection selectively change activation in the rostral superior frontal sulcus and posterior cingulate/precuneus.[116] Articulating the differential function of brain regions involved in working memory depends on tasks able to distinguish these functions.[117] Most brain imaging studies of working memory have used recognition tasks such as delayed recognition of one or several stimuli, or the n-back task, in which each new stimulus in a long series must be compared to the one presented n steps back in the series (a minimal sketch of the target rule is shown below). The advantage of recognition tasks is that they require minimal movement (just pressing one of two keys), making fixation of the head in the scanner easier.
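The following short Python sketch illustrates the n-back target rule just described. It is an illustrative aside rather than part of any cited study; the function name and the sample letter sequence are invented for the example.

```python
def nback_targets(stimuli, n):
    """Mark which trials are n-back targets: a trial is a target when the
    current stimulus matches the stimulus presented n steps earlier."""
    return [i >= n and stimuli[i] == stimuli[i - n] for i in range(len(stimuli))]

# Example with letter stimuli and n = 2.
sequence = list("ABACABBA")
print(nback_targets(sequence, 2))
# [False, False, True, False, True, False, False, False]
```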
Experimental research and research on individual differences in working memory, however, has used largely recall tasks (e.g., thereading span task, see below). It is not clear to what degree recognition and recall tasks reflect the same processes and the same capacity limitations. Brain imaging studies have been conducted with the reading span task or related tasks. Increased activation during these tasks was found in the PFC and, in several studies, also in theanterior cingulate cortex(ACC). People performing better on the task showed larger increase of activation in these areas, and their activation was correlated more over time, suggesting that their neural activity in these two areas was better coordinated, possibly due to stronger connectivity.[118][119] One approach to modeling the neurophysiology and the functioning of working memory isprefrontal cortex basal ganglia working memory (PBWM). In this model, the prefrontal cortex works hand-in-hand with the basal ganglia to accomplish the tasks of working memory. Many studies have shown this to be the case.[120]One used ablation techniques in patients who had had seizures and had damage to the prefrontal cortex and basal ganglia.[121]Researchers found that such damage resulted in decreased capacity to carry out the executive function of working memory.[121]Additional research conducted on patients with brain alterations due to methamphetamine use found that training working memory increases volume in the basal ganglia.[122] Working memory isimpaired by acute and chronic psychological stress. This phenomenon was first discovered in animal studies by Arnsten and colleagues,[123]who have shown that stress-inducedcatecholaminerelease in PFC rapidly decreases PFC neuronal firing and impairs working memory performance through feedforward, intracellular signaling pathways that open potassium channels to rapidly weaken prefrontal network connections.[124]This process of rapid changes in network strength is called Dynamic Network Connectivity,[125]and can be seen in human brain imaging when cortical functional connectivity rapidly changes in response to a stressor.[126]Exposure to chronic stress leads to more profound working memory deficits and additional architectural changes in PFC, including dendritic atrophy and spine loss,[127]which can be prevented by inhibition of protein kinase C signaling.[128]fMRIresearch has extended this research to humans, and confirms that reduced working memory caused by acute stress links to reduced activation of the PFC, and stress increased levels ofcatecholamines.[129]Imaging studies of medical students undergoing stressful exams have also shown weakened PFC functional connectivity, consistent with the animal studies.[130]The marked effects of stress on PFC structure and function may help to explain how stress can cause or exacerbate mental illness. The more stress in one's life, the lower the efficiency of working memory in performing simple cognitive tasks. Students who performed exercises that reduced the intrusion of negative thoughts showed an increase in their working memory capacity. Mood states (positive or negative) can have an influence on the neurotransmitter dopamine, which in turn can affect problem solving.[131] Excessive alcohol use can result in brain damage which impairs working memory.[132]Alcohol has an effect on theblood-oxygen-level-dependent(BOLD) response. 
The BOLD response correlates increased blood oxygenation with brain activity, which makes this response a useful tool for measuring neuronal activity.[133] During working memory tasks, the BOLD response is observed in regions of the brain such as the basal ganglia and thalamus. Adolescents who start drinking at a young age show a decreased BOLD response in these brain regions.[134] Alcohol-dependent young women in particular exhibit less of a BOLD response in parietal and frontal cortices when performing a spatial working memory task.[135] Binge drinking, specifically, can also affect one's performance on working memory tasks, particularly visual working memory.[136][137] Additionally, there seems to be a gender difference in how alcohol affects working memory: while women perform better on verbal working memory tasks after consuming alcohol compared to men, they appear to perform worse on spatial working memory tasks, as indicated by less brain activity.[138][139] Finally, age seems to be an additional factor. Older adults are more susceptible than others to the effects of alcohol on working memory.[140] Individual differences in working-memory capacity are to some extent heritable; that is, about half of the variation between individuals is related to differences in their genes.[141][142][143] The genetic component of variability of working-memory capacity is largely shared with that of fluid intelligence.[142][141] Little is known about which genes are related to the functioning of working memory. Within the theoretical framework of the multi-component model, one candidate gene has been proposed, namely ROBO1 for the hypothetical phonological loop component of working memory.[144] More recently, another gene, GPR12, has been linked to working memory. In genetically diverse mice, GPR12 was found to promote a protein necessary for working memory. When mice that were performing worse on memory tests than their control counterparts had their GPR12 protein levels increased, their performance improved from 50% to 80%, bringing the low-performing mice up to a level similar to their control counterparts.[145] Building on prior work in mice, such as tests relating the formimidoyltransferase cyclodeaminase (FTCD) gene to Morris water maze performance, researchers examined whether variants of the FTCD gene are associated with working memory in humans. An association was found, but it depended on the age of the individual: the FTCD variant appeared to affect only children, whose working memory performance was higher when it was present, with no comparable effect in adults.[146] Working memory capacity is correlated with learning outcomes in literacy and numeracy.
Initial evidence for this relation comes from the correlation between working-memory capacity and reading comprehension, as first observed by Daneman and Carpenter (1980)[147] and confirmed in a later meta-analytic review of several studies.[148] Subsequent work found that working memory performance in primary school children accurately predicted performance in mathematical problem solving.[149] One longitudinal study showed that a child's working memory at 5 years old is a better predictor of academic success than IQ.[150] A randomized controlled study of 580 children in Germany indicated that working memory training at age six had a significant positive effect on spatial working memory immediately after training, and that the effect gradually transferred to other areas, with significant and meaningful increases in reading comprehension, mathematics (geometry), and IQ (measured by Raven matrices). Additionally, a marked increase in the ability to inhibit impulses was detected in the follow-up after one year, measured as a higher score in the Go/No-Go task. Four years after the treatment, the effects persisted and were reflected in a 16 percentage point higher acceptance rate into the academic track (German Gymnasium), as compared to the control group.[90] In a large-scale screening study, one in ten children in mainstream classrooms were identified with working memory deficits. The majority of them performed very poorly in academic achievement, independent of their IQ.[151] Similarly, working memory deficits have been identified in national curriculum low-achievers as young as seven years of age.[152] Without appropriate intervention, these children lag behind their peers. A recent study of 37 school-age children with significant learning disabilities has shown that working memory capacity at baseline measurement, but not IQ, predicts learning outcomes two years later.[153] This suggests that working memory impairments are associated with low learning outcomes and constitute a high risk factor for educational underachievement in children. In children with learning disabilities such as dyslexia, ADHD, and developmental coordination disorder, a similar pattern is evident.[154][155][156][157] There is some evidence that optimal working memory performance links to the neural ability to focus attention on task-relevant information and to ignore distractions,[158] and that practice-related improvement in working memory is due to increasing these abilities.[159] One line of research suggests a link between the working memory capacities of a person and their ability to control the orientation of attention to stimuli in the environment.[160] Such control enables people to attend to information important for their current goals, and to ignore goal-irrelevant stimuli that tend to capture their attention due to their sensory saliency (such as an ambulance siren).
The direction of attention according to one's goals is assumed to rely on "top-down" signals from the pre-frontal cortex (PFC) that bias processing in posterior cortical areas.[161] Capture of attention by salient stimuli is assumed to be driven by "bottom-up" signals from subcortical structures and the primary sensory cortices.[162] The ability to override "bottom-up" capture of attention differs between individuals, and this difference has been found to correlate with their performance in a working-memory test for visual information.[160] Another study, however, found no correlation between the ability to override attentional capture and measures of more general working-memory capacity.[163] An impairment of working memory functioning is normally seen in several neural disorders. Several authors[164] have proposed that symptoms of ADHD arise from a primary deficit in a specific executive function (EF) domain such as working memory, response inhibition, or a more general weakness in executive control.[165] A meta-analytical review cites several studies that found significantly lower group results for ADHD in spatial and verbal working memory tasks, and in several other EF tasks. However, the authors concluded that EF weaknesses are neither necessary nor sufficient to cause all cases of ADHD.[165] Several neurotransmitters, such as dopamine and glutamate, may be involved in both ADHD and working memory. Both are associated with the frontal brain, self-direction and self-regulation, but cause and effect have not been confirmed, so it is unclear whether working memory dysfunction leads to ADHD, or ADHD distractibility leads to poor functionality of working memory, or if there is some other connection.[166][167][168] Patients with Parkinson's disease show signs of reduced verbal working memory function. Researchers sought to determine whether this reduction is due to a lack of ability to focus on relevant tasks or to a low memory capacity. Twenty-one patients with Parkinson's were tested in comparison to a control group of 28 participants of the same age. The researchers found that both hypothesized factors contributed to the reduced working memory function, which did not fully agree with their hypothesis that it would be one or the other.[169] As Alzheimer's disease becomes more serious, working memory function declines. In addition to deficits in episodic memory, Alzheimer's disease is associated with impairments in visual short-term memory, assessed using delayed reproduction tasks.[170][171][172] These investigations point to a deficit in visual feature binding as an important component of the deficit in Alzheimer's disease. One study focused on the neural connections and fluidity of working memory in mouse brains. Half of the mice were given an injection that mimicked the effects of Alzheimer's, and the other half were not. The mice were then required to navigate a maze, a task used to test working memory. The study helps answer questions about how Alzheimer's can degrade working memory and ultimately destroy memory functions.[173] A group of researchers conducted a 30-month longitudinal study of the function and connectivity of working memory.
It found that connectivity in certain brain regions was decreased in patients with pre-symptomatic Huntington's disease, in comparison to the control group, whose connectivity remained consistently functional.[174] A recent study by Li and colleagues showed evidence that the same brain regions responsible for working memory are also responsible for how much humans trust those memories. In the past, studies have shown that individuals can evaluate how much they trust their own memories, but how humans do this was largely unknown. Using spatial memory tests and fMRI scans, the researchers tracked where and when the information was being stored and used this data to determine memory errors. They also asked the participants to express how uncertain they were about their memories. With both sets of information, the researchers could conclude that memory and the trust in that memory are stored within the same brain region.[175]
https://en.wikipedia.org/wiki/Working_memory
This is alist of file formatsused bycomputers, organized by type.Filename extensionis usually noted in parentheses if they differ from thefile format's name or abbreviation. Manyoperating systemsdo not limit filenames to one extension shorter than 4 characters, as was common with some operating systems that supported theFile Allocation Table(FAT) file system. Examples of operating systems that do not impose this limit includeUnix-likesystems, andMicrosoft WindowsNT,95-98, andMEwhich have no three character limit on extensions for32-bitor64-bitapplications onfile systemsother than pre-Windows 95 and Windows NT 3.5 versions of the FAT file system. Some filenames are given extensions longer than three characters. While MS-DOS and NT always treat the suffix after the last period in a file's name as its extension, in UNIX-like systems, the final period does not necessarily mean that the text after the last period is the file's extension.[1] Some file formats, such as.txtor.text, may be listed multiple times. Computer-aidedis a prefix for several categories of tools (e.g., design, manufacture, engineering) which assist professionals in their respective fields (e.g.,machining,architecture,schematics). Computer-aided design(CAD) software assists engineers, architects and other design professionals in project design. Electronic design automation(EDA), or electronic computer-aided design (ECAD), is specific to the field of electrical engineering. Files output fromAutomatic Test Equipmentor post-processed from such. These files storeformatted textandplain text. These file formats allow for the rapid creation of new binary file formats. Raster or bitmapfiles store images as a group ofpixels. Vector graphicsuse geometric primitives such as points, lines, curves, and polygons to represent images. 3D graphicsare 3D models that allow building models in real-time or non-real-time 3D rendering. Object extensions: Formats of files used for bibliographic information (citation) management. Molecular biology and bioinformatics: Authentication and general encryption formats are listed here. This section shows file formats for encrypted general data, rather than a specific program's data. Passwordfiles (sometimes called keychain files) contain lists of other passwords, usually encrypted. List of common file formats of data for video games on systems that support filesystems, most commonly PC games. These formats are used by the video gameosu!. These formats are used by the video gameMinecraft. Formats used by games based on theTrackManiaengine. Formats used by games based on theDoomengine. Formats used by games based on theQuakeengine. Formats used by games based on theUnrealengine. Formats used by games based on this engine. Formats used byDiabloby Blizzard Entertainment. Formats used byBohemia Interactive.Operation:Flashpoint,ARMA 2, VBS2 Formats used byValve.Half-Life 2,Counter-Strike: Source,Day of Defeat: Source,Half-Life 2: Episode One,Team Fortress 2,Half-Life 2: Episode Two,Portal,Left 4 Dead,Left 4 Dead 2,Alien Swarm,Portal 2,Counter-Strike: Global Offensive,Titanfall,Insurgency,Titanfall 2,Day of Infamy Formats used inMetal Gear Rising: Revengeance,Bayonetta,Vanquish (video game),Nier: Automata List of the most common filename extensions used when a game'sROM imageor storage medium is copied from an originalread-only memory(ROM) device to an external memory such ashard diskforback uppurposes or for making the game playable with anemulator. 
In the case of cartridge-based software, if the platform-specific extension is not used, then the filename extensions ".rom" or ".bin" are usually used to clarify that the file contains a copy of the contents of a ROM. ROM, disk or tape images usually do not consist of one file or ROM, but rather contain the entire file or ROM structure within one file on the backup medium.[36] These file formats are fairly well defined by long-term use or a general standard, but the content of each file is often highly specific to particular software or has been extended by further standards for specific uses. These are filename extensions and broad types reused frequently with differing formats or no specific format by different programs.
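As an illustrative aside (not part of the format list above), the common convention of treating the text after the final period as the extension, noted earlier for MS-DOS and Windows NT, can be seen with Python's standard os.path.splitext; the sample filenames below are made up.

```python
import os.path

# os.path.splitext treats the text after the final period as the extension;
# on Unix-like systems this is only a naming convention, since the file
# system itself does not interpret extensions.
for name in ["archive.tar.gz", "game.rom", "README", ".bashrc"]:
    stem, ext = os.path.splitext(name)
    print(f"{name!r} -> stem {stem!r}, extension {ext!r}")
```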
https://en.wikipedia.org/wiki/List_of_file_formats
Inmathematical statistics, theKullback–Leibler(KL)divergence(also calledrelative entropyandI-divergence[1]), denotedDKL(P∥Q){\displaystyle D_{\text{KL}}(P\parallel Q)}, is a type ofstatistical distance: a measure of how much a modelprobability distributionQis different from a true probability distributionP.[2][3]Mathematically, it is defined as DKL(P∥Q)=∑x∈XP(x)log⁡P(x)Q(x).{\displaystyle D_{\text{KL}}(P\parallel Q)=\sum _{x\in {\mathcal {X}}}P(x)\,\log {\frac {P(x)}{Q(x)}}.} A simpleinterpretationof the KL divergence ofPfromQis theexpectedexcesssurprisefrom usingQas a model instead ofPwhen the actual distribution isP. While it is a measure of how different two distributions are and is thus a distance in some sense, it is not actually ametric, which is the most familiar and formal type of distance. In particular, it is not symmetric in the two distributions (in contrast tovariation of information), and does not satisfy thetriangle inequality. Instead, in terms ofinformation geometry, it is a type ofdivergence,[4]a generalization ofsquared distance, and for certain classes of distributions (notably anexponential family), it satisfies a generalizedPythagorean theorem(which applies to squared distances).[5] Relative entropy is always a non-negativereal number, with value 0 if and only if the two distributions in question are identical. It has diverse applications, both theoretical, such as characterizing the relative(Shannon) entropyin information systems, randomness in continuoustime-series, and information gain when comparing statistical models ofinference; and practical, such as applied statistics,fluid mechanics,neuroscience,bioinformatics, andmachine learning. Consider two probability distributionsPandQ. Usually,Prepresents the data, the observations, or a measured probability distribution. DistributionQrepresents instead a theory, a model, a description or an approximation ofP. The Kullback–Leibler divergenceDKL(P∥Q){\displaystyle D_{\text{KL}}(P\parallel Q)}is then interpreted as the average difference of the number of bits required for encoding samples ofPusing a code optimized forQrather than one optimized forP. Note that the roles ofPandQcan be reversed in some situations where that is easier to compute, such as with theexpectation–maximization algorithm (EM)andevidence lower bound (ELBO)computations. The relative entropy was introduced bySolomon KullbackandRichard LeiblerinKullback & Leibler (1951)as "the mean information for discrimination betweenH1{\displaystyle H_{1}}andH2{\displaystyle H_{2}}per observation fromμ1{\displaystyle \mu _{1}}",[6]where one is comparing two probability measuresμ1,μ2{\displaystyle \mu _{1},\mu _{2}}, andH1,H2{\displaystyle H_{1},H_{2}}are the hypotheses that one is selecting from measureμ1,μ2{\displaystyle \mu _{1},\mu _{2}}(respectively). 
They denoted this byI(1:2){\displaystyle I(1:2)}, and defined the "'divergence' betweenμ1{\displaystyle \mu _{1}}andμ2{\displaystyle \mu _{2}}" as the symmetrized quantityJ(1,2)=I(1:2)+I(2:1){\displaystyle J(1,2)=I(1:2)+I(2:1)}, which had already been defined and used byHarold Jeffreysin 1948.[7]InKullback (1959), the symmetrized form is again referred to as the "divergence", and the relative entropies in each direction are referred to as a "directed divergences" between two distributions;[8]Kullback preferred the termdiscrimination information.[9]The term "divergence" is in contrast to a distance (metric), since the symmetrized divergence does not satisfy the triangle inequality.[10]Numerous references to earlier uses of the symmetrized divergence and to otherstatistical distancesare given inKullback (1959, pp. 6–7, §1.3 Divergence). The asymmetric "directed divergence" has come to be known as the Kullback–Leibler divergence, while the symmetrized "divergence" is now referred to as theJeffreys divergence. Fordiscrete probability distributionsPandQdefined on the samesample space,X{\displaystyle {\mathcal {X}}},the relative entropy fromQtoPis defined[11]to be DKL(P∥Q)=∑x∈XP(x)log⁡P(x)Q(x),{\displaystyle D_{\text{KL}}(P\parallel Q)=\sum _{x\in {\mathcal {X}}}P(x)\,\log {\frac {P(x)}{Q(x)}}\,,} which is equivalent to DKL(P∥Q)=−∑x∈XP(x)log⁡Q(x)P(x).{\displaystyle D_{\text{KL}}(P\parallel Q)=-\sum _{x\in {\mathcal {X}}}P(x)\,\log {\frac {Q(x)}{P(x)}}\,.} In other words, it is theexpectationof the logarithmic difference between the probabilitiesPandQ, where the expectation is taken using the probabilitiesP. Relative entropy is only defined in this way if, for allx,Q(x)=0{\displaystyle Q(x)=0}impliesP(x)=0{\displaystyle P(x)=0}(absolute continuity). Otherwise, it is often defined as+∞{\displaystyle +\infty },[1]but the value+∞{\displaystyle \ +\infty \ }is possible even ifQ(x)≠0{\displaystyle Q(x)\neq 0}everywhere,[12][13]provided thatX{\displaystyle {\mathcal {X}}}is infinite in extent. Analogous comments apply to the continuous and general measure cases defined below. WheneverP(x){\displaystyle P(x)}is zero the contribution of the corresponding term is interpreted as zero because limx→0+xlog⁡(x)=0.{\displaystyle \lim _{x\to 0^{+}}x\,\log(x)=0\,.} For distributionsPandQof acontinuous random variable, relative entropy is defined to be the integral[14] DKL(P∥Q)=∫−∞∞p(x)log⁡p(x)q(x)dx,{\displaystyle D_{\text{KL}}(P\parallel Q)=\int _{-\infty }^{\infty }p(x)\,\log {\frac {p(x)}{q(x)}}\,dx\,,} wherepandqdenote theprobability densitiesofPandQ. More generally, ifPandQare probabilitymeasureson ameasurable spaceX,{\displaystyle {\mathcal {X}}\,,}andPisabsolutely continuouswith respect toQ, then the relative entropy fromQtoPis defined as DKL(P∥Q)=∫x∈Xlog⁡P(dx)Q(dx)P(dx),{\displaystyle D_{\text{KL}}(P\parallel Q)=\int _{x\in {\mathcal {X}}}\log {\frac {P(dx)}{Q(dx)}}\,P(dx)\,,} whereP(dx)Q(dx){\displaystyle {\frac {P(dx)}{Q(dx)}}}is theRadon–Nikodym derivativeofPwith respect toQ, i.e. the uniqueQalmost everywhere defined functionronX{\displaystyle {\mathcal {X}}}such thatP(dx)=r(x)Q(dx){\displaystyle P(dx)=r(x)Q(dx)}which exists becausePis absolutely continuous with respect toQ. Also we assume the expression on the right-hand side exists. Equivalently (by thechain rule), this can be written as DKL(P∥Q)=∫x∈XP(dx)Q(dx)log⁡P(dx)Q(dx)Q(dx),{\displaystyle D_{\text{KL}}(P\parallel Q)=\int _{x\in {\mathcal {X}}}{\frac {P(dx)}{Q(dx)}}\ \log {\frac {P(dx)}{Q(dx)}}\ Q(dx)\,,} which is theentropyofPrelative toQ. 
Continuing in this case, ifμ{\displaystyle \mu }is any measure onX{\displaystyle {\mathcal {X}}}for which densitiespandqwithP(dx)=p(x)μ(dx){\displaystyle P(dx)=p(x)\mu (dx)}andQ(dx)=q(x)μ(dx){\displaystyle Q(dx)=q(x)\mu (dx)}exist (meaning thatPandQare both absolutely continuous with respect toμ{\displaystyle \mu }),then the relative entropy fromQtoPis given as DKL(P∥Q)=∫x∈Xp(x)log⁡p(x)q(x)μ(dx).{\displaystyle D_{\text{KL}}(P\parallel Q)=\int _{x\in {\mathcal {X}}}p(x)\,\log {\frac {p(x)}{q(x)}}\ \mu (dx)\,.} Note that such a measureμ{\displaystyle \mu }for which densities can be defined always exists, since one can takeμ=12(P+Q){\textstyle \mu ={\frac {1}{2}}\left(P+Q\right)}although in practice it will usually be one that applies in the context likecounting measurefor discrete distributions, orLebesgue measureor a convenient variant thereof likeGaussian measureor the uniform measure on thesphere,Haar measureon aLie groupetc. for continuous distributions. The logarithms in these formulae are usually taken tobase2 if information is measured in units ofbits, or to baseeif information is measured innats. Most formulas involving relative entropy hold regardless of the base of the logarithm. Various conventions exist for referring toDKL(P∥Q){\displaystyle D_{\text{KL}}(P\parallel Q)}in words. Often it is referred to as the divergencebetweenPandQ, but this fails to convey the fundamental asymmetry in the relation. Sometimes, as in this article, it may be described as the divergence ofPfromQor as the divergencefromQtoP. This reflects theasymmetryinBayesian inference, which startsfromapriorQand updatestotheposteriorP. Another common way to refer toDKL(P∥Q){\displaystyle D_{\text{KL}}(P\parallel Q)}is as the relative entropy ofPwith respect toQor theinformation gainfromPoverQ. Kullback[3]gives the following example (Table 2.1, Example 2.1). LetPandQbe the distributions shown in the table and figure.Pis the distribution on the left side of the figure, abinomial distributionwithN=2{\displaystyle N=2}andp=0.4{\displaystyle p=0.4}.Qis the distribution on the right side of the figure, adiscrete uniform distributionwith the three possible outcomesx=0,1,2(i.e.X={0,1,2}{\displaystyle {\mathcal {X}}=\{0,1,2\}}), each with probabilityp=1/3{\displaystyle p=1/3}. Relative entropiesDKL(P∥Q){\displaystyle D_{\text{KL}}(P\parallel Q)}andDKL(Q∥P){\displaystyle D_{\text{KL}}(Q\parallel P)}are calculated as follows. 
This example uses thenatural logwith basee, designatedlnto get results innats(seeunits of information): DKL(P∥Q)=∑x∈XP(x)ln⁡P(x)Q(x)=925ln⁡9/251/3+1225ln⁡12/251/3+425ln⁡4/251/3=125(32ln⁡2+55ln⁡3−50ln⁡5)≈0.0852996,{\displaystyle {\begin{aligned}D_{\text{KL}}(P\parallel Q)&=\sum _{x\in {\mathcal {X}}}P(x)\,\ln {\frac {P(x)}{Q(x)}}\\&={\frac {9}{25}}\ln {\frac {9/25}{1/3}}+{\frac {12}{25}}\ln {\frac {12/25}{1/3}}+{\frac {4}{25}}\ln {\frac {4/25}{1/3}}\\&={\frac {1}{25}}\left(32\ln 2+55\ln 3-50\ln 5\right)\\&\approx 0.0852996,\end{aligned}}} DKL(Q∥P)=∑x∈XQ(x)ln⁡Q(x)P(x)=13ln⁡1/39/25+13ln⁡1/312/25+13ln⁡1/34/25=13(−4ln⁡2−6ln⁡3+6ln⁡5)≈0.097455.{\displaystyle {\begin{aligned}D_{\text{KL}}(Q\parallel P)&=\sum _{x\in {\mathcal {X}}}Q(x)\,\ln {\frac {Q(x)}{P(x)}}\\&={\frac {1}{3}}\,\ln {\frac {1/3}{9/25}}+{\frac {1}{3}}\,\ln {\frac {1/3}{12/25}}+{\frac {1}{3}}\,\ln {\frac {1/3}{4/25}}\\&={\frac {1}{3}}\left(-4\ln 2-6\ln 3+6\ln 5\right)\\&\approx 0.097455.\end{aligned}}} In the field of statistics, theNeyman–Pearson lemmastates that the most powerful way to distinguish between the two distributionsPandQbased on an observationY(drawn from one of them) is through the log of the ratio of their likelihoods:log⁡P(Y)−log⁡Q(Y){\displaystyle \log P(Y)-\log Q(Y)}. The KL divergence is the expected value of this statistic ifYis actually drawn fromP. Kullback motivated the statistic as an expected log likelihood ratio.[15] In the context ofcoding theory,DKL(P∥Q){\displaystyle D_{\text{KL}}(P\parallel Q)}can be constructed by measuring the expected number of extrabitsrequired tocodesamples fromPusing a code optimized forQrather than the code optimized forP. In the context ofmachine learning,DKL(P∥Q){\displaystyle D_{\text{KL}}(P\parallel Q)}is often called theinformation gainachieved ifPwould be used instead ofQwhich is currently used. By analogy with information theory, it is called therelative entropyofPwith respect toQ. Expressed in the language ofBayesian inference,DKL(P∥Q){\displaystyle D_{\text{KL}}(P\parallel Q)}is a measure of the information gained by revising one's beliefs from theprior probability distributionQto theposterior probability distributionP. In other words, it is the amount of information lost whenQis used to approximateP.[16] In applications,Ptypically represents the "true" distribution of data, observations, or a precisely calculated theoretical distribution, whileQtypically represents a theory, model, description, orapproximationofP. In order to find a distributionQthat is closest toP, we can minimize the KL divergence and compute aninformation projection. While it is astatistical distance, it is not ametric, the most familiar type of distance, but instead it is adivergence.[4]While metrics are symmetric and generalizelineardistance, satisfying thetriangle inequality, divergences are asymmetric and generalizesquareddistance, in some cases satisfying a generalizedPythagorean theorem. In generalDKL(P∥Q){\displaystyle D_{\text{KL}}(P\parallel Q)}does not equalDKL(Q∥P){\displaystyle D_{\text{KL}}(Q\parallel P)}, and the asymmetry is an important part of the geometry.[4]Theinfinitesimalform of relative entropy, specifically itsHessian, gives ametric tensorthat equals theFisher information metric; see§ Fisher information metric. 
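The calculation above can be reproduced numerically. The following Python sketch is illustrative only: the function name is made up, and the handling of edge cases follows the conventions stated in the definition (a term with P(x) = 0 contributes zero, and the divergence is taken to be +∞ when absolute continuity fails).

```python
import math

def kl_divergence(p, q):
    """Discrete relative entropy D_KL(P || Q) in nats.

    p and q map outcomes to probabilities on the same sample space.
    A term with P(x) = 0 contributes 0 (since x log x -> 0 as x -> 0+);
    if Q(x) = 0 while P(x) > 0, the divergence is +infinity.
    """
    total = 0.0
    for x, px in p.items():
        if px == 0:
            continue
        qx = q.get(x, 0.0)
        if qx == 0:
            return math.inf
        total += px * math.log(px / qx)
    return total

# Kullback's example: P is binomial with N = 2, p = 0.4; Q is uniform on {0, 1, 2}.
P = {0: 9/25, 1: 12/25, 2: 4/25}
Q = {0: 1/3, 1: 1/3, 2: 1/3}

print(kl_divergence(P, Q))   # ~0.0852996 nats
print(kl_divergence(Q, P))   # ~0.097455 nats
```

Running it returns approximately 0.0853 and 0.0975 nats, matching the values above and making the asymmetry of the divergence explicit.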
The Fisher information metric on the manifold of probability distributions determines the natural gradient used in information-geometric optimization algorithms.[17] Its quantum analogue is the Fubini–Study metric.[18] Relative entropy satisfies a generalized Pythagorean theorem for exponential families (geometrically interpreted as dually flat manifolds), and this allows one to minimize relative entropy by geometric means, for example by information projection and in maximum likelihood estimation.[5] The relative entropy is the Bregman divergence generated by the negative entropy, but it is also of the form of an f-divergence. For probabilities over a finite alphabet, it is unique in being a member of both of these classes of statistical divergences. An application of the Bregman divergence can be found in mirror descent.[19] Consider a growth-optimizing investor in a fair game with mutually exclusive outcomes (e.g. a "horse race" in which the official odds add up to one). The rate of return expected by such an investor is equal to the relative entropy between the investor's believed probabilities and the official odds.[20] This is a special case of a much more general connection between financial returns and divergence measures.[21] Financial risks are connected to DKL{\displaystyle D_{\text{KL}}} via information geometry.[22] Investors' views, the prevailing market view, and risky scenarios form triangles on the relevant manifold of probability distributions. The shape of the triangles determines key financial risks (both qualitatively and quantitatively). For instance, obtuse triangles in which investors' views and risk scenarios appear on "opposite sides" relative to the market describe negative risks, acute triangles describe positive exposure, and the right-angled situation in the middle corresponds to zero risk. Extending this concept, relative entropy can be hypothetically utilised to identify the behaviour of informed investors, if one takes this to be represented by the magnitude and deviations away from the prior expectations of fund flows, for example.[23] In information theory, the Kraft–McMillan theorem establishes that any directly decodable coding scheme for coding a message to identify one value xi{\displaystyle x_{i}} out of a set of possibilities X can be seen as representing an implicit probability distribution q(xi)=2−ℓi{\displaystyle q(x_{i})=2^{-\ell _{i}}} over X, where ℓi{\displaystyle \ell _{i}} is the length of the code for xi{\displaystyle x_{i}} in bits. Therefore, relative entropy can be interpreted as the expected extra message-length per datum that must be communicated if a code that is optimal for a given (wrong) distribution Q is used, compared to using a code based on the true distribution P: it is the excess entropy. DKL(P∥Q)=∑x∈Xp(x)log⁡1q(x)−∑x∈Xp(x)log⁡1p(x)=H(P,Q)−H(P){\displaystyle {\begin{aligned}D_{\text{KL}}(P\parallel Q)&=\sum _{x\in {\mathcal {X}}}p(x)\log {\frac {1}{q(x)}}-\sum _{x\in {\mathcal {X}}}p(x)\log {\frac {1}{p(x)}}\\[5pt]&=\mathrm {H} (P,Q)-\mathrm {H} (P)\end{aligned}}} where H(P,Q){\displaystyle \mathrm {H} (P,Q)} is the cross entropy of Q relative to P and H(P){\displaystyle \mathrm {H} (P)} is the entropy of P (which is the same as the cross-entropy of P with itself). The relative entropy DKL(P∥Q){\displaystyle D_{\text{KL}}(P\parallel Q)} can be thought of geometrically as a statistical distance, a measure of how far the distribution Q is from the distribution P. Geometrically it is a divergence: an asymmetric, generalized form of squared distance.
The cross-entropy H(P,Q){\displaystyle H(P,Q)} is itself such a measurement (formally a loss function), but it cannot be thought of as a distance, since H(P,P)=:H(P){\displaystyle H(P,P)=:H(P)} is not zero. This can be fixed by subtracting H(P){\displaystyle H(P)} to make DKL(P∥Q){\displaystyle D_{\text{KL}}(P\parallel Q)} agree more closely with our notion of distance, as the excess loss. The resulting function is asymmetric, and while this can be symmetrized (see § Symmetrised divergence), the asymmetric form is more useful. See § Interpretations for more on the geometric interpretation. Relative entropy relates to the "rate function" in the theory of large deviations.[24][25] Arthur Hobson proved that relative entropy is the only measure of difference between probability distributions that satisfies some desired properties, which are the canonical extension to those appearing in a commonly used characterization of entropy.[26] Consequently, mutual information is the only measure of mutual dependence that obeys certain related conditions, since it can be defined in terms of Kullback–Leibler divergence. Relative entropy is always non-negative, with DKL(P∥Q)=0{\displaystyle D_{\text{KL}}(P\parallel Q)=0} if and only if P and Q coincide as measures. In particular, if the divergence vanishes and P(dx)=p(x)μ(dx){\displaystyle P(dx)=p(x)\mu (dx)} and Q(dx)=q(x)μ(dx){\displaystyle Q(dx)=q(x)\mu (dx)}, then p(x)=q(x){\displaystyle p(x)=q(x)} μ{\displaystyle \mu }-almost everywhere. The entropy H(P){\displaystyle \mathrm {H} (P)} thus sets a minimum value for the cross-entropy H(P,Q){\displaystyle \mathrm {H} (P,Q)}, the expected number of bits required when using a code based on Q rather than P; and the Kullback–Leibler divergence therefore represents the expected number of extra bits that must be transmitted to identify a value x drawn from X, if a code is used corresponding to the probability distribution Q, rather than the "true" distribution P. Denote f(α):=DKL((1−α)Q+αP∥Q){\displaystyle f(\alpha ):=D_{\text{KL}}((1-\alpha )Q+\alpha P\parallel Q)} and note that DKL(P∥Q)=f(1){\displaystyle D_{\text{KL}}(P\parallel Q)=f(1)}.
The first derivative off{\displaystyle f}may be derived and evaluated as followsf′(α)=∑x∈X(P(x)−Q(x))(log⁡((1−α)Q(x)+αP(x)Q(x))+1)=∑x∈X(P(x)−Q(x))log⁡((1−α)Q(x)+αP(x)Q(x))f′(0)=0{\displaystyle {\begin{aligned}f'(\alpha )&=\sum _{x\in {\mathcal {X}}}(P(x)-Q(x))\left(\log \left({\frac {(1-\alpha )Q(x)+\alpha P(x)}{Q(x)}}\right)+1\right)\\&=\sum _{x\in {\mathcal {X}}}(P(x)-Q(x))\log \left({\frac {(1-\alpha )Q(x)+\alpha P(x)}{Q(x)}}\right)\\f'(0)&=0\end{aligned}}}Further derivatives may be derived and evaluated as followsf″(α)=∑x∈X(P(x)−Q(x))2(1−α)Q(x)+αP(x)f″(0)=∑x∈X(P(x)−Q(x))2Q(x)f(n)(α)=(−1)n(n−2)!∑x∈X(P(x)−Q(x))n((1−α)Q(x)+αP(x))n−1f(n)(0)=(−1)n(n−2)!∑x∈X(P(x)−Q(x))nQ(x)n−1{\displaystyle {\begin{aligned}f''(\alpha )&=\sum _{x\in {\mathcal {X}}}{\frac {(P(x)-Q(x))^{2}}{(1-\alpha )Q(x)+\alpha P(x)}}\\f''(0)&=\sum _{x\in {\mathcal {X}}}{\frac {(P(x)-Q(x))^{2}}{Q(x)}}\\f^{(n)}(\alpha )&=(-1)^{n}(n-2)!\sum _{x\in {\mathcal {X}}}{\frac {(P(x)-Q(x))^{n}}{\left((1-\alpha )Q(x)+\alpha P(x)\right)^{n-1}}}\\f^{(n)}(0)&=(-1)^{n}(n-2)!\sum _{x\in {\mathcal {X}}}{\frac {(P(x)-Q(x))^{n}}{Q(x)^{n-1}}}\end{aligned}}}Hence solving forDKL(P∥Q){\displaystyle D_{\text{KL}}(P\parallel Q)}via the Taylor expansion off{\displaystyle f}about0{\displaystyle 0}evaluated atα=1{\displaystyle \alpha =1}yieldsDKL(P∥Q)=∑n=0∞f(n)(0)n!=∑n=2∞1n(n−1)∑x∈X(Q(x)−P(x))nQ(x)n−1{\displaystyle {\begin{aligned}D_{\text{KL}}(P\parallel Q)&=\sum _{n=0}^{\infty }{\frac {f^{(n)}(0)}{n!}}\\&=\sum _{n=2}^{\infty }{\frac {1}{n(n-1)}}\sum _{x\in {\mathcal {X}}}{\frac {(Q(x)-P(x))^{n}}{Q(x)^{n-1}}}\end{aligned}}}P≤2Q{\displaystyle P\leq 2Q}a.s. is a sufficient condition for convergence of the series by the following absolute convergence argument∑n=2∞|1n(n−1)∑x∈X(Q(x)−P(x))nQ(x)n−1|=∑n=2∞1n(n−1)∑x∈X|Q(x)−P(x)||1−P(x)Q(x)|n−1≤∑n=2∞1n(n−1)∑x∈X|Q(x)−P(x)|≤∑n=2∞1n(n−1)=1{\displaystyle {\begin{aligned}\sum _{n=2}^{\infty }\left\vert {\frac {1}{n(n-1)}}\sum _{x\in {\mathcal {X}}}{\frac {(Q(x)-P(x))^{n}}{Q(x)^{n-1}}}\right\vert &=\sum _{n=2}^{\infty }{\frac {1}{n(n-1)}}\sum _{x\in {\mathcal {X}}}\left\vert Q(x)-P(x)\right\vert \left\vert 1-{\frac {P(x)}{Q(x)}}\right\vert ^{n-1}\\&\leq \sum _{n=2}^{\infty }{\frac {1}{n(n-1)}}\sum _{x\in {\mathcal {X}}}\left\vert Q(x)-P(x)\right\vert \\&\leq \sum _{n=2}^{\infty }{\frac {1}{n(n-1)}}\\&=1\end{aligned}}}P≤2Q{\displaystyle P\leq 2Q}a.s. is also a necessary condition for convergence of the series by the following proof by contradiction. Assume thatP>2Q{\displaystyle P>2Q}with measure strictly greater than0{\displaystyle 0}. It then follows that there must exist some valuesε>0{\displaystyle \varepsilon >0},ρ>0{\displaystyle \rho >0}, andU<∞{\displaystyle U<\infty }such thatP≥2Q+ε{\displaystyle P\geq 2Q+\varepsilon }andQ≤U{\displaystyle Q\leq U}with measureρ{\displaystyle \rho }. The previous proof of sufficiency demonstrated that the measure1−ρ{\displaystyle 1-\rho }component of the series whereP≤2Q{\displaystyle P\leq 2Q}is bounded, so we need only concern ourselves with the behavior of the measureρ{\displaystyle \rho }component of the series whereP≥2Q+ε{\displaystyle P\geq 2Q+\varepsilon }. The absolute value of then{\displaystyle n}th term of this component of the series is then lower bounded by1n(n−1)ρ(1+εU)n{\displaystyle {\frac {1}{n(n-1)}}\rho \left(1+{\frac {\varepsilon }{U}}\right)^{n}}, which is unbounded asn→∞{\displaystyle n\to \infty }, so the series diverges. The following result, due to Donsker and Varadhan,[29]is known asDonsker and Varadhan's variational formula. 
Theorem [Duality Formula for Variational Inference]—LetΘ{\displaystyle \Theta }be a set endowed with an appropriateσ{\displaystyle \sigma }-fieldF{\displaystyle {\mathcal {F}}}, and two probability measuresPandQ, which formulate twoprobability spaces(Θ,F,P){\displaystyle (\Theta ,{\mathcal {F}},P)}and(Θ,F,Q){\displaystyle (\Theta ,{\mathcal {F}},Q)}, withQ≪P{\displaystyle Q\ll P}. (Q≪P{\displaystyle Q\ll P}indicates thatQis absolutely continuous with respect toP.) Lethbe a real-valued integrablerandom variableon(Θ,F,P){\displaystyle (\Theta ,{\mathcal {F}},P)}. Then the following equality holds log⁡EP[exp⁡h]=supQ≪P⁡{EQ[h]−DKL(Q∥P)}.{\displaystyle \log E_{P}[\exp h]=\operatorname {sup} _{Q\ll P}\{E_{Q}[h]-D_{\text{KL}}(Q\parallel P)\}.} Further, the supremum on the right-hand side is attained if and only if it holds Q(dθ)P(dθ)=exp⁡h(θ)EP[exp⁡h],{\displaystyle {\frac {Q(d\theta )}{P(d\theta )}}={\frac {\exp h(\theta )}{E_{P}[\exp h]}},} almost surely with respect to probability measureP, whereQ(dθ)P(dθ){\displaystyle {\frac {Q(d\theta )}{P(d\theta )}}}denotes the Radon-Nikodym derivative ofQwith respect toP. For a short proof assuming integrability ofexp⁡(h){\displaystyle \exp(h)}with respect toP, letQ∗{\displaystyle Q^{*}}haveP-densityexp⁡h(θ)EP[exp⁡h]{\displaystyle {\frac {\exp h(\theta )}{E_{P}[\exp h]}}}, i.e.Q∗(dθ)=exp⁡h(θ)EP[exp⁡h]P(dθ){\displaystyle Q^{*}(d\theta )={\frac {\exp h(\theta )}{E_{P}[\exp h]}}P(d\theta )}Then DKL(Q∥Q∗)−DKL(Q∥P)=−EQ[h]+log⁡EP[exp⁡h].{\displaystyle D_{\text{KL}}(Q\parallel Q^{*})-D_{\text{KL}}(Q\parallel P)=-E_{Q}[h]+\log E_{P}[\exp h].} Therefore, EQ[h]−DKL(Q∥P)=log⁡EP[exp⁡h]−DKL(Q∥Q∗)≤log⁡EP[exp⁡h],{\displaystyle E_{Q}[h]-D_{\text{KL}}(Q\parallel P)=\log E_{P}[\exp h]-D_{\text{KL}}(Q\parallel Q^{*})\leq \log E_{P}[\exp h],} where the last inequality follows fromDKL(Q∥Q∗)≥0{\displaystyle D_{\text{KL}}(Q\parallel Q^{*})\geq 0}, for which equality occurs if and only ifQ=Q∗{\displaystyle Q=Q^{*}}. The conclusion follows. Suppose that we have twomultivariate normal distributions, with meansμ0,μ1{\displaystyle \mu _{0},\mu _{1}}and with (non-singular)covariance matricesΣ0,Σ1.{\displaystyle \Sigma _{0},\Sigma _{1}.}If the two distributions have the same dimension,k, then the relative entropy between the distributions is as follows:[30] DKL(N0∥N1)=12[tr⁡(Σ1−1Σ0)−k+(μ1−μ0)TΣ1−1(μ1−μ0)+ln⁡detΣ1detΣ0].{\displaystyle D_{\text{KL}}\left({\mathcal {N}}_{0}\parallel {\mathcal {N}}_{1}\right)={\frac {1}{2}}\left[\operatorname {tr} \left(\Sigma _{1}^{-1}\Sigma _{0}\right)-k+\left(\mu _{1}-\mu _{0}\right)^{\mathsf {T}}\Sigma _{1}^{-1}\left(\mu _{1}-\mu _{0}\right)+\ln {\frac {\det \Sigma _{1}}{\det \Sigma _{0}}}\right].} Thelogarithmin the last term must be taken to baseesince all terms apart from the last are base-elogarithms of expressions that are either factors of the density function or otherwise arise naturally. The equation therefore gives a result measured innats. Dividing the entire expression above byln⁡(2){\displaystyle \ln(2)}yields the divergence inbits. In a numerical implementation, it is helpful to express the result in terms of the Cholesky decompositionsL0,L1{\displaystyle L_{0},L_{1}}such thatΣ0=L0L0T{\displaystyle \Sigma _{0}=L_{0}L_{0}^{T}}andΣ1=L1L1T{\displaystyle \Sigma _{1}=L_{1}L_{1}^{T}}. 
Then withMandysolutions to the triangular linear systemsL1M=L0{\displaystyle L_{1}M=L_{0}}, andL1y=μ1−μ0{\displaystyle L_{1}y=\mu _{1}-\mu _{0}}, DKL(N0∥N1)=12(∑i,j=1k(Mij)2−k+|y|2+2∑i=1kln⁡(L1)ii(L0)ii).{\displaystyle D_{\text{KL}}\left({\mathcal {N}}_{0}\parallel {\mathcal {N}}_{1}\right)={\frac {1}{2}}\left(\sum _{i,j=1}^{k}{\left(M_{ij}\right)}^{2}-k+|y|^{2}+2\sum _{i=1}^{k}\ln {\frac {(L_{1})_{ii}}{(L_{0})_{ii}}}\right).} A special case, and a common quantity invariational inference, is the relative entropy between a diagonal multivariate normal, and a standard normal distribution (with zero mean and unit variance): DKL(N((μ1,…,μk)T,diag⁡(σ12,…,σk2))∥N(0,I))=12∑i=1k[σi2+μi2−1−ln⁡(σi2)].{\displaystyle D_{\text{KL}}\left({\mathcal {N}}\left(\left(\mu _{1},\ldots ,\mu _{k}\right)^{\mathsf {T}},\operatorname {diag} \left(\sigma _{1}^{2},\ldots ,\sigma _{k}^{2}\right)\right)\parallel {\mathcal {N}}\left(\mathbf {0} ,\mathbf {I} \right)\right)={\frac {1}{2}}\sum _{i=1}^{k}\left[\sigma _{i}^{2}+\mu _{i}^{2}-1-\ln \left(\sigma _{i}^{2}\right)\right].} For two univariate normal distributionspandqthe above simplifies to[31]DKL(p∥q)=log⁡σ1σ0+σ02+(μ0−μ1)22σ12−12{\displaystyle D_{\text{KL}}\left({\mathcal {p}}\parallel {\mathcal {q}}\right)=\log {\frac {\sigma _{1}}{\sigma _{0}}}+{\frac {\sigma _{0}^{2}+{\left(\mu _{0}-\mu _{1}\right)}^{2}}{2\sigma _{1}^{2}}}-{\frac {1}{2}}} In the case of co-centered normal distributions withk=σ1/σ0{\displaystyle k=\sigma _{1}/\sigma _{0}}, this simplifies[32]to: DKL(p∥q)=log2⁡k+(k−2−1)/2/ln⁡(2)bits{\displaystyle D_{\text{KL}}\left({\mathcal {p}}\parallel {\mathcal {q}}\right)=\log _{2}k+(k^{-2}-1)/2/\ln(2)\mathrm {bits} } Consider two uniform distributions, with the support ofp=[A,B]{\displaystyle p=[A,B]}enclosed withinq=[C,D]{\displaystyle q=[C,D]}(C≤A<B≤D{\displaystyle C\leq A<B\leq D}). Then the information gain is: DKL(p∥q)=log⁡D−CB−A{\displaystyle D_{\text{KL}}\left({\mathcal {p}}\parallel {\mathcal {q}}\right)=\log {\frac {D-C}{B-A}}} Intuitively,[32]the information gain to aktimes narrower uniform distribution containslog2⁡k{\displaystyle \log _{2}k}bits. This connects with the use of bits in computing, wherelog2⁡k{\displaystyle \log _{2}k}bits would be needed to identify one element of aklong stream. Theexponential familyof distribution is given by pX(x|θ)=h(x)exp⁡(θTT(x)−A(θ)){\displaystyle p_{X}(x|\theta )=h(x)\exp \left(\theta ^{\mathsf {T}}T(x)-A(\theta )\right)} whereh(x){\displaystyle h(x)}is reference measure,T(x){\displaystyle T(x)}is sufficient statistics,θ{\displaystyle \theta }is canonical natural parameters, andA(θ){\displaystyle A(\theta )}is the log-partition function. The KL divergence between two distributionsp(x|θ1){\displaystyle p(x|\theta _{1})}andp(x|θ2){\displaystyle p(x|\theta _{2})}is given by[33] DKL(θ1∥θ2)=(θ1−θ2)Tμ1−A(θ1)+A(θ2){\displaystyle D_{\text{KL}}(\theta _{1}\parallel \theta _{2})={\left(\theta _{1}-\theta _{2}\right)}^{\mathsf {T}}\mu _{1}-A(\theta _{1})+A(\theta _{2})} whereμ1=Eθ1[T(X)]=∇A(θ1){\displaystyle \mu _{1}=E_{\theta _{1}}[T(X)]=\nabla A(\theta _{1})}is the mean parameter ofp(x|θ1){\displaystyle p(x|\theta _{1})}. For example, for the Poisson distribution with meanλ{\displaystyle \lambda }, the sufficient statisticsT(x)=x{\displaystyle T(x)=x}, the natural parameterθ=log⁡λ{\displaystyle \theta =\log \lambda }, and log partition functionA(θ)=eθ{\displaystyle A(\theta )=e^{\theta }}. 
As such, the divergence between two Poisson distributions with meansλ1{\displaystyle \lambda _{1}}andλ2{\displaystyle \lambda _{2}}is DKL(λ1∥λ2)=λ1log⁡λ1λ2−λ1+λ2.{\displaystyle D_{\text{KL}}(\lambda _{1}\parallel \lambda _{2})=\lambda _{1}\log {\frac {\lambda _{1}}{\lambda _{2}}}-\lambda _{1}+\lambda _{2}.} As another example, for a normal distribution with unit varianceN(μ,1){\displaystyle N(\mu ,1)}, the sufficient statisticsT(x)=x{\displaystyle T(x)=x}, the natural parameterθ=μ{\displaystyle \theta =\mu }, and log partition functionA(θ)=μ2/2{\displaystyle A(\theta )=\mu ^{2}/2}. Thus, the divergence between two normal distributionsN(μ1,1){\displaystyle N(\mu _{1},1)}andN(μ2,1){\displaystyle N(\mu _{2},1)}is DKL(μ1∥μ2)=(μ1−μ2)μ1−μ122+μ222=(μ2−μ1)22.{\displaystyle D_{\text{KL}}(\mu _{1}\parallel \mu _{2})=\left(\mu _{1}-\mu _{2}\right)\mu _{1}-{\frac {\mu _{1}^{2}}{2}}+{\frac {\mu _{2}^{2}}{2}}={\frac {{\left(\mu _{2}-\mu _{1}\right)}^{2}}{2}}.} As final example, the divergence between a normal distribution with unit varianceN(μ,1){\displaystyle N(\mu ,1)}and a Poisson distribution with meanλ{\displaystyle \lambda }is DKL(μ∥λ)=(μ−log⁡λ)μ−μ22+λ.{\displaystyle D_{\text{KL}}(\mu \parallel \lambda )=(\mu -\log \lambda )\mu -{\frac {\mu ^{2}}{2}}+\lambda .} While relative entropy is astatistical distance, it is not ametricon the space of probability distributions, but instead it is adivergence.[4]While metrics are symmetric and generalizelineardistance, satisfying thetriangle inequality, divergences are asymmetric in general and generalizesquareddistance, in some cases satisfying a generalizedPythagorean theorem. In generalDKL(P∥Q){\displaystyle D_{\text{KL}}(P\parallel Q)}does not equalDKL(Q∥P){\displaystyle D_{\text{KL}}(Q\parallel P)}, and while this can be symmetrized (see§ Symmetrised divergence), the asymmetry is an important part of the geometry.[4] It generates atopologyon the space ofprobability distributions. More concretely, if{P1,P2,…}{\displaystyle \{P_{1},P_{2},\ldots \}}is a sequence of distributions such that limn→∞DKL(Pn∥Q)=0,{\displaystyle \lim _{n\to \infty }D_{\text{KL}}(P_{n}\parallel Q)=0,} then it is said that Pn→DQ.{\displaystyle P_{n}\xrightarrow {D} \,Q.} Pinsker's inequalityentails that Pn→DP⇒Pn→TVP,{\displaystyle P_{n}\xrightarrow {D} P\Rightarrow P_{n}\xrightarrow {TV} P,} where the latter stands for the usual convergence intotal variation. Relative entropy is directly related to theFisher information metric. This can be made explicit as follows. Assume that the probability distributionsPandQare both parameterized by some (possibly multi-dimensional) parameterθ{\displaystyle \theta }. Consider then two close by values ofP=P(θ){\displaystyle P=P(\theta )}andQ=P(θ0){\displaystyle Q=P(\theta _{0})}so that the parameterθ{\displaystyle \theta }differs by only a small amount from the parameter valueθ0{\displaystyle \theta _{0}}. Specifically, up to first order one has (using theEinstein summation convention)P(θ)=P(θ0)+ΔθjPj(θ0)+⋯{\displaystyle P(\theta )=P(\theta _{0})+\Delta \theta _{j}\,P_{j}(\theta _{0})+\cdots } withΔθj=(θ−θ0)j{\displaystyle \Delta \theta _{j}=(\theta -\theta _{0})_{j}}a small change ofθ{\displaystyle \theta }in thejdirection, andPj(θ0)=∂P∂θj(θ0){\displaystyle P_{j}\left(\theta _{0}\right)={\frac {\partial P}{\partial \theta _{j}}}(\theta _{0})}the corresponding rate of change in the probability distribution. 
Since relative entropy has an absolute minimum 0 forP=Q{\displaystyle P=Q}, i.e.θ=θ0{\displaystyle \theta =\theta _{0}}, it changes only tosecondorder in the small parametersΔθj{\displaystyle \Delta \theta _{j}}. More formally, as for any minimum, the first derivatives of the divergence vanish ∂∂θj|θ=θ0DKL(P(θ)∥P(θ0))=0,{\displaystyle \left.{\frac {\partial }{\partial \theta _{j}}}\right|_{\theta =\theta _{0}}D_{\text{KL}}(P(\theta )\parallel P(\theta _{0}))=0,} and by theTaylor expansionone has up to second order DKL(P(θ)∥P(θ0))=12ΔθjΔθkgjk(θ0)+⋯{\displaystyle D_{\text{KL}}(P(\theta )\parallel P(\theta _{0}))={\frac {1}{2}}\,\Delta \theta _{j}\,\Delta \theta _{k}\,g_{jk}(\theta _{0})+\cdots } where theHessian matrixof the divergence gjk(θ0)=∂2∂θj∂θk|θ=θ0DKL(P(θ)∥P(θ0)){\displaystyle g_{jk}(\theta _{0})=\left.{\frac {\partial ^{2}}{\partial \theta _{j}\,\partial \theta _{k}}}\right|_{\theta =\theta _{0}}D_{\text{KL}}(P(\theta )\parallel P(\theta _{0}))} must bepositive semidefinite. Lettingθ0{\displaystyle \theta _{0}}vary (and dropping the subindex 0) the Hessiangjk(θ){\displaystyle g_{jk}(\theta )}defines a (possibly degenerate)Riemannian metricon theθparameter space, called the Fisher information metric. Whenp(x,ρ){\displaystyle p_{(x,\rho )}}satisfies the following regularity conditions: ∂log⁡(p)∂ρ,∂2log⁡(p)∂ρ2,∂3log⁡(p)∂ρ3{\displaystyle {\frac {\partial \log(p)}{\partial \rho }},{\frac {\partial ^{2}\log(p)}{\partial \rho ^{2}}},{\frac {\partial ^{3}\log(p)}{\partial \rho ^{3}}}}exist,|∂p∂ρ|<F(x):∫x=0∞F(x)dx<∞,|∂2p∂ρ2|<G(x):∫x=0∞G(x)dx<∞|∂3log⁡(p)∂ρ3|<H(x):∫x=0∞p(x,0)H(x)dx<ξ<∞{\displaystyle {\begin{aligned}\left|{\frac {\partial p}{\partial \rho }}\right|&<F(x):\int _{x=0}^{\infty }F(x)\,dx<\infty ,\\\left|{\frac {\partial ^{2}p}{\partial \rho ^{2}}}\right|&<G(x):\int _{x=0}^{\infty }G(x)\,dx<\infty \\\left|{\frac {\partial ^{3}\log(p)}{\partial \rho ^{3}}}\right|&<H(x):\int _{x=0}^{\infty }p(x,0)H(x)\,dx<\xi <\infty \end{aligned}}} whereξis independent ofρ∫x=0∞∂p(x,ρ)∂ρ|ρ=0dx=∫x=0∞∂2p(x,ρ)∂ρ2|ρ=0dx=0{\displaystyle \left.\int _{x=0}^{\infty }{\frac {\partial p(x,\rho )}{\partial \rho }}\right|_{\rho =0}\,dx=\left.\int _{x=0}^{\infty }{\frac {\partial ^{2}p(x,\rho )}{\partial \rho ^{2}}}\right|_{\rho =0}\,dx=0} then:D(p(x,0)∥p(x,ρ))=cρ22+O(ρ3)asρ→0.{\displaystyle {\mathcal {D}}(p(x,0)\parallel p(x,\rho ))={\frac {c\rho ^{2}}{2}}+{\mathcal {O}}\left(\rho ^{3}\right){\text{ as }}\rho \to 0.} Another information-theoretic metric isvariation of information, which is roughly a symmetrization ofconditional entropy. It is a metric on the set ofpartitionsof a discreteprobability space. MAUVE is a measure of the statistical gap between two text distributions, such as the difference between text generated by a model and human-written text. This measure is computed using Kullback–Leibler divergences between the two distributions in a quantized embedding space of a foundation model. Many of the other quantities of information theory can be interpreted as applications of relative entropy to specific cases. Theself-information, also known as theinformation contentof a signal, random variable, oreventis defined as the negative logarithm of theprobabilityof the given outcome occurring. 
When applied to adiscrete random variable, the self-information can be represented as[citation needed] I⁡(m)=DKL(δim∥{pi}),{\displaystyle \operatorname {\operatorname {I} } (m)=D_{\text{KL}}\left(\delta _{\text{im}}\parallel \{p_{i}\}\right),} is the relative entropy of the probability distributionP(i){\displaystyle P(i)}from aKronecker deltarepresenting certainty thati=m{\displaystyle i=m}— i.e. the number of extra bits that must be transmitted to identifyiif only the probability distributionP(i){\displaystyle P(i)}is available to the receiver, not the fact thati=m{\displaystyle i=m}. Themutual information, I⁡(X;Y)=DKL(P(X,Y)∥P(X)P(Y))=EX⁡{DKL(P(Y∣X)∥P(Y))}=EY⁡{DKL(P(X∣Y)∥P(X))}{\displaystyle {\begin{aligned}\operatorname {I} (X;Y)&=D_{\text{KL}}(P(X,Y)\parallel P(X)P(Y))\\[5pt]&=\operatorname {E} _{X}\{D_{\text{KL}}(P(Y\mid X)\parallel P(Y))\}\\[5pt]&=\operatorname {E} _{Y}\{D_{\text{KL}}(P(X\mid Y)\parallel P(X))\}\end{aligned}}} is the relative entropy of thejoint probability distributionP(X,Y){\displaystyle P(X,Y)}from the productP(X)P(Y){\displaystyle P(X)P(Y)}of the twomarginal probability distributions— i.e. the expected number of extra bits that must be transmitted to identifyXandYif they are coded using only their marginal distributions instead of the joint distribution. Equivalently, if the joint probabilityP(X,Y){\displaystyle P(X,Y)}isknown, it is the expected number of extra bits that must on average be sent to identifyYif the value ofXis not already known to the receiver. TheShannon entropy, H(X)=E⁡[IX⁡(x)]=log⁡N−DKL(pX(x)∥PU(X)){\displaystyle {\begin{aligned}\mathrm {H} (X)&=\operatorname {E} \left[\operatorname {I} _{X}(x)\right]\\&=\log N-D_{\text{KL}}{\left(p_{X}(x)\parallel P_{U}(X)\right)}\end{aligned}}} is the number of bits which would have to be transmitted to identifyXfromNequally likely possibilities,lessthe relative entropy of the uniform distribution on therandom variatesofX,PU(X){\displaystyle P_{U}(X)}, from the true distributionP(X){\displaystyle P(X)}— i.e.lessthe expected number of bits saved, which would have had to be sent if the value ofXwere coded according to the uniform distributionPU(X){\displaystyle P_{U}(X)}rather than the true distributionP(X){\displaystyle P(X)}. This definition of Shannon entropy forms the basis ofE.T. 
Jaynes's alternative generalization to continuous distributions, thelimiting density of discrete points(as opposed to the usualdifferential entropy), which defines the continuous entropy aslimN→∞HN(X)=log⁡N−∫p(x)log⁡p(x)m(x)dx,{\displaystyle \lim _{N\to \infty }H_{N}(X)=\log N-\int p(x)\log {\frac {p(x)}{m(x)}}\,dx,}which is equivalent to:log⁡(N)−DKL(p(x)||m(x)){\displaystyle \log(N)-D_{\text{KL}}(p(x)||m(x))} Theconditional entropy[34], H(X∣Y)=log⁡N−DKL(P(X,Y)∥PU(X)P(Y))=log⁡N−DKL(P(X,Y)∥P(X)P(Y))−DKL(P(X)∥PU(X))=H(X)−I⁡(X;Y)=log⁡N−EY⁡[DKL(P(X∣Y)∥PU(X))]{\displaystyle {\begin{aligned}\mathrm {H} (X\mid Y)&=\log N-D_{\text{KL}}(P(X,Y)\parallel P_{U}(X)P(Y))\\[5pt]&=\log N-D_{\text{KL}}(P(X,Y)\parallel P(X)P(Y))-D_{\text{KL}}(P(X)\parallel P_{U}(X))\\[5pt]&=\mathrm {H} (X)-\operatorname {I} (X;Y)\\[5pt]&=\log N-\operatorname {E} _{Y}\left[D_{\text{KL}}\left(P\left(X\mid Y\right)\parallel P_{U}(X)\right)\right]\end{aligned}}} is the number of bits which would have to be transmitted to identifyXfromNequally likely possibilities,lessthe relative entropy of the product distributionPU(X)P(Y){\displaystyle P_{U}(X)P(Y)}from the true joint distributionP(X,Y){\displaystyle P(X,Y)}— i.e.lessthe expected number of bits saved which would have had to be sent if the value ofXwere coded according to the uniform distributionPU(X){\displaystyle P_{U}(X)}rather than the conditional distributionP(X|Y){\displaystyle P(X|Y)}ofXgivenY. When we have a set of possible events, coming from the distributionp, we can encode them (with alossless data compression) usingentropy encoding. This compresses the data by replacing each fixed-length input symbol with a corresponding unique, variable-length,prefix-free code(e.g.: the events (A, B, C) with probabilities p = (1/2, 1/4, 1/4) can be encoded as the bits (0, 10, 11)). If we know the distributionpin advance, we can devise an encoding that would be optimal (e.g.: usingHuffman coding). Meaning the messages we encode will have the shortest length on average (assuming the encoded events are sampled fromp), which will be equal toShannon's Entropyofp(denoted asH(p){\displaystyle \mathrm {H} (p)}). However, if we use a different probability distribution (q) when creating the entropy encoding scheme, then a larger number ofbitswill be used (on average) to identify an event from a set of possibilities. This new (larger) number is measured by thecross entropybetweenpandq. Thecross entropybetween twoprobability distributions(pandq) measures the average number ofbitsneeded to identify an event from a set of possibilities, if a coding scheme is used based on a given probability distributionq, rather than the "true" distributionp. The cross entropy for two distributionspandqover the sameprobability spaceis thus defined as follows. H(p,q)=Ep⁡[−log⁡q]=H(p)+DKL(p∥q).{\displaystyle \mathrm {H} (p,q)=\operatorname {E} _{p}[-\log q]=\mathrm {H} (p)+D_{\text{KL}}(p\parallel q).} For explicit derivation of this, see theMotivationsection above. Under this scenario, relative entropies (kl-divergence) can be interpreted as the extra number of bits, on average, that are needed (beyondH(p){\displaystyle \mathrm {H} (p)}) for encoding the events because of usingqfor constructing the encoding scheme instead ofp. InBayesian statistics, relative entropy can be used as a measure of the information gain in moving from aprior distributionto aposterior distribution:p(x)→p(x∣I){\displaystyle p(x)\to p(x\mid I)}. 
If some new factY=y{\displaystyle Y=y}is discovered, it can be used to update the posterior distribution forXfromp(x∣I){\displaystyle p(x\mid I)}to a new posterior distributionp(x∣y,I){\displaystyle p(x\mid y,I)}usingBayes' theorem: p(x∣y,I)=p(y∣x,I)p(x∣I)p(y∣I){\displaystyle p(x\mid y,I)={\frac {p(y\mid x,I)p(x\mid I)}{p(y\mid I)}}} This distribution has a newentropy: H(p(x∣y,I))=−∑xp(x∣y,I)log⁡p(x∣y,I),{\displaystyle \mathrm {H} {\big (}p(x\mid y,I){\big )}=-\sum _{x}p(x\mid y,I)\log p(x\mid y,I),} which may be less than or greater than the original entropyH(p(x∣I)){\displaystyle \mathrm {H} (p(x\mid I))}. However, from the standpoint of the new probability distribution one can estimate that to have used the original code based onp(x∣I){\displaystyle p(x\mid I)}instead of a new code based onp(x∣y,I){\displaystyle p(x\mid y,I)}would have added an expected number of bits: DKL(p(x∣y,I)∥p(x∣I))=∑xp(x∣y,I)log⁡p(x∣y,I)p(x∣I){\displaystyle D_{\text{KL}}{\big (}p(x\mid y,I)\parallel p(x\mid I){\big )}=\sum _{x}p(x\mid y,I)\log {\frac {p(x\mid y,I)}{p(x\mid I)}}} to the message length. This therefore represents the amount of useful information, or information gain, aboutX, that has been learned by discoveringY=y{\displaystyle Y=y}. If a further piece of data,Y2=y2{\displaystyle Y_{2}=y_{2}}, subsequently comes in, the probability distribution forxcan be updated further, to give a new best guessp(x∣y1,y2,I){\displaystyle p(x\mid y_{1},y_{2},I)}. If one reinvestigates the information gain for usingp(x∣y1,I){\displaystyle p(x\mid y_{1},I)}rather thanp(x∣I){\displaystyle p(x\mid I)}, it turns out that it may be either greater or less than previously estimated: ∑xp(x∣y1,y2,I)log⁡p(x∣y1,y2,I)p(x∣I){\displaystyle \sum _{x}p(x\mid y_{1},y_{2},I)\log {\frac {p(x\mid y_{1},y_{2},I)}{p(x\mid I)}}}may be ≤ or > than∑xp(x∣y1,I)log⁡p(x∣y1,I)p(x∣I){\textstyle \sum _{x}p(x\mid y_{1},I)\log {\frac {p(x\mid y_{1},I)}{p(x\mid I)}}} and so the combined information gain doesnotobey the triangle inequality: DKL(p(x∣y1,y2,I)∥p(x∣I)){\displaystyle D_{\text{KL}}{\big (}p(x\mid y_{1},y_{2},I)\parallel p(x\mid I){\big )}}may be <, = or > thanDKL(p(x∣y1,y2,I)∥p(x∣y1,I))+DKL(p(x∣y1,I)∥p(x∣I)){\displaystyle D_{\text{KL}}{\big (}p(x\mid y_{1},y_{2},I)\parallel p(x\mid y_{1},I){\big )}+D_{\text{KL}}{\big (}p(x\mid y_{1},I)\parallel p(x\mid I){\big )}} All one can say is that onaverage, averaging usingp(y2∣y1,x,I){\displaystyle p(y_{2}\mid y_{1},x,I)}, the two sides will average out. A common goal inBayesian experimental designis to maximise the expected relative entropy between the prior and the posterior.[35]When posteriors are approximated to be Gaussian distributions, a design maximising the expected relative entropy is calledBayes d-optimal. Relative entropyDKL(p(x∣H1)∥p(x∣H0)){\textstyle D_{\text{KL}}{\bigl (}p(x\mid H_{1})\parallel p(x\mid H_{0}){\bigr )}}can also be interpreted as the expecteddiscrimination informationforH1{\displaystyle H_{1}}overH0{\displaystyle H_{0}}: the mean information per sample for discriminating in favor of a hypothesisH1{\displaystyle H_{1}}against a hypothesisH0{\displaystyle H_{0}}, when hypothesisH1{\displaystyle H_{1}}is true.[36]Another name for this quantity, given to it byI. J. Good, is the expected weight of evidence forH1{\displaystyle H_{1}}overH0{\displaystyle H_{0}}to be expected from each sample. 
The expected weight of evidence forH1{\displaystyle H_{1}}overH0{\displaystyle H_{0}}isnotthe same as the information gain expected per sample about the probability distributionp(H){\displaystyle p(H)}of the hypotheses, DKL(p(x∣H1)∥p(x∣H0))≠IG=DKL(p(H∣x)∥p(H∣I)).{\displaystyle D_{\text{KL}}(p(x\mid H_{1})\parallel p(x\mid H_{0}))\neq IG=D_{\text{KL}}(p(H\mid x)\parallel p(H\mid I)).} Either of the two quantities can be used as autility functionin Bayesian experimental design, to choose an optimal next question to investigate: but they will in general lead to rather different experimental strategies. On the entropy scale ofinformation gainthere is very little difference between near certainty and absolute certainty—coding according to a near certainty requires hardly any more bits than coding according to an absolute certainty. On the other hand, on thelogitscale implied by weight of evidence, the difference between the two is enormous – infinite perhaps; this might reflect the difference between being almost sure (on a probabilistic level) that, say, theRiemann hypothesisis correct, compared to being certain that it is correct because one has a mathematical proof. These two different scales ofloss functionfor uncertainty arebothuseful, according to how well each reflects the particular circumstances of the problem in question. The idea of relative entropy as discrimination information led Kullback to propose the Principle ofMinimum Discrimination Information(MDI): given new facts, a new distributionfshould be chosen which is as hard to discriminate from the original distributionf0{\displaystyle f_{0}}as possible; so that the new data produces as small an information gainDKL(f∥f0){\displaystyle D_{\text{KL}}(f\parallel f_{0})}as possible. For example, if one had a prior distributionp(x,a){\displaystyle p(x,a)}overxanda, and subsequently learnt the true distribution ofawasu(a){\displaystyle u(a)}, then the relative entropy between the new joint distribution forxanda,q(x∣a)u(a){\displaystyle q(x\mid a)u(a)}, and the earlier prior distribution would be: DKL(q(x∣a)u(a)∥p(x,a))=Eu(a)⁡{DKL(q(x∣a)∥p(x∣a))}+DKL(u(a)∥p(a)),{\displaystyle D_{\text{KL}}(q(x\mid a)u(a)\parallel p(x,a))=\operatorname {E} _{u(a)}\left\{D_{\text{KL}}(q(x\mid a)\parallel p(x\mid a))\right\}+D_{\text{KL}}(u(a)\parallel p(a)),} i.e. the sum of the relative entropy ofp(a){\displaystyle p(a)}the prior distribution forafrom the updated distributionu(a){\displaystyle u(a)}, plus the expected value (using the probability distributionu(a){\displaystyle u(a)}) of the relative entropy of the prior conditional distributionp(x∣a){\displaystyle p(x\mid a)}from the new conditional distributionq(x∣a){\displaystyle q(x\mid a)}. (Note that often the later expected value is called theconditional relative entropy(orconditional Kullback–Leibler divergence) and denoted byDKL(q(x∣a)∥p(x∣a)){\displaystyle D_{\text{KL}}(q(x\mid a)\parallel p(x\mid a))}[3][34]) This is minimized ifq(x∣a)=p(x∣a){\displaystyle q(x\mid a)=p(x\mid a)}over the whole support ofu(a){\displaystyle u(a)}; and we note that this result incorporates Bayes' theorem, if the new distributionu(a){\displaystyle u(a)}is in fact a δ function representing certainty thatahas one particular value. MDI can be seen as an extension ofLaplace'sPrinciple of Insufficient Reason, and thePrinciple of Maximum EntropyofE.T. Jaynes. 
In particular, it is the natural extension of the principle of maximum entropy from discrete to continuous distributions, for which Shannon entropy ceases to be so useful (seedifferential entropy), but the relative entropy continues to be just as relevant. In the engineering literature, MDI is sometimes called thePrinciple of Minimum Cross-Entropy(MCE) orMinxentfor short. Minimising relative entropy frommtopwith respect tomis equivalent to minimizing the cross-entropy ofpandm, since H(p,m)=H(p)+DKL(p∥m),{\displaystyle \mathrm {H} (p,m)=\mathrm {H} (p)+D_{\text{KL}}(p\parallel m),} which is appropriate if one is trying to choose an adequate approximation top. However, this is just as oftennotthe task one is trying to achieve. Instead, just as often it ismthat is some fixed prior reference measure, andpthat one is attempting to optimise by minimisingDKL(p∥m){\displaystyle D_{\text{KL}}(p\parallel m)}subject to some constraint. This has led to some ambiguity in the literature, with some authors attempting to resolve the inconsistency by redefining cross-entropy to beDKL(p∥m){\displaystyle D_{\text{KL}}(p\parallel m)}, rather thanH(p,m){\displaystyle \mathrm {H} (p,m)}[citation needed]. Surprisals[37]add where probabilities multiply. The surprisal for an event of probabilitypis defined ass=−kln⁡p{\displaystyle s=-k\ln p}. Ifkis{1,1/ln⁡2,1.38×10−23}{\displaystyle \left\{1,1/\ln 2,1.38\times 10^{-23}\right\}}then surprisal is in{{\displaystyle \{}nats, bits, orJ/K}{\displaystyle J/K\}}so that, for instance, there areNbits of surprisal for landing all "heads" on a toss ofNcoins. Best-guess states (e.g. for atoms in a gas) are inferred by maximizing theaverage surprisalS(entropy) for a given set of control parameters (like pressurePor volumeV). This constrainedentropy maximization, both classically[38]and quantum mechanically,[39]minimizesGibbsavailability in entropy units[40]A≡−kln⁡Z{\displaystyle A\equiv -k\ln Z}whereZis a constrained multiplicity orpartition function. When temperatureTis fixed, free energy (T×A{\displaystyle T\times A}) is also minimized. Thus ifT,V{\displaystyle T,V}and number of moleculesNare constant, theHelmholtz free energyF≡U−TS{\displaystyle F\equiv U-TS}(whereUis energy andSis entropy) is minimized as a system "equilibrates." IfTandPare held constant (say during processes in your body), theGibbs free energyG=U+PV−TS{\displaystyle G=U+PV-TS}is minimized instead. The change in free energy under these conditions is a measure of availableworkthat might be done in the process. Thus available work for an ideal gas at constant temperatureTo{\displaystyle T_{o}}and pressurePo{\displaystyle P_{o}}isW=ΔG=NkToΘ(V/Vo){\displaystyle W=\Delta G=NkT_{o}\Theta (V/V_{o})}whereVo=NkTo/Po{\displaystyle V_{o}=NkT_{o}/P_{o}}andΘ(x)=x−1−ln⁡x≥0{\displaystyle \Theta (x)=x-1-\ln x\geq 0}(see alsoGibbs inequality). More generally[41]thework availablerelative to some ambient is obtained by multiplying ambient temperatureTo{\displaystyle T_{o}}by relative entropy ornet surprisalΔI≥0,{\displaystyle \Delta I\geq 0,}defined as the average value ofkln⁡(p/po){\displaystyle k\ln(p/p_{o})}wherepo{\displaystyle p_{o}}is the probability of a given state under ambient conditions. 
For instance, the work available in equilibrating a monatomic ideal gas to ambient values ofVo{\displaystyle V_{o}}andTo{\displaystyle T_{o}}is thusW=ToΔI{\displaystyle W=T_{o}\Delta I}, where relative entropy ΔI=Nk[Θ(VVo)+32Θ(TTo)].{\displaystyle \Delta I=Nk\left[\Theta {\left({\frac {V}{V_{o}}}\right)}+{\frac {3}{2}}\Theta {\left({\frac {T}{T_{o}}}\right)}\right].} The resulting contours of constant relative entropy (computed, for example, for a mole of Argon at standard temperature and pressure) put limits on the conversion of hot to cold as in flame-powered air-conditioning or in the unpowered device to convert boiling-water to ice-water discussed here.[42]Thus relative entropy measures thermodynamic availability in bits. Fordensity matricesPandQon aHilbert space, thequantum relative entropyfromQtoPis defined to be DKL(P∥Q)=Tr⁡(P(log⁡P−log⁡Q)).{\displaystyle D_{\text{KL}}(P\parallel Q)=\operatorname {Tr} (P(\log P-\log Q)).} Inquantum information sciencethe minimum ofDKL(P∥Q){\displaystyle D_{\text{KL}}(P\parallel Q)}over all separable statesQcan also be used as a measure ofentanglementin the stateP. Just as relative entropy of "actual from ambient" measures thermodynamic availability, relative entropy of "reality from a model" is also useful even if the only clues we have about reality are some experimental measurements. In the former case relative entropy describesdistance to equilibriumor (when multiplied by ambient temperature) the amount ofavailable work, while in the latter case it tells you about surprises that reality has up its sleeve or, in other words,how much the model has yet to learn. Although this tool for evaluating models against systems that are accessible experimentally may be applied in any field, its application to selecting astatistical modelviaAkaike information criterionis particularly well described in papers[43]and a book[44]by Burnham and Anderson. In a nutshell, the relative entropy of reality from a model may be estimated, to within a constant additive term, by a function of the deviations observed between data and the model's predictions (like themean squared deviation). Estimates of such divergence for models that share the same additive term can in turn be used to select among models. When trying to fit parametrized models to data there are various estimators which attempt to minimize relative entropy, such asmaximum likelihoodandmaximum spacingestimators.[citation needed] Kullback & Leibler (1951)also considered the symmetrized function:[6] DKL(P∥Q)+DKL(Q∥P){\displaystyle D_{\text{KL}}(P\parallel Q)+D_{\text{KL}}(Q\parallel P)} which they referred to as the "divergence", though today the "KL divergence" refers to the asymmetric function (see§ Etymologyfor the evolution of the term). This function is symmetric and nonnegative, and had already been defined and used byHarold Jeffreysin 1948;[7]it is accordingly called theJeffreys divergence. This quantity has sometimes been used forfeature selectioninclassificationproblems, wherePandQare the conditionalpdfsof a feature under two different classes. In the Banking and Finance industries, this quantity is referred to asPopulation Stability Index(PSI), and is used to assess distributional shifts in model features through time. 
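For discrete distributions given as aligned probability vectors, both the relative entropy and the symmetrised Jeffreys form used as the Population Stability Index are simple to compute. The following is a minimal Python sketch, not taken from the article; the bin probabilities, function names, and the "training vs. production" framing are purely illustrative.

```python
import numpy as np

def kl_divergence(p, q):
    """Relative entropy D_KL(p || q) in nats for discrete distributions.

    Assumes p and q are aligned probability vectors with q > 0 wherever p > 0.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0  # terms with p(x) = 0 contribute 0 by convention
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def jeffreys_divergence(p, q):
    """Symmetrised divergence D_KL(p || q) + D_KL(q || p), i.e. the PSI of model monitoring."""
    return kl_divergence(p, q) + kl_divergence(q, p)

# Hypothetical bins of a model feature at training time (p) vs. in production (q).
p = np.array([0.25, 0.50, 0.25])
q = np.array([0.30, 0.45, 0.25])
print(jeffreys_divergence(p, q))  # small value suggests little distributional shift
```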
An alternative is given via theλ{\displaystyle \lambda }-divergence, Dλ(P∥Q)=λDKL(P∥λP+(1−λ)Q)+(1−λ)DKL(Q∥λP+(1−λ)Q),{\displaystyle D_{\lambda }(P\parallel Q)=\lambda D_{\text{KL}}(P\parallel \lambda P+(1-\lambda )Q)+(1-\lambda )D_{\text{KL}}(Q\parallel \lambda P+(1-\lambda )Q),} which can be interpreted as the expected information gain aboutXfrom discovering which probability distributionXis drawn from,PorQ, if they currently have probabilitiesλ{\displaystyle \lambda }and1−λ{\displaystyle 1-\lambda }respectively.[clarification needed][citation needed] The valueλ=0.5{\displaystyle \lambda =0.5}gives theJensen–Shannon divergence, defined by DJS=12DKL(P∥M)+12DKL(Q∥M){\displaystyle D_{\text{JS}}={\tfrac {1}{2}}D_{\text{KL}}(P\parallel M)+{\tfrac {1}{2}}D_{\text{KL}}(Q\parallel M)} whereMis the average of the two distributions, M=12(P+Q).{\displaystyle M={\tfrac {1}{2}}\left(P+Q\right).} We can also interpretDJS{\displaystyle D_{\text{JS}}}as the capacity of a noisy information channel with two inputs giving the output distributionsPandQ. The Jensen–Shannon divergence, like allf-divergences, islocallyproportional to theFisher information metric. It is similar to theHellinger metric(in the sense that it induces the same affine connection on astatistical manifold). Furthermore, the Jensen–Shannon divergence can be generalized using abstract statistical M-mixtures relying on an abstract mean M.[45][46] There are many other important measures ofprobability distance. Some of these are particularly connected with relative entropy. For example: Other notable measures of distance include theHellinger distance,histogram intersection,Chi-squared statistic,quadratic form distance,match distance,Kolmogorov–Smirnov distance, andearth mover's distance.[49] Just asabsoluteentropy serves as theoretical background fordatacompression,relativeentropy serves as theoretical background fordatadifferencing– the absolute entropy of a set of data in this sense being the data required to reconstruct it (minimum compressed size), while the relative entropy of a target set of data, given a source set of data, is the data required to reconstruct the targetgiventhe source (minimum size of apatch).
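Returning to the closed form for two multivariate normal distributions given earlier, the Cholesky-based expression can be implemented directly. The sketch below assumes NumPy and SciPy are available and that both covariance matrices are symmetric positive definite; function and variable names are illustrative, and the final lines only sanity-check the result against the diagonal special case quoted above.

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def kl_mvn(mu0, Sigma0, mu1, Sigma1):
    """D_KL( N(mu0, Sigma0) || N(mu1, Sigma1) ) in nats, via Cholesky factors.

    With L0 L0^T = Sigma0, L1 L1^T = Sigma1, L1 M = L0 and L1 y = mu1 - mu0:
        D_KL = 1/2 * ( sum_ij M_ij^2 - k + |y|^2 + 2 * sum_i ln (L1)_ii / (L0)_ii ).
    """
    k = len(mu0)
    L0 = cholesky(Sigma0, lower=True)
    L1 = cholesky(Sigma1, lower=True)
    M = solve_triangular(L1, L0, lower=True)         # solves L1 M = L0
    y = solve_triangular(L1, mu1 - mu0, lower=True)  # solves L1 y = mu1 - mu0
    return 0.5 * (np.sum(M**2) - k + y @ y
                  + 2.0 * np.sum(np.log(np.diag(L1)) - np.log(np.diag(L0))))

# Sanity check against the diagonal-Gaussian-vs-standard-normal formula above.
mu = np.array([0.5, -0.2])
var = np.array([1.5, 0.7])
print(kl_mvn(mu, np.diag(var), np.zeros(2), np.eye(2)))
print(0.5 * np.sum(var + mu**2 - 1.0 - np.log(var)))  # should agree
```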
https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_distance
Big data ethics, also known simply asdata ethics, refers to systemizing, defending, and recommending concepts of right and wrong conduct in relation todata, in particularpersonal data.[1]Since the dawn of theInternetthe sheer quantity and quality of data has dramatically increased and is continuing to do so exponentially.Big datadescribes this large amount of data that is so voluminous and complex that traditional data processing application software is inadequate to deal with them. Recent innovations in medical research and healthcare, such as high-throughput genome sequencing, high-resolution imaging, electronic medical patient records and a plethora of internet-connected health devices have triggered adata delugethat will reach the exabyte range in the near future. Data ethics is of increasing relevance as the quantity of data increases because of the scale of the impact. Big data ethics are different frominformation ethicsbecause the focus of information ethics is more concerned with issues ofintellectual propertyand concerns relating to librarians, archivists, and information professionals, while big data ethics is more concerned with collectors and disseminators ofstructuredorunstructured datasuch asdata brokers, governments, and large corporations. However, sinceartificial intelligenceormachine learning systemsare regularly built using big data sets, the discussions surrounding data ethics are often intertwined with those in the ethics of artificial intelligence.[2]More recently, issues of big data ethics have also been researched in relation with other areas of technology and science ethics, includingethics in mathematicsandengineering ethics, as many areas of applied mathematics and engineering use increasingly large data sets. Data ethics is concerned with the following principles:[3] Ownership of data involves determining rights and duties over property, such as the ability to exercise individual control over (including limit the sharing of) personal data comprising one'sdigital identity. The question of data ownership arises when someone records observations on an individual person. The observer and the observed both state a claim to the data. Questions also arise as to the responsibilities that the observer and the observed have in relation to each other. These questions have become increasingly relevant with the Internet magnifying the scale and systematization of observing people and their thoughts. The question of personal data ownership relates to questions of corporate ownership and intellectual property.[4] In the European Union, some people argue that theGeneral Data Protection Regulationindicates that individuals own their personal data, although this is contested.[5] Concerns have been raised around how biases can be integrated into algorithm design resulting in systematic oppression[6]whether consciously or unconsciously. These manipulations often stem from biases in the data, the design of the algorithm, or the underlying goals of the organization deploying them. One major cause ofalgorithmic biasis that algorithms learn from historical data, which may perpetuate existing inequities. In many cases, algorithms exhibit reduced accuracy when applied to individuals from marginalized or underrepresented communities. 
A notable example of this is pulse oximetry, which has shown reduced reliability for certain demographic groups due to a lack of sufficient testing or information on these populations.[7]Additionally, many algorithms are designed to maximize specific metrics, such as engagement or profit, without adequately considering ethical implications. For instance, companies like Facebook and Twitter have been criticized for providing anonymity to harassers and for allowing racist content disguised as humor to proliferate, as such content often increases engagement.[8]These challenges are compounded by the fact that many algorithms operate as "black boxes" for proprietary reasons, meaning that the reasoning behind their outputs is not fully understood by users. This opacity makes it more difficult to identify and address algorithmic bias. In terms of governance, big data ethics is concerned with which types of inferences and predictions should be made using big data technologies such as algorithms.[9] Anticipatory governance is the practice of usingpredictive analyticsto assess possible future behaviors.[10]This has ethical implications because it affords the ability to target particular groups and places, which can encourage prejudice and discrimination.[10]For example,predictive policinghighlights certain groups or neighborhoods which should be watched more closely than others, which leads to more sanctions in these areas, and closer surveillance for those who fit the same profiles as those who are sanctioned.[11] The term "control creep" refers to data that has been generated with a particular purpose in mind but which is repurposed.[10]This practice is seen with airline industry data which has been repurposed for profiling and managing security risks at airports.[10] Privacy has been presented as a limitation to data usage which could also be considered unethical.[12]For example, the sharing of healthcare data can shed light on the causes of diseases, the effects of treatments, and can allow for tailored analyses based on individuals' needs.[12]This is of ethical significance in the big data ethics field because while many value privacy, the affordances of data sharing are also quite valuable, although they may contradict one's conception of privacy. Attitudes against data sharing may be based in a perceived loss of control over data and a fear of the exploitation of personal data.[12]However, it is possible to extract the value of data without compromising privacy. Government surveillance of big data has the potential to undermine individual privacy by collecting and storing data on phone calls, internet activity, and geolocation, among other things. For example, the NSA’s collection of metadata exposed in global surveillance disclosures raised concerns about whether privacy was adequately protected, even when the content of communications was not analyzed. The right to privacy is often complicated by legal frameworks that grant governments broad authority over data collection for “national security” purposes. In the United States, the Supreme Court has not recognized a general right to "informational privacy," or control over personal information, though legislators have addressed the issue selectively through specific statutes.[13]From an equity perspective, government surveillance and privacy violations tend to disproportionately harm marginalized communities. 
Historically, activists involved in theCivil rights movementwere frequently targets of government surveillance as they were perceived as subversive elements. Programs such asCOINTELPROexemplified this pattern, involving espionage against civil rights leaders. This pattern persists today, with evidence of ongoing surveillance of activists and organizations.[14] Additionally, the use of algorithms by governments to act on data obtained without consent introduces significant concerns about algorithmic bias. Predictive policing tools, for example, utilize historical crime data to predict “risky” areas or individuals, but these tools have been shown to disproportionately target minority communities.[15]One such tool, theCOMPASsystem, is a notable example; Black defendants are twice as likely to be misclassified as high risk compared to white defendants, and Hispanic defendants are similarly more likely to be classified as high risk than their white counterparts.[16]Marginalized communities often lack the resources or education needed to challenge these privacy violations or protect their data from nonconsensual use. Furthermore, there is a psychological toll, known as the “chilling effect,” where the constant awareness of being surveilled disproportionately impacts communities already facing societal discrimination. This effect can deter individuals from engaging in legal but potentially "risky" activities, such as protesting or seeking legal assistance, further limiting their freedoms and exacerbating existing inequities. Some scholars such as Jonathan H. King and Neil M. Richards are redefining the traditional meaning of privacy, while others question whether privacy still exists.[9]In a 2014 article for theWake Forest Law Review, King and Richards argue that privacy in the digital age can be understood not in terms of secrecy but in terms of regulations which govern and control the use of personal information.[9]In the European Union, the right to be forgotten entitles EU countries to force the removal or de-linking of personal data from databases at an individual's request if the information is deemed irrelevant or out of date.[17]According to Andrew Hoskins, this law demonstrates the moral panic of EU members over the perceived loss of privacy and the ability to govern personal data in the digital age.[18]In the United States, citizens have the right to delete voluntarily submitted data.[17]This is very different from the right to be forgotten because much of the data produced using big data technologies and platforms are not voluntarily submitted.[17]While traditional notions of privacy are under scrutiny, different legal frameworks related to privacy in the EU and US demonstrate how countries are grappling with these concerns in the context of big data. For example, the "right to be forgotten" in the EU and the right to delete voluntarily submitted data in the US illustrate the varying approaches to privacy regulation in the digital age.[19] The difference in value between the services facilitated by tech companies and the equity value of these tech companies is the difference between the exchange rate offered to the citizen and the "market rate" of the value of their data. 
Scientifically there are many holes in this rudimentary calculation: the financial figures of tax-evading companies are unreliable, either revenue or profit could be more appropriate, how a user is defined, a large number of individuals are needed for the data to be valuable, possible tiered prices for different people in different countries, etc. Although these calculations are crude, they serve to make the monetary value of data more tangible. Another approach is to find the data trading rates in the black market. RSA publishes a yearly cybersecurity shopping list that takes this approach.[20] This raises the economic question of whether free tech services in exchange for personal data is a worthwhile implicit exchange for the consumer. In the personal data trading model, rather than companies selling data, an owner can sell their personal data and keep the profit.[21] The idea of open data is centered around the argument that data should be freely available and should not have restrictions that would prohibit its use, such as copyright laws. As of 2014[update]many governments had begun to move towards publishing open datasets for the purpose of transparency and accountability.[22]This movement has gained traction via "open data activists" who have called for governments to make datasets available to allow citizens to themselves extract meaning from the data and perform checks and balances themselves.[22][9]King and Richards have argued that this call for transparency includes a tension between openness and secrecy.[9] Activists and scholars have also argued that because this open-sourced model of data evaluation is based on voluntary participation, the availability of open datasets has a democratizing effect on a society, allowing any citizen to participate.[23]To some, the availability of certain types of data is seen as a right and an essential part of a citizen's agency.[23] Open Knowledge Foundation(OKF) lists several dataset types it argues should be provided by governments for them to be truly open.[24]OKF has a tool called the Global Open Data Index (GODI), a crowd-sourced survey for measuring the openness of governments,[24]based on itsOpen Definition. GODI aims to be a tool for providing feedback to governments about the quality of their open datasets.[25] Willingness to share data varies from person to person. Preliminary studies have been conducted into the determinants of the willingness to share data. For example, some have suggested that baby boomers are less willing to share data than millennials.[26] The fallout fromEdward Snowden’s disclosuresin 2013 significantly reshaped public discourse around data collection and the privacy principle of big data ethics. The case revealed that governments controlled and possessed far more information about civilians than previously understood, violating the principle of ownership, particularly in ways that disproportionately affected disadvantaged communities. For instance, activists were frequently targeted, including members of movements such as Occupy Wall Street and Black Lives Matter.[14]This revelation prompted governments and organizations to revisit data collection and storage practices to better protect individual privacy while also addressing national security concerns. The case also exposed widespread online surveillance of other countries and their citizens, raising important questions about data sovereignty and ownership. 
In response, some countries, such as Brazil and Germany, took action to push back against these practices.[14]However, many developing nations lacked the technological independence necessary or were too generally dependent on the nations surveilling them to resist such surveillance, leaving them at a disadvantage in addressing these concerns. TheCambridge Analytica scandalhighlighted significant ethical concerns in the use of big data. Data was harvested from approximately 87 million Facebook users without their explicit consent and used to display targeted political advertisements. This violated the currency principle of big data ethics, as individuals were initially unaware of how their data was being exploited. The scandal revealed how data collected for one purpose could be repurposed for entirely different uses, bypassing users' consent and emphasizing the need for explicit and informed consent in data usage.[27]Additionally, the algorithms used for ad delivery were opaque, challenging the principles of transaction transparency and openness. In some cases, the political ads spread misinformation,[27]often disproportionately targeting disadvantaged groups and contributing to knowledge gaps. Marginalized communities and individuals with lower digital literacy were disproportionately affected as they were less likely to recognize or act against exploitation. In contrast, users with more resources or digital literacy could better safeguard their data, exacerbating existing power imbalances.
https://en.wikipedia.org/wiki/Big_data_ethics
In the mathematical discipline ofgraph theory, arainbow matchingin anedge-colored graphis amatchingin which all the edges have distinct colors. Given an edge-colored graphG= (V,E), a rainbow matchingMinGis a set of pairwise non-adjacent edges, that is, no two edges share a common vertex, such that all the edges in the set have distinct colors. A maximum rainbow matching is a rainbow matching that contains the largest possible number of edges. Rainbow matchings are of particular interest given their connection to transversals ofLatin squares. Denote byKn,nthecomplete bipartite graphonn+nvertices. Every propern-edge coloringofKn,ncorresponds to a Latin square of ordern. A rainbow matching then corresponds to atransversalof the Latin square, meaning a selection ofnpositions, one in each row and each column, containing distinct entries. This connection between transversals of Latin squares and rainbow matchings inKn,nhas inspired additional interest in the study of rainbow matchings intriangle-free graphs.[1] An edge-coloring is calledproperif each edge has a single color, and each two edges of the same color have no vertex in common. A proper edge-coloring does not guarantee the existence of a perfect rainbow matching. For example, consider the graphK2,2: the complete bipartite graph on 2+2 vertices. Suppose the edges(x1,y1)and(x2,y2)are colored green, and the edges(x1,y2)and(x2,y1)are colored blue. This is a proper coloring, but there are only two perfect matchings, and each of them is colored by a single color. This raises the question: when is a large rainbow matching guaranteed to exist? Much of the research on this question was published using the terminology ofLatin transversals in Latin squares. Translated into the rainbow matching terminology: A more general conjecture of Stein is that a rainbow matching of sizen– 1exists not only for a proper edge-coloring, but for any coloring in which each color appears on exactlynedges.[2] Some weaker versions of these conjectures have been proved: Wang asked if there is a functionf(d)such that every properly edge-colored graphGwith minimumdegreedand at leastf(d)vertices must have a rainbow matching of sized.[9]Obviously at least2dvertices are necessary, but how many are sufficient? Suppose that each edge may have several different colors, while each two edges of the same color must still have no vertex in common. In other words, each color is amatching. How many colors are needed in order to guarantee the existence of a rainbow matching? Drisko[12]studied this question using the terminology ofLatin rectangles. He proved that, for anyn≤k, in the complete bipartite graphKn,k, any family of2n– 1matchings (=colors) of sizenhas a perfect rainbow matching (of sizen). He applied this theorem to questions aboutgroup actionsanddifference sets. Drisko also showed that2n– 1matchings may be necessary: consider a family of2n– 2matchings, of whichn– 1are{ (x1,y1), (x2,y2), ..., (xn,yn)}and the othern– 1are{(x1,y2), (x2,y3), …, (xn,y1) }.Then the largest rainbow matching is of sizen– 1(e.g. take one edge from each of the firstn– 1matchings). Alon[13]showed that Drisko's theorem implies an older result[14]inadditive number theory. Aharoni and Berger[15]generalized Drisko's theorem to any bipartite graph, namely: any family of2n– 1matchings of sizenin a bipartite graph has a rainbow matching of sizen. Aharoni, Kotlar and Ziv[16]showed that Drisko's extremal example is unique in any bipartite graph. In general graphs,2n– 1matchings are no longer sufficient. 
Whennis even, one can add to Drisko's example the matching{ (x1,x2), (y1,y2), (x2,x3), (y2,y3), … }and get a family of2n– 1matchings without any rainbow matching. Aharoni, Berger, Chudnovsky, Howard and Seymour[17]proved that, in a general graph,3n– 2matchings (=colors) are always sufficient. It is not known whether this is tight: currently the best lower bound for evennis2nand for oddnit is2n– 1.[18] Afractional matchingis a set of edges with a non-negative weight assigned to each edge, such that the sum of weights adjacent to each vertex is at most 1. The size of a fractional matching is the sum of weights of all edges. It is a generalization of a matching, and can be used to generalize both the colors and the rainbow matching. It is known that, in a bipartite graph, the maximum fractional matching size equals the maximum matching size. Therefore, the theorem of Aharoni and Berger[15]is equivalent to the following. Letnbe any positive integer. Given any family of2n– 1fractional-matchings (=colors) of sizenin a bipartite graph, there exists a rainbow-fractional-matching of sizen. Aharoni, Holzman and Jiang extend this theorem to arbitrary graphs as follows. Letnbe any positive integer or half-integer. Any family of2nfractional-matchings (=colors) of size at leastnin an arbitrary graph has a rainbow-fractional-matching of sizen.[18]: Thm.1.5The2nis the smallest possible for fractional matchings in arbitrary graphs: the extremal case is constructed using an odd-length cycle. For the case of perfect fractional matchings, both the above theorems can be derived from thecolorful Caratheodory theorem. For every edgeeinE, let1ebe a vector of size|V|, where for each vertexvinV, elementvin1eequals 1 ifeis adjacent tov, and 0 otherwise (so each vector1ehas 2 ones and|V|-2 zeros). Every fractional matching corresponds to aconical combinationof edges, in which each element is at most 1. A conical combination in which each element isexactly1 corresponds to aperfectfractional matching. In other words, a collectionFof edges admits a perfect fractional matching, if and only if1v(the vector of|V|ones) is contained in theconical hullof the vectors1eforeinF. Consider a graph with2nvertices, and suppose there are2nsubsets of edges, each of which admits a perfect fractional matching (of sizen). This means that the vector1vis in the conical hull of each of these subsets. By thecolorful Caratheodory theorem, there exists a selection of2nedges, one from each subset, such that their conical hull contains1v. This corresponds to a rainbow perfect fractional matching. The expression2nis the dimension of the vectors1e- each vector has2nelements. Now, suppose that the graph is bipartite. In a bipartite graph, there is a constraint on the vectors1e: the sum of elements corresponding to each part of the graph must be 1. Therefore, the vectors1elive in a(2n– 1)-dimensional space. Therefore, the same argument as above holds when there are only2n– 1subsets of edges. Anr-uniformhypergraphis a set of hyperedges each of which contains exactlyrvertices (so a 2-uniform hypergraph is just a graph without self-loops). Aharoni, Holzman and Jiang extend their theorem to such hypergraphs as follows. Letnbe any positive rational number. Any family of⌈r⋅n⌉fractional-matchings (=colors) of size at leastnin anr-uniform hypergraph has a rainbow-fractional-matching of sizen.[18]: Thm.1.6The⌈r⋅n⌉is the smallest possible whennis an integer. 
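The fractional matchings discussed above can be computed in practice by linear programming: maximise the total edge weight subject to every vertex carrying at most unit weight. Below is a minimal Python sketch, assuming SciPy is available; the graph, edge list and function names are illustrative only. The 5-cycle example also illustrates why odd cycles are the extremal case mentioned above: its maximum matching has size 2, while its maximum fractional matching has size 5/2, whereas in a bipartite graph the two sizes coincide.

```python
import numpy as np
from scipy.optimize import linprog

def max_fractional_matching(num_vertices, edges):
    """Maximum fractional matching of a graph via linear programming.

    Maximise the sum of edge weights subject to: for every vertex, the weights
    of its incident edges sum to at most 1, and every weight is non-negative.
    """
    A = np.zeros((num_vertices, len(edges)))  # vertex-edge incidence matrix
    for j, (u, v) in enumerate(edges):
        A[u, j] = 1
        A[v, j] = 1
    res = linprog(c=-np.ones(len(edges)),     # linprog minimises, so negate
                  A_ub=A, b_ub=np.ones(num_vertices),
                  bounds=[(0, 1)] * len(edges), method="highs")
    return -res.fun, dict(zip(edges, res.x))

# A 5-cycle: maximum matching size 2, maximum fractional matching size 5/2
# (weight 1/2 on every edge).
size, weights = max_fractional_matching(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)])
print(size, weights)
```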
Anr-partite hypergraphis anr-uniform hypergraph in which the vertices are partitioned intordisjoint sets and each hyperedge contains exactly one vertex of each set (so a 2-partite hypergraph is just a bipartite graph). Letnbe any positive integer. Any family ofrn–r+ 1fractional-matchings (=colors) of size at leastnin anr-partite hypergraph has a rainbow-fractional-matching of sizen.[18]: Thm.1.7Thern–r+ 1is the smallest possible: the extremal case is whenn=r– 1is aprime power, and all colors are edges of the truncatedprojective planeof ordern. So each color hasn2=rn–r+ 1edges and a fractional matching of sizen, but any fractional matching of that size requires allrn–r+ 1edges.[19] For the case of perfect fractional matchings, both the above theorems can be derived from thecolorful Caratheodory theoremin the previous section. For a generalr-uniform hypergraph (admitting a perfect matching of sizen), the vectors1elive in a(rn)-dimensional space. For anr-uniformr-partite hypergraph, ther-partiteness constraints imply that the vectors1elive in a(rn–r+ 1)-dimensional space. The above results hold only for rainbowfractionalmatchings. In contrast, the case of rainbowintegralmatchings inr-uniform hypergraphs is much less understood. The number of required matchings for a rainbow matching of sizengrows at least exponentially withn. GareyandJohnsonhave shown that computing a maximum rainbow matching isNP-completeeven for edge-coloredbipartite graphs.[20] Rainbow matchings have been applied for solvingpacking problems.[21]
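Since computing a maximum rainbow matching is NP-complete, exhaustive search is only practical for very small inputs, but it makes the definition concrete. The following plain-Python sketch (illustrative names, not an established algorithm or API) returns a largest set of pairwise vertex-disjoint edges with pairwise distinct colors, and reproduces the K2,2 example from above, whose largest rainbow matching has size 1.

```python
from itertools import combinations

def max_rainbow_matching(edges):
    """Maximum rainbow matching of a small edge-colored graph by exhaustive search.

    `edges` is a list of (u, v, color) triples.  Because the problem is
    NP-complete even for edge-colored bipartite graphs, brute force is only
    reasonable for tiny instances.
    """
    for k in range(len(edges), 0, -1):          # try the largest sizes first
        for subset in combinations(edges, k):
            vertices = [x for (u, v, _) in subset for x in (u, v)]
            colors = [c for (_, _, c) in subset]
            if len(set(vertices)) == 2 * k and len(set(colors)) == k:
                return list(subset)             # first hit at size k is maximum
    return []

# The K_{2,2} example above: a proper 2-coloring whose two perfect matchings
# are each monochromatic, so the largest rainbow matching has size 1.
edges = [("x1", "y1", "green"), ("x2", "y2", "green"),
         ("x1", "y2", "blue"), ("x2", "y1", "blue")]
print(max_rainbow_matching(edges))
```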
https://en.wikipedia.org/wiki/Rainbow_matching#hypergraphs
Inmathematicsandtheoretical physics, atensorisantisymmetricoralternating on(orwith respect to)an index subsetif it alternatessign(+/−) when any two indices of the subset are interchanged.[1][2]The index subset must generally either be allcovariantor allcontravariant. For example,Tijk…=−Tjik…=Tjki…=−Tkji…=Tkij…=−Tikj…{\displaystyle T_{ijk\dots }=-T_{jik\dots }=T_{jki\dots }=-T_{kji\dots }=T_{kij\dots }=-T_{ikj\dots }}holds when the tensor is antisymmetric with respect to its first three indices. If a tensor changes sign under exchange ofeachpair of its indices, then the tensor iscompletely(ortotally)antisymmetric. A completely antisymmetric covarianttensor fieldoforderk{\displaystyle k}may be referred to as adifferentialk{\displaystyle k}-form, and a completely antisymmetric contravariant tensor field may be referred to as ak{\displaystyle k}-vectorfield. A tensorAthat is antisymmetric on indicesi{\displaystyle i}andj{\displaystyle j}has the property that thecontractionwith a tensorBthat is symmetric on indicesi{\displaystyle i}andj{\displaystyle j}is identically 0. For a general tensorUwith componentsUijk…{\displaystyle U_{ijk\dots }}and a pair of indicesi{\displaystyle i}andj,{\displaystyle j,}Uhas symmetric and antisymmetric parts defined as: Similar definitions can be given for other pairs of indices. As the term "part" suggests, a tensor is the sum of its symmetric part and antisymmetric part for a given pair of indices, as inUijk…=U(ij)k…+U[ij]k….{\displaystyle U_{ijk\dots }=U_{(ij)k\dots }+U_{[ij]k\dots }.} A shorthand notation for anti-symmetrization is denoted by a pair of square brackets. For example, in arbitrary dimensions, for an order 2 covariant tensorM,M[ab]=12!(Mab−Mba),{\displaystyle M_{[ab]}={\frac {1}{2!}}(M_{ab}-M_{ba}),}and for an order 3 covariant tensorT,T[abc]=13!(Tabc−Tacb+Tbca−Tbac+Tcab−Tcba).{\displaystyle T_{[abc]}={\frac {1}{3!}}(T_{abc}-T_{acb}+T_{bca}-T_{bac}+T_{cab}-T_{cba}).} In any 2 and 3 dimensions, these can be written asM[ab]=12!δabcdMcd,T[abc]=13!δabcdefTdef.{\displaystyle {\begin{aligned}M_{[ab]}&={\frac {1}{2!}}\,\delta _{ab}^{cd}M_{cd},\\[2pt]T_{[abc]}&={\frac {1}{3!}}\,\delta _{abc}^{def}T_{def}.\end{aligned}}}whereδab…cd…{\displaystyle \delta _{ab\dots }^{cd\dots }}is thegeneralized Kronecker delta, and theEinstein summation conventionis in use. More generally, irrespective of the number of dimensions, antisymmetrization overp{\displaystyle p}indices may be expressed asT[a1…ap]=1p!δa1…apb1…bpTb1…bp.{\displaystyle T_{[a_{1}\dots a_{p}]}={\frac {1}{p!}}\delta _{a_{1}\dots a_{p}}^{b_{1}\dots b_{p}}T_{b_{1}\dots b_{p}}.} In general, every tensor of rank 2 can be decomposed into a symmetric and anti-symmetric pair as:Tij=12(Tij+Tji)+12(Tij−Tji).{\displaystyle T_{ij}={\frac {1}{2}}(T_{ij}+T_{ji})+{\frac {1}{2}}(T_{ij}-T_{ji}).} This decomposition is not in general true for tensors of rank 3 or more, which have more complex symmetries. Totally antisymmetric tensors include:
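As a numerical aside (not part of the article), the decomposition and antisymmetrization formulas above are easy to check with NumPy; the random tensors below are purely illustrative, and only the algebraic identities matter.

```python
# Sketch: numerical check of the (anti)symmetrization formulas.
import numpy as np

rng = np.random.default_rng(0)

# Order-2 tensor: U_ij = U_(ij) + U_[ij]
U = rng.normal(size=(3, 3))
U_sym, U_anti = 0.5 * (U + U.T), 0.5 * (U - U.T)
assert np.allclose(U, U_sym + U_anti)

# Contracting an antisymmetric tensor with a symmetric one gives (numerically) zero.
B = rng.normal(size=(3, 3))
print(np.einsum("ij,ij->", U_anti, 0.5 * (B + B.T)))   # ~1e-16

# Order-3 antisymmetrization: signed average over all six index permutations.
T = rng.normal(size=(3, 3, 3))
T_anti = (T - T.transpose(0, 2, 1) + T.transpose(2, 0, 1)
          - T.transpose(1, 0, 2) + T.transpose(1, 2, 0)
          - T.transpose(2, 1, 0)) / 6.0

# Swapping any two indices flips the sign of the antisymmetrized tensor.
assert np.allclose(T_anti, -np.swapaxes(T_anti, 0, 1))
assert np.allclose(T_anti, -np.swapaxes(T_anti, 1, 2))
```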
https://en.wikipedia.org/wiki/Antisymmetric_tensor
Multimedia information retrieval(MMIRorMIR) is a research discipline ofcomputer sciencethat aims at extracting semantic information frommultimediadata sources.[1][failed verification]Data sources include directly perceivable media such asaudio,imageandvideo, indirectly perceivable sources such astext, semantic descriptions,[2]biosignalsas well as not perceivable sources such as bioinformation, stock prices, etc. The methodology of MMIR can be organized in three groups: Feature extraction is motivated by the sheer size of multimedia objects as well as their redundancy and, possibly, noisiness.[1]: 2[failed verification]Generally, two possible goals can be achieved by feature extraction: Multimedia Information Retrieval implies that multiple channels are employed for the understanding of media content.[5]Each of this channels is described by media-specific feature transformations. The resulting descriptions have to be merged to one description per media object. Merging can be performed by simple concatenation if the descriptions are of fixed size. Variable-sized descriptions – as they frequently occur in motion description – have to be normalized to a fixed length first. Frequently used methods for description filtering includefactor analysis(e.g. by PCA), singular value decomposition (e.g. as latent semantic indexing in text retrieval) and the extraction and testing of statistical moments. Advanced concepts such as theKalman filterare used for merging of descriptions. Generally, all forms of machine learning can be employed for the categorization of multimedia descriptions[1]: 125[failed verification]though some methods are more frequently used in one area than another. For example,hidden Markov modelsare state-of-the-art inspeech recognition, whiledynamic time warping– a semantically related method – is state-of-the-art in gene sequence alignment. The list of applicable classifiers includes the following: The selection of the best classifier for a given problem (test set with descriptions and class labels, so-calledground truth) can be performed automatically, for example, using theWekaData Miner. Models of Multimedia Information Retrieval Spoken Language Audio Retrieval Spoken Language Audio Retrieval focuses on audio content containing spoken words. It involves the transcription of spoken content into text using Automatic Speech Recognition (ASR) and indexing the transcriptions for text-based search. Key Features: Techniques: ASR for transcription and text indexing. Query Types: Text-based queries. Applications: Searching podcast transcripts. Analyzing customer service call logs. Finding specific phrases in meeting recordings. Challenges: Errors in ASR can reduce retrieval accuracy. Multilingual and accent variability requires robust systems. Non-Speech Audio Retrieval Non-Speech Audio Retrieval handles audio content without spoken words, such as music, environmental sounds, or sound effects. This model relies on extracting audio features like pitch, rhythm, and timbre to identify relevant audio. Key Features: Techniques: Acoustic feature extraction (e.g., spectrograms, MFCCs). Query Types: Audio samples or textual descriptions. Applications: Music recommendation systems. Environmental sound detection (e.g., gunshots, animal calls). Sound effect retrieval in media production. Challenges: Difficulty in bridging the semantic gap between user queries and low-level audio features. Efficient indexing of large datasets. 
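A minimal sketch of the merging and filtering steps described above is given below; the feature dimensions and values are invented, and PCA stands in for the factor-analysis step mentioned in the text.

```python
# Sketch: merging fixed-size media descriptions by concatenation, then
# filtering them with PCA. Feature dimensions and data are made up.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)
n_objects = 200
audio_desc  = rng.normal(size=(n_objects, 20))   # e.g. audio features per object
visual_desc = rng.normal(size=(n_objects, 50))   # e.g. visual features per object

# Merging by simple concatenation (both descriptions have a fixed size).
merged = np.hstack([audio_desc, visual_desc])     # shape (200, 70)

# Description filtering, here via PCA as one form of factor analysis.
reduced = PCA(n_components=10).fit_transform(merged)
print(reduced.shape)                              # (200, 10)
```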
Graph Retrieval Graph Retrieval retrieves information represented as graphs, which consist of nodes (entities) and edges (relationships). It is widely used in social networks, knowledge graphs, and bioinformatics. Key Features: Techniques: Graph matching, adjacency list/matrix storage, and graph databases (e.g., Neo4j). Query Types: Subgraphs, patterns, or textual queries. Applications: Social network analysis. Searching knowledge graphs. Molecular structure retrieval. Challenges: Computationally intensive subgraph matching. Scalability for large, complex graphs. Imagery Retrieval Imagery Retrieval retrieves images based on user input, such as textual descriptions or visual samples. It leverages both low-level features and semantic analysis for search. Key Features: Techniques: Content-Based Image Retrieval (CBIR), visual feature extraction, semantic analysis. Query Types: Text, sketches, or example images. Applications: Stock image search. E-commerce product matching. Medical imaging analysis. Challenges: Bridging the semantic gap between user queries and image content. Efficient indexing of large-scale image datasets. Video Retrieval Video Retrieval is the process of finding specific video content based on user queries. It involves analyzing both the visual and temporal features of videos. Key Features: Techniques: Keyframe extraction, motion pattern analysis, temporal indexing. Query Types: Textual descriptions, sample clips, or temporal queries. Applications: Streaming service recommendations. Surveillance footage analysis. Sports analytics. Challenges: Managing the large file sizes of video content. Efficient analysis of temporal sequences and multimodal features. Comparison of Retrieval Models Model Data Type Query Types Applications Spoken Language Audio Speech recordings Text queries Podcasts, meeting logs, call centers Non-Speech Audio Music, sound effects Audio samples or text Music apps, environmental sounds Graph Retrieval Graph structures Subgraphs, patterns Knowledge graphs, bioinformatics Imagery Retrieval Images Text, sketches, or images E-commerce, medical imaging Video Retrieval Videos (visual + temporal) Text, clips, or time queries Surveillance, sports analysis Conclusion Multimedia Information Retrieval plays a crucial role in organizing and accessing vast multimedia data repositories. The variety of retrieval models ensures that users can effectively interact with and extract insights from complex multimedia datasets. Future advancements in artificial intelligence and machine learning are expected to improve the accuracy and scalability of MIR systems. MMIR provides an overview over methods employed in the areas of information retrieval.[6][7]Methods of one area are adapted and employed on other types of media. Multimedia content is merged before the classification is performed. MMIR methods are, therefore, usually reused from other areas such as: TheInternational Journal of Multimedia Information Retrieval[8]documents the development of MMIR as a research discipline that is independent of these areas. See alsoHandbook of Multimedia Information Retrieval[9]for a complete overview over this research discipline.
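Across these retrieval models, the final ranking step is often a nearest-neighbour search over extracted descriptions. The following toy sketch (invented feature vectors, with cosine similarity as one common ranking function) ranks stored items against a query description.

```python
# Sketch: ranking stored multimedia descriptions against a query by cosine
# similarity. Data and dimensionality are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
database = rng.normal(size=(1000, 128))   # one 128-d description per stored item
query    = rng.normal(size=128)           # description extracted from the query

db_norm = database / np.linalg.norm(database, axis=1, keepdims=True)
q_norm  = query / np.linalg.norm(query)
scores  = db_norm @ q_norm                # cosine similarity to every item
top10   = np.argsort(scores)[::-1][:10]   # indices of the ten best matches
print(top10)
```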
https://en.wikipedia.org/wiki/Multimedia_information_retrieval
NumPy(pronounced/ˈnʌmpaɪ/NUM-py) is alibraryfor thePython programming language, adding support for large, multi-dimensionalarraysandmatrices, along with a large collection ofhigh-levelmathematicalfunctionsto operate on these arrays.[3]The predecessor of NumPy, Numeric, was originally created byJim Huguninwith contributions from several other developers. In 2005,Travis Oliphantcreated NumPy by incorporating features of the competing Numarray into Numeric, with extensive modifications. NumPy isopen-source softwareand has many contributors. NumPy is fiscally sponsored byNumFOCUS.[4] The Python programming language was not originally designed for numerical computing, but attracted the attention of the scientific and engineering community early on. In 1995 thespecial interest group(SIG)matrix-sigwas founded with the aim of defining anarraycomputing package; among its members was Python designer and maintainerGuido van Rossum, who extendedPython's syntax(in particular the indexing syntax[5]) to makearray computingeasier.[6] An implementation of a matrix package was completed by Jim Fulton, then generalized[further explanation needed]by Jim Hugunin and calledNumeric[6](also variously known as the "Numerical Python extensions" or "NumPy"), with influences from theAPLfamily of languages, Basis,MATLAB,FORTRAN,SandS+, and others.[7][8]Hugunin, a graduate student at theMassachusetts Institute of Technology(MIT),[8]: 10joined theCorporation for National Research Initiatives(CNRI) in 1997 to work onJPython,[6]leaving Paul Dubois ofLawrence Livermore National Laboratory(LLNL) to take over as maintainer.[8]: 10Other early contributors include David Ascher, Konrad Hinsen andTravis Oliphant.[8]: 10 A new package calledNumarraywas written as a more flexible replacement for Numeric.[9]Like Numeric, it too is now deprecated.[10][11]Numarray had faster operations for large arrays, but was slower than Numeric on small ones,[12]so for a time both packages were used in parallel for different use cases. The last version of Numeric (v24.2) was released on 11 November 2005, while the last version of numarray (v1.5.2) was released on 24 August 2006.[13] There was a desire to get Numeric into the Python standard library, but Guido van Rossum decided that the code was not maintainable in its state then.[when?][14] In early 2005, NumPy developer Travis Oliphant wanted to unify the community around a single array package and ported Numarray's features to Numeric, releasing the result as NumPy 1.0 in 2006.[9]This new project was part ofSciPy. To avoid installing the large SciPy package just to get an array object, this new package was separated and called NumPy. Support for Python 3 was added in 2011 with NumPy version 1.5.0.[15] In 2011,PyPystarted development on an implementation of the NumPy API for PyPy.[16]As of 2023, it is not yet fully compatible with NumPy.[17] NumPy targets theCPythonreference implementationof Python, which is a non-optimizingbytecodeinterpreter.Mathematical algorithmswritten for this version of Python often run much slower thancompiledequivalents due to the absence of compiler optimization. NumPy addresses the slowness problem partly by providing multidimensional arrays and functions and operators that operate efficiently on arrays; using these requires rewriting some code, mostlyinner loops, using NumPy. 
Using NumPy in Python gives functionality comparable to MATLAB since they are both interpreted,[18] and they both allow the user to write fast programs as long as most operations work on arrays or matrices instead of scalars. In comparison, MATLAB boasts a large number of additional toolboxes, notably Simulink, whereas NumPy is intrinsically integrated with Python, a more modern and complete programming language. Moreover, complementary Python packages are available; SciPy is a library that adds more MATLAB-like functionality and Matplotlib is a plotting package that provides MATLAB-like plotting functionality. Although MATLAB can perform sparse matrix operations, NumPy alone cannot perform such operations and requires the use of the scipy.sparse library. Internally, both MATLAB and NumPy rely on BLAS and LAPACK for efficient linear algebra computations. Python bindings of the widely used computer vision library OpenCV utilize NumPy arrays to store and operate on data. Since images with multiple channels are simply represented as three-dimensional arrays, indexing, slicing or masking with other arrays are very efficient ways to access specific pixels of an image. The NumPy array as a universal data structure in OpenCV for images, extracted feature points, filter kernels and much more vastly simplifies the programming workflow and debugging.[citation needed] Importantly, many NumPy operations release the global interpreter lock, which allows for multithreaded processing.[19] NumPy also provides a C API, which allows Python code to interoperate with external libraries written in low-level languages.[20] The core functionality of NumPy is its "ndarray" (n-dimensional array) data structure. These arrays are strided views on memory.[9] In contrast to Python's built-in list data structure, these arrays are homogeneously typed: all elements of a single array must be of the same type. Such arrays can also be views into memory buffers allocated by C/C++, Python, and Fortran extensions to the CPython interpreter without the need to copy data around, giving a degree of compatibility with existing numerical libraries. This functionality is exploited by the SciPy package, which wraps a number of such libraries (notably BLAS and LAPACK). NumPy has built-in support for memory-mapped ndarrays.[9] Inserting or appending entries to an array is not as trivially possible as it is with Python's lists. The np.pad(...) routine to extend arrays actually creates new arrays of the desired shape and padding values, copies the given array into the new one and returns it. NumPy's np.concatenate([a1, a2]) operation does not actually link the two arrays but returns a new one, filled with the entries from both given arrays in sequence. Reshaping the dimensionality of an array with np.reshape(...) is only possible as long as the number of elements in the array does not change. These circumstances originate from the fact that NumPy's arrays must be views on contiguous memory buffers. Algorithms that are not expressible as a vectorized operation will typically run slowly because they must be implemented in "pure Python", while vectorization may increase memory complexity of some operations from constant to linear, because temporary arrays must be created that are as large as the inputs. Runtime compilation of numerical code has been implemented by several groups to avoid these problems; open source solutions that interoperate with NumPy include numexpr[21] and Numba.[22] Cython and Pythran are static-compiling alternatives to these.
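The copy-on-concatenate and reshaping constraints mentioned above can be seen in a short session (illustrative values only):

```python
# Sketch: NumPy arrays are homogeneously typed, and concatenation/reshaping
# return new arrays rather than linking the inputs.
import numpy as np

a1 = np.array([1, 2, 3])
a2 = np.array([4, 5, 6])

joined = np.concatenate([a1, a2])    # new array [1 2 3 4 5 6]; a1 and a2 unchanged
a1[0] = 99
print(joined)                        # still [1 2 3 4 5 6] - no link back to a1

m = np.arange(12).reshape(3, 4)      # allowed: 12 elements before and after
# m.reshape(5, 5) would raise ValueError: the element count may not change

print(np.array([1, 2, 3]).dtype)     # one dtype per array (platform-dependent integer size)
```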
Many modern large-scale scientific computing applications have requirements that exceed the capabilities of NumPy arrays. For example, NumPy arrays are usually loaded into a computer's memory, which might have insufficient capacity for the analysis of large datasets. Further, NumPy operations are executed on a single CPU. However, many linear algebra operations can be accelerated by executing them on clusters of CPUs or on specialized hardware, such as GPUs and TPUs, which many deep learning applications rely on. As a result, several alternative array implementations have arisen in the scientific Python ecosystem in recent years, such as Dask for distributed arrays and TensorFlow or JAX[23] for computations on GPUs. Because of NumPy's popularity, these often implement a subset of its API or mimic it, so that users can change their array implementation with minimal changes to their code required.[3] A library named CuPy,[24] accelerated by Nvidia's CUDA framework, has also shown potential for faster computing, being a 'drop-in replacement' of NumPy.[25] Code listing captions from the source (listings not reproduced): "Iterative Python algorithm and vectorized NumPy version"; "Quickly wrap native code for faster scripts".[26][27][28]
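A minimal reconstruction of the kind of comparison the first caption refers to (an invented example, not the article's original listing) contrasts an explicit Python loop with the vectorized NumPy expression:

```python
# Sketch: the same computation written as an iterative Python loop and as a
# vectorized NumPy expression. The polynomial is chosen for illustration only.
import numpy as np

x = np.linspace(0.0, 10.0, 100_000)

def iterative(values):
    """Pure-Python version: explicit inner loop over elements."""
    out = []
    for v in values:
        out.append(v * v + 2.0 * v + 1.0)
    return out

def vectorized(values):
    """NumPy version: the loop runs in compiled code."""
    return values * values + 2.0 * values + 1.0

assert np.allclose(iterative(x), vectorized(x))   # same result, very different speed
```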
https://en.wikipedia.org/wiki/Numpy
Instatistics,Poisson regressionis ageneralized linear modelform ofregression analysisused to modelcount dataandcontingency tables.[1]Poisson regression assumes the response variableYhas aPoisson distribution, and assumes thelogarithmof itsexpected valuecan be modeled by a linear combination of unknownparameters. A Poisson regression model is sometimes known as alog-linear model, especially when used to model contingency tables. Negative binomial regressionis a popular generalization of Poisson regression because it loosens the highly restrictive assumption that the variance is equal to the mean made by the Poisson model. The traditional negative binomial regression model is based on the Poisson-gamma mixture distribution. This model is popular because it models the Poisson heterogeneity with a gamma distribution. Poisson regression models aregeneralized linear modelswith the logarithm as the (canonical)link function, and thePoisson distributionfunction as the assumed probability distribution of the response. Ifx∈Rn{\displaystyle \mathbf {x} \in \mathbb {R} ^{n}}is a vector ofindependent variables, then the model takes the form whereα∈R{\displaystyle \alpha \in \mathbb {R} }andβ∈Rn{\displaystyle \mathbf {\beta } \in \mathbb {R} ^{n}}. Sometimes this is written more compactly as wherex{\displaystyle \mathbf {x} }is now an (n+ 1)-dimensional vector consisting ofnindependent variables concatenated to the number one. Hereθ{\displaystyle \theta }is simplyβ{\displaystyle \beta }concatenated toα{\displaystyle \alpha }. Thus, when given a Poisson regression modelθ{\displaystyle \theta }and an input vectorx{\displaystyle \mathbf {x} }, the predicted mean of the associated Poisson distribution is given by IfYi{\displaystyle Y_{i}}areindependentobservations with corresponding valuesxi{\displaystyle \mathbf {x} _{i}}of the predictor variables, thenθ{\displaystyle \theta }can be estimated bymaximum likelihood. The maximum-likelihood estimates lack aclosed-form expressionand must be found by numerical methods. The probability surface for maximum-likelihood Poisson regression is always concave, making Newton–Raphson or other gradient-based methods appropriate estimation techniques. Suppose we have a model with a single predictor, that is,n=1{\displaystyle n=1}: Suppose we compute the predicted values at point(Y2,x2){\displaystyle (Y_{2},x_{2})}and(Y1,x1){\displaystyle (Y_{1},x_{1})}: By subtracting the first from the second: Suppose now thatx2=x1+1{\displaystyle x_{2}=x_{1}+1}. We obtain: So the coefficient of the model is to be interpreted as the increase in the logarithm of the count of the outcome variable when the independent variable increases by 1. By applying the rules of logarithms: That is, when the independent variable increases by 1, the outcome variable is multiplied by the exponentiated coefficient. The exponentiated coefficient is also called theincidence ratio. Often, the object of interest is the average partial effect or average marginal effect∂E(Y|x)∂x{\displaystyle {\frac {\partial E(Y|x)}{\partial x}}}, which is interpreted as the change in the outcomeY{\displaystyle Y}for a one unit change in the independent variablex{\displaystyle x}. The average partial effect in the Poisson model for a continuousx{\displaystyle x}can be shown to be:[2] This can be estimated using the coefficient estimates from the Poisson modelθ^=(α^,β^){\displaystyle {\hat {\theta }}=({\hat {\alpha }},{\hat {\beta }})}with the observed values ofx{\displaystyle \mathbb {x} }. 
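As a concrete illustration of the model and of exponentiated coefficients as incidence rate ratios (synthetic data; statsmodels is one of several libraries providing a Poisson GLM, and the data-generating values 0.5 and 0.3 are arbitrary), one might fit:

```python
# Sketch: Poisson regression on synthetic data with statsmodels.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = rng.poisson(np.exp(0.5 + 0.3 * x))        # log of the mean is linear in x

X = sm.add_constant(x)                        # column of ones for the intercept alpha
result = sm.GLM(y, X, family=sm.families.Poisson()).fit()

print(result.params)          # estimates of (alpha, beta), roughly (0.5, 0.3)
print(np.exp(result.params))  # exponentiated coefficients: multiplicative effect per unit of x
```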
Given a set of parametersθand an input vectorx, the mean of the predictedPoisson distribution, as stated above, is given by and thus, the Poisson distribution'sprobability mass functionis given by Now suppose we are given a data set consisting ofmvectorsxi∈Rn+1,i=1,…,m{\displaystyle x_{i}\in \mathbb {R} ^{n+1},\,i=1,\ldots ,m}, along with a set ofmvaluesy1,…,ym∈N{\displaystyle y_{1},\ldots ,y_{m}\in \mathbb {N} }. Then, for a given set of parametersθ, the probability of attaining this particular set of data is given by By the method ofmaximum likelihood, we wish to find the set of parametersθthat makes this probability as large as possible. To do this, the equation is first rewritten as alikelihood functionin terms ofθ: Note that the expression on theright hand sidehas not actually changed. A formula in this form is typically difficult to work with; instead, one uses thelog-likelihood: Notice that the parametersθonly appear in the first two terms of each term in the summation. Therefore, given that we are only interested in finding the best value forθwe may drop theyi! and simply write To find a maximum, we need to solve an equation∂ℓ(θ∣X,Y)∂θ=0{\displaystyle {\frac {\partial \ell (\theta \mid X,Y)}{\partial \theta }}=0}which has no closed-form solution. However, the negative log-likelihood,−ℓ(θ∣X,Y){\displaystyle -\ell (\theta \mid X,Y)}, is a convex function, and so standardconvex optimizationtechniques such asgradient descentcan be applied to find the optimal value ofθ. Poisson regression may be appropriate when the dependent variable is a count, for instance ofeventssuch as the arrival of a telephone call at a call centre.[3]The events must be independent in the sense that the arrival of one call will not make another more or less likely, but the probability per unit time of events is understood to be related to covariates such as time of day. Poisson regression may also be appropriate for rate data, where the rate is a count of events divided by some measure of that unit'sexposure(a particular unit of observation).[4]For example, biologists may count the number of tree species in a forest: events would be tree observations, exposure would be unit area, and rate would be the number of species per unit area. Demographers may model death rates in geographic areas as the count of deaths divided by person−years. More generally, event rates can be calculated as events per unit time, which allows the observation window to vary for each unit. In these examples, exposure is respectively unit area, person−years and unit time. In Poisson regression this is handled as anoffset. If the rate is count/exposure, multiplying both sides of the equation by exposure moves it to the right side of the equation. When both sides of the equation are then logged, the final model contains log(exposure) as a term that is added to the regression coefficients. This logged variable, log(exposure), is called the offset variable and enters on the right-hand side of the equation with a parameter estimate (for log(exposure)) constrained to 1. which implies Offset in the case of aGLMinRcan be achieved using theoffset()function: A characteristic of thePoisson distributionis that its mean is equal to its variance. In certain circumstances, it will be found that the observedvarianceis greater than the mean; this is known asoverdispersionand indicates that the model is not appropriate. A common reason is the omission of relevant explanatory variables, or dependent observations. 
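The maximum-likelihood computation described above has no closed form but is a smooth convex problem, so any standard optimizer works. A minimal sketch (synthetic data; scipy's BFGS stands in for "standard convex optimization techniques", and the log y_i! constant is dropped as in the derivation):

```python
# Sketch: Poisson regression by minimizing the negative log-likelihood directly.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
x = rng.normal(size=500)
X = np.column_stack([np.ones_like(x), x])          # design matrix with intercept
y = rng.poisson(np.exp(0.5 + 0.3 * x))

def neg_log_likelihood(theta):
    eta = X @ theta
    # log L = sum( y_i * theta'x_i - exp(theta'x_i) ) up to the dropped log(y_i!) constant
    return -(y @ eta - np.exp(eta).sum())

theta_hat = minimize(neg_log_likelihood, x0=np.zeros(2), method="BFGS").x
print(theta_hat)                                    # close to (0.5, 0.3)
```

For the exposure/offset formulation discussed above, statsmodels' GLM also accepts an offset array (for example offset=np.log(exposure)), which plays the same role as the R offset() call mentioned in the text; the omitted R snippet itself is not reproduced here.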
Under some circumstances, the problem of overdispersion can be solved by usingquasi-likelihoodestimation or anegative binomial distributioninstead.[5][6] Ver Hoef and Boveng described the difference between quasi-Poisson (also called overdispersion with quasi-likelihood) and negative binomial (equivalent to gamma-Poisson) as follows: IfE(Y) =μ, the quasi-Poisson model assumes var(Y) =θμwhile the gamma-Poisson assumes var(Y) =μ(1 +κμ), whereθis the quasi-Poisson overdispersion parameter, andκis the shape parameter of thenegative binomial distribution. For both models, parameters are estimated usingiteratively reweighted least squares. For quasi-Poisson, the weights areμ/θ. For negative binomial, the weights areμ/(1 +κμ). With largeμand substantial extra-Poisson variation, the negative binomial weights are capped at 1/κ. Ver Hoef and Boveng discussed an example where they selected between the two by plotting mean squared residuals vs. the mean.[7] Another common problem with Poisson regression is excess zeros: if there are two processes at work, one determining whether there are zero events or any events, and a Poisson process determining how many events there are, there will be more zeros than a Poisson regression would predict. An example would be the distribution of cigarettes smoked in an hour by members of a group where some individuals are non-smokers. Othergeneralized linear modelssuch as thenegative binomialmodel orzero-inflated modelmay function better in these cases. On the contrary, underdispersion may pose an issue for parameter estimation.[8] Poisson regression creates proportional hazards models, one class ofsurvival analysis: seeproportional hazards modelsfor descriptions of Cox models. When estimating the parameters for Poisson regression, one typically tries to find values forθthat maximize the likelihood of an expression of the form wheremis the number of examples in the data set, andp(yi;eθ′xi){\displaystyle p(y_{i};e^{\theta 'x_{i}})}is theprobability mass functionof thePoisson distributionwith the mean set toeθ′xi{\displaystyle e^{\theta 'x_{i}}}. Regularization can be added to this optimization problem by instead maximizing[9] for some positive constantλ{\displaystyle \lambda }. This technique, similar toridge regression, can reduceoverfitting.
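The penalized formulation at the end of this section corresponds closely to what scikit-learn's PoissonRegressor implements (an L2 penalty on the coefficients, analogous to ridge regression); a minimal sketch with synthetic data:

```python
# Sketch: L2-regularized Poisson regression with scikit-learn's PoissonRegressor.
# Data are synthetic; alpha plays the role of the penalty constant lambda.
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 3))
y = rng.poisson(np.exp(0.5 + X @ np.array([0.3, 0.0, -0.2])))

model = PoissonRegressor(alpha=1.0)   # larger alpha means stronger shrinkage
model.fit(X, y)
print(model.intercept_, model.coef_)  # penalized estimates, shrunk toward zero
```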
https://en.wikipedia.org/wiki/Negative_binomial_regression
CRYPTREC is the Cryptography Research and Evaluation Committees set up by the Japanese Government to evaluate and recommend cryptographic techniques for government and industrial use. It is comparable in many respects to the European Union's NESSIE project and to the Advanced Encryption Standard process run by the National Institute of Standards and Technology in the U.S. There is some overlap, and some conflict, between the NESSIE selections and the CRYPTREC draft recommendations. Both efforts include some of the best cryptographers in the world;[citation needed] therefore, conflicts in their selections and recommendations should be examined with care. For instance, CRYPTREC recommends several 64-bit block ciphers while NESSIE selected none, but CRYPTREC was obliged by its terms of reference to take into account existing standards and practices, while NESSIE was not. Similar differences in terms of reference account for CRYPTREC recommending at least one stream cipher, RC4, while the NESSIE report specifically said that it was notable that they had not selected any of those considered. RC4 is widely used in the SSL/TLS protocols; nevertheless, CRYPTREC recommended that it only be used with 128-bit keys. Essentially the same consideration led to CRYPTREC's inclusion of 160-bit message digest algorithms, despite their suggestion that they be avoided in new system designs. Also, CRYPTREC was unusually careful to examine variants and modifications of the techniques, or at least to discuss its care in doing so; this resulted in particularly detailed recommendations regarding them. CRYPTREC includes members from Japanese academia, industry, and government. It was started in May 2000 by combining efforts from several agencies who were investigating methods and techniques for implementing 'e-Government' in Japan. Presently, it is sponsored by several Japanese government ministries and agencies. It is also the organization that provides technical evaluation and recommendations concerning regulations that implement Japanese laws. Examples include the Electronic Signatures and Certification Services (Law 102 of FY2000, taking effect as of April 2001), the Basic Law on the Formulation of an Advanced Information and Telecommunications Network Society of 2000 (Law 144 of FY2000), and the Public Individual Certification Law of December 2002. Furthermore, CRYPTREC has responsibilities with regard to the Japanese contribution to the ISO/IEC JTC 1/SC27 standardization effort. In the first release in 2003,[1] many Japanese ciphers were selected for the "e-Government Recommended Ciphers List": CIPHERUNICORN-E (NEC), Hierocrypt-L1 (Toshiba), and MISTY1 (Mitsubishi Electric) as 64-bit block ciphers; Camellia (Nippon Telegraph and Telephone, Mitsubishi Electric), CIPHERUNICORN-A (NEC), Hierocrypt-3 (Toshiba), and SC2000 (Fujitsu) as 128-bit block ciphers; and finally MUGI and MULTI-S01 (Hitachi) as stream ciphers. In the revised release of 2013,[2] the list was divided into three: "e-Government Recommended Ciphers List", "Candidate Recommended Ciphers List", and "Monitored Ciphers List". Most of the Japanese ciphers listed in the previous list (except for Camellia) have moved from the "Recommended Ciphers List" to the "Candidate Recommended Ciphers List". There were several new proposals, such as CLEFIA (Sony) as a 128-bit block cipher as well as KCipher-2 (KDDI) and Enocoro-128v2 (Hitachi) as stream ciphers. However, only KCipher-2 has been listed on the "e-Government Recommended Ciphers List".
The reason why most Japanese ciphers have not been selected as "Recommended Ciphers" is not that these ciphers are necessarily unsafe, but that they are not widely used in commercial products, open-source projects, governmental systems, or international standards. There is the possibility that ciphers listed on the "Candidate Recommended Ciphers List" will be moved to the "e-Government Recommended Ciphers List" when they become more widely used. In addition, 128-bit RC4 and SHA-1 are listed on the "Monitored Ciphers List". These are considered unsafe and are permitted only to retain compatibility with old systems. Since the revision in 2013 there have been several updates, such as the addition of ChaCha20-Poly1305, EdDSA and SHA-3, the move of Triple DES to the Monitored Ciphers List, and the deletion of RC4. As of March 2023[update]
https://en.wikipedia.org/wiki/CRYPTREC
TheHurst exponentis used as a measure oflong-term memoryoftime series. It relates to theautocorrelationsof the time series, and the rate at which these decrease as the lag between pairs of values increases. Studies involving the Hurst exponent were originally developed inhydrologyfor the practical matter of determining optimum dam sizing for theNile river's volatile rain and drought conditions that had been observed over a long period of time.[1][2]The name "Hurst exponent", or "Hurst coefficient", derives fromHarold Edwin Hurst(1880–1978), who was the lead researcher in these studies; the use of the standard notationHfor the coefficient also relates to his name. Infractal geometry, thegeneralized Hurst exponenthas been denoted byHorHqin honor of both Harold Edwin Hurst andLudwig Otto Hölder(1859–1937) byBenoît Mandelbrot(1924–2010).[3]His directly related tofractal dimension,D, and is a measure of a data series' "mild" or "wild" randomness.[4] The Hurst exponent is referred to as the "index of dependence" or "index of long-range dependence". It quantifies the relative tendency of a time series either to regress strongly to the mean or to cluster in a direction.[5]A valueHin the range 0.5–1 indicates a time series with long-term positive autocorrelation, meaning that the decay in autocorrelation is slower than exponential, following apower law; for the series it means that a high value tends to be followed by another high value and that future excursions to more high values do occur. A value in the range 0 – 0.5 indicates a time series with long-term switching between high and low values in adjacent pairs, meaning that a single high value will probably be followed by a low value and that the value after that will tend to be high, with this tendency to switch between high and low values lasting a long time into the future, also following a power law. A value ofH=0.5 indicatesshort-memory, with (absolute) autocorrelations decaying exponentially quickly to zero. The Hurst exponent,H, is defined in terms of the asymptotic behaviour of therescaled rangeas a function of the time span of a time series as follows;[6][7] E[R(n)S(n)]=CnHasn→∞,{\displaystyle \mathbb {E} \left[{\frac {R(n)}{S(n)}}\right]=Cn^{H}{\text{ as }}n\to \infty \,,}where For self-similar time series,His directly related tofractal dimension,D, where 1 <D< 2, such thatD= 2 -H. The values of the Hurst exponent vary between 0 and 1, with higher values indicating a smoother trend, less volatility, and less roughness.[8] For more general time series or multi-dimensional process, the Hurst exponent and fractal dimension can be chosen independently, as the Hurst exponent represents structure over asymptotically longer periods, while fractal dimension represents structure over asymptotically shorter periods.[9] A number of estimators of long-range dependence have been proposed in the literature. The oldest and best-known is the so-calledrescaled range(R/S) analysis popularized by Mandelbrot and Wallis[3][10]and based on previous hydrological findings of Hurst.[1]Alternatives includeDFA, Periodogram regression,[11]aggregated variances,[12]local Whittle's estimator,[13]wavelet analysis,[14][15]both in thetime domainandfrequency domain. To estimate the Hurst exponent, one must first estimate the dependence of therescaled rangeon the time spannof observation.[7]A time series of full lengthNis divided into a number of nonoverlapping shorter time series of lengthn, wherentakes valuesN,N/2,N/4, ... 
(in the convenient case thatNis a power of 2). The average rescaled range is then calculated for each value ofn. For each such time series of lengthn{\displaystyle n},X=X1,X2,…,Xn{\displaystyle X=X_{1},X_{2},\dots ,X_{n}\,}, the rescaled range is calculated as follows:[6][7] The Hurst exponent is estimated by fitting thepower lawE[R(n)/S(n)]=CnH{\displaystyle \mathbb {E} [R(n)/S(n)]=Cn^{H}}to the data. This can be done by plottinglog⁡[R(n)/S(n)]{\displaystyle \log[R(n)/S(n)]}as a function oflog⁡n{\displaystyle \log n}, and fitting a straight line; the slope of the line givesH{\displaystyle H}. A more principled approach would be to fit the power law in a maximum-likelihood fashion.[16]Such a graph is called a box plot. However, this approach is known to produce biased estimates of the power-law exponent.[clarification needed]For smalln{\displaystyle n}there is a significant deviation from the 0.5 slope.[clarification needed]Anis and Lloyd[17]estimated the theoretical (i.e., for white noise)[clarification needed]values of the R/S statistic to be: E[R(n)/S(n)]={Γ(n−12)πΓ(n2)∑i=1n−1n−ii,forn≤3401nπ2∑i=1n−1n−ii,forn>340{\displaystyle \mathbb {E} [R(n)/S(n)]={\begin{cases}{\frac {\Gamma ({\frac {n-1}{2}})}{{\sqrt {\pi }}\Gamma ({\frac {n}{2}})}}\sum \limits _{i=1}^{n-1}{\sqrt {\frac {n-i}{i}}},&{\text{for }}n\leq 340\\{\frac {1}{\sqrt {n{\frac {\pi }{2}}}}}\sum \limits _{i=1}^{n-1}{\sqrt {\frac {n-i}{i}}},&{\text{for }}n>340\end{cases}}} whereΓ{\displaystyle \Gamma }is theEuler gamma function.[clarification needed]The Anis-Lloyd corrected R/S Hurst exponent[clarification needed]is calculated as 0.5 plus the slope ofR(n)/S(n)−E[R(n)/S(n)]{\displaystyle R(n)/S(n)-\mathbb {E} [R(n)/S(n)]}. No asymptotic distribution theory has been derived for most of the Hurst exponent estimators so far. However, Weron[18]usedbootstrappingto obtain approximate functional forms for confidence intervals of the two most popular methods, i.e., for the Anis-Lloyd[17]corrected R/S analysis: and forDFA: HereM=log2⁡N{\displaystyle M=\log _{2}N}andN{\displaystyle N}is the series length. In both cases only subseries of lengthn>50{\displaystyle n>50}were considered for estimating the Hurst exponent; subseries of smaller length lead to a high variance of the R/S estimates. The basic Hurst exponent can be related to the expected size of changes, as a function of the lag between observations, as measured by E(|Xt+τ−Xt|2). For the generalized form of the coefficient, the exponent here is replaced by a more general term, denoted byq. There are a variety of techniques that exist for estimatingH, however assessing the accuracy of the estimation can be a complicated issue. Mathematically, in one technique, the Hurst exponent can be estimated such that:[19][20]Hq=H(q),{\displaystyle H_{q}=H(q),}for a time seriesg(t),t=1,2,…{\displaystyle g(t),t=1,2,\dots }may be defined by the scaling properties of itsstructurefunctionsSq{\displaystyle S_{q}}(τ{\displaystyle \tau }):Sq=⟨|g(t+τ)−g(t)|q⟩t∼τqH(q),{\displaystyle S_{q}=\left\langle \left|g(t+\tau )-g(t)\right|^{q}\right\rangle _{t}\sim \tau ^{qH(q)},}whereq>0{\displaystyle q>0},τ{\displaystyle \tau }is the time lag and averaging is over the time windowt≫τ,{\displaystyle t\gg \tau ,}usually the largest time scale of the system. 
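The rescaled-range procedure described above translates almost directly into code. The sketch below is an illustration only (white-noise input, arbitrary series length and chunk sizes); as the text notes, the uncorrected estimate is biased upward for small n, so a value somewhat above 0.5 is expected here.

```python
# Sketch: estimating the Hurst exponent by rescaled-range (R/S) analysis.
import numpy as np

def rescaled_range(chunk):
    """R/S of one chunk: range of the mean-adjusted cumulative sums over the std."""
    deviations = chunk - chunk.mean()
    z = np.cumsum(deviations)
    return (z.max() - z.min()) / chunk.std()

def hurst_rs(series, min_chunk=16):
    N = len(series)
    ns, rs = [], []
    n = N
    while n >= min_chunk:
        chunks = series[: (N // n) * n].reshape(-1, n)
        ns.append(n)
        rs.append(np.mean([rescaled_range(c) for c in chunks]))
        n //= 2                       # n takes values N, N/2, N/4, ...
    slope, _ = np.polyfit(np.log(ns), np.log(rs), 1)
    return slope                      # fitted exponent H

rng = np.random.default_rng(0)
print(hurst_rs(rng.normal(size=2**13)))   # roughly 0.5-0.6 for white noise
```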
Practically, in nature, there is no limit to time, and thusHis non-deterministic as it may only be estimated based on the observed data; e.g., the most dramatic daily move upwards ever seen in a stock market index can always be exceeded during some subsequent day.[21] In the above mathematical estimation technique, the functionH(q)contains information about averaged generalized volatilities at scaleτ{\displaystyle \tau }(onlyq= 1, 2are used to define the volatility). In particular, theH1exponent indicates persistent (H1>1⁄2) or antipersistent (H1<1⁄2) behavior of the trend. For the BRW (brown noise,1/f2{\displaystyle 1/f^{2}}) one getsHq=12,{\displaystyle H_{q}={\frac {1}{2}},}and forpink noise(1/f{\displaystyle 1/f})Hq=0.{\displaystyle H_{q}=0.} The Hurst exponent forwhite noiseis dimension dependent,[22]and for 1D and 2D it isHq1D=−12,Hq2D=−1.{\displaystyle H_{q}^{1D}=-{\frac {1}{2}},\quad H_{q}^{2D}=-1.} For the popularLévy stable processesandtruncated Lévy processeswith parameter α it has been found that Hq=q/α,{\displaystyle H_{q}=q/\alpha ,}forq<α{\displaystyle q<\alpha }, andHq=1{\displaystyle H_{q}=1}forq≥α{\displaystyle q\geq \alpha }.Multifractal detrended fluctuation analysis[23]is one method to estimateH(q){\displaystyle H(q)}from non-stationary time series. WhenH(q){\displaystyle H(q)}is a non-linear function of q the time series is amultifractal system. In the above definition two separate requirements are mixed together as if they would be one.[24]Here are the two independent requirements: (i)stationarity of the increments,x(t+T) −x(t) =x(T) −x(0)in distribution. This is the condition that yields longtime autocorrelations. (ii)Self-similarityof the stochastic process then yields variance scaling, but is not needed for longtime memory. E.g., bothMarkov processes(i.e., memory-free processes) andfractional Brownian motionscale at the level of 1-point densities (simple averages), but neither scales at the level of pair correlations or, correspondingly, the 2-point probability density.[clarification needed] An efficient market requires amartingalecondition, and unless the variance is linear in the time this produces nonstationary increments,x(t+T) −x(t) ≠x(T) −x(0). Martingales are Markovian at the level of pair correlations, meaning that pair correlations cannot be used to beat a martingale market. Stationary increments with nonlinear variance, on the other hand, induce the longtime pair memory offractional Brownian motionthat would make the market beatable at the level of pair correlations. Such a market would necessarily be far from "efficient". An analysis of economic time series by means of the Hurst exponent usingrescaled rangeandDetrended fluctuation analysisis conducted by econophysicist A.F. Bariviera.[25]This paper studies the time varying character ofLong-range dependencyand, thus of informational efficiency. Hurst exponent has also been applied to the investigation oflong-range dependencyinDNA,[26]and photonicband gapmaterials.[27]
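The structure-function definition of H(q) can likewise be estimated numerically. A minimal sketch follows (an ordinary Brownian path as input, for which H(q) ≈ 1/2 is expected; the lag range is an arbitrary choice):

```python
# Sketch: generalized Hurst exponent H(q) from structure functions
# S_q(tau) = <|g(t+tau) - g(t)|^q> ~ tau^(q*H(q)).
import numpy as np

def generalized_hurst(g, q=2, lags=range(2, 50)):
    s_q = [np.mean(np.abs(g[lag:] - g[:-lag]) ** q) for lag in lags]
    slope, _ = np.polyfit(np.log(list(lags)), np.log(s_q), 1)
    return slope / q                        # H(q) is the scaling exponent divided by q

rng = np.random.default_rng(1)
brownian = np.cumsum(rng.normal(size=20_000))    # ordinary Brownian path
print(generalized_hurst(brownian, q=1), generalized_hurst(brownian, q=2))   # both ~0.5
```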
https://en.wikipedia.org/wiki/Hurst_exponent
Ahybrid wordorhybridismis awordthatetymologicallyderives from at least two languages. Such words are a type ofmacaronic language. The most common form of hybrid word inEnglishcombinesLatinandGreekparts. Since manyprefixesandsuffixesin English are of Latin or Greeketymology, it is straightforward to add a prefix or suffix from one language to an English word that comes from a different language, thus creating a hybrid word.[citation needed] Hybridisms were formerly often considered to bebarbarisms.[1][2] Modern Hebrewabounds with non-Semiticderivational affixes, which are applied to words of both Semitic and non-Semitic descent. The following hybrid words consist of a Hebrew-descent word and a non-Semitic descent suffix:[15] The following Modern Hebrew hybrid words have an international prefix: Some hybrid words consist of both a non-Hebrew word and a non-Hebrew suffix of different origins: Some hybrid words consist of a non-Hebrew word and a Hebrew suffix: Modern Hebrew also has a productive derogatory prefixalshm-, which results in an 'echoic expressive'. For example,um shmum(או״ם־שמו״ם‎), literally 'United Nations shm-United Nations', was a pejorative description by Israel's first Prime Minister,David Ben-Gurion, of theUnited Nations, called in Modern Hebrewumot meukhadot(אומות מאוחדות‎) and abbreviatedum(או״ם‎). Thus, when a Hebrew speaker would like to express their impatience with or disdain for philosophy, they can sayfilosófya-shmilosófya(פילוסופיה־שמילוסופיה‎). Modern Hebrewshm-is traceable back toYiddish, and is found in English as well asshm-reduplication. This is comparable to the Turkic initial m-segment conveying a sense of 'and so on' as in Turkishdergi mergiokumuyor, literally 'magazine "shmagazine" read:NEGATIVE:PRESENT:3rd.person.singular', i.e. '(He) doesn't read magazine, journals or anything like that'.[15] InFilipino, hybrid words are calledsiyokoy(literally "merman"). For example, the wordconcernado("concerned") has "concern-" come from English and "-ado" come from Spanish. InJapanese, hybrid words are common inkango(words formed fromkanjicharacters) in which some of the characters may be pronounced using Chinese pronunciations (on'yomi,from Chinese morphemes), and others in the same word are pronounced using Japanese pronunciations (kun'yomi,from Japanese morphemes). These words are known asjūbako(重箱) oryutō(湯桶), which are themselves examples of this kind of compound (they areautological words): the first character ofjūbakois read usingon'yomi, the secondkun'yomi, while it is the other way around withyutō. Other examples include 場所basho"place" (kun-on), 金色kin'iro"golden" (on-kun) and 合気道aikidō"the martial artAikido" (kun-on-on). Some hybrid words are neitherjūbakonoryutō(縦中横tatechūyoko(kun-on-kun)). Foreign words may also be hybridized with Chinese or Japanese readings in slang words such as 高層ビルkōsōbiru"high-rise building" (on-on-katakana) and 飯テロmeshitero"food terrorism" (kun-katakana).
https://en.wikipedia.org/wiki/Hybrid_word
Incryptography, anaccumulatoris aone waymembershiphash function. It allows users to certify that potential candidates are a member of a certainsetwithout revealing the individual members of the set. This concept was formally introduced by Josh Benaloh and Michael de Mare in 1993.[1][2] There are several formal definitions which have been proposed in the literature. This section lists them by proposer, in roughly chronological order.[2] Benaloh and de Mare define a one-way hash function as a family of functionshℓ:Xℓ×Yℓ→Zℓ{\displaystyle h_{\ell }:X_{\ell }\times Y_{\ell }\to Z_{\ell }}which satisfy the following three properties:[1][2] (With the first two properties, one recovers the normal definition of a cryptographic hash function.) From such a function, one defines the "accumulated hash" of a set{y1,…,ym}{\displaystyle \{y_{1},\dots ,y_{m}\}}and starting valuex{\displaystyle x}w.r.t. a valuez{\displaystyle z}to beh(h(⋯h(h(x,y1),y2),…,ym−1),ym){\displaystyle h(h(\cdots h(h(x,y_{1}),y_{2}),\dots ,y_{m-1}),y_{m})}. The result, does not depend on the order of elementsy1,y2,...,yn{\displaystyle y_{1},y_{2},...,y_{n}}becauseh{\displaystyle h}is quasi-commutative.[1][2] Ify1,y2,...,yn{\displaystyle y_{1},y_{2},...,y_{n}}belong to some users of a cryptosystem, then everyone can compute the accumulated valuez.{\displaystyle z.}Also, the user ofyi{\displaystyle y_{i}}can compute the partial accumulated valuezi{\displaystyle z_{i}}of(y1,...,yi−1,yi+1,...,yn){\displaystyle (y_{1},...,y_{i-1},y_{i+1},...,y_{n})}. Then,h(zi,yi)=z.{\displaystyle h(z_{i},y_{i})=z.}So thei−{\displaystyle i-}user can provide the pair(zi,yi){\displaystyle (z_{i},y_{i})}to any other part, in order to authenticateyi{\displaystyle y_{i}}. The basic functionality of a quasi-commutative hash function is not immediate from the definition. To fix this, Barić and Pfitzmann defined a slightly more general definition, which is the notion of anaccumulator schemeas consisting of the following components:[2][3] It is relatively easy to see that one can define an accumulator scheme from any quasi-commutative hash function, using the technique shown above.[2] One observes that, for many applications, the set of accumulated values will change many times. Naïvely, one could completely redo the accumulator calculation every time; however, this may be inefficient, especially if our set is very large and the change is very small. To formalize this intuition, Camenish and Lysyanskaya defined adynamic accumulator schemeto consist of the 4 components of an ordinary accumulator scheme, plus three more:[2][4] Fazio and Nicolosi note that since Add, Del, and Upd can be simulated by rerunning Eval and Wit, this definition does not add any fundamentally new functionality.[2] One example ismultiplicationover largeprime numbers. This is a cryptographic accumulator, since it takes superpolynomial time tofactora composite number (at least according to conjecture), but it takes only a small amount of time (polynomial in size) to divide a prime into an integer to check if it is one of the factors and/or to factor it out. New members may be added or subtracted to the set of factors by multiplying or factoring out the number respectively. 
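A toy demonstration of the accumulated hash and the witness pair (z_i, y_i) described above can be built with modular exponentiation as the quasi-commutative function (the RSA-style choice discussed later in this article). All numbers below are tiny illustrative values, far too small to be secure.

```python
# Sketch: quasi-commutative accumulator with h(x, y) = x^y mod n.
n = 2011 * 2557                 # toy stand-in for a large rigid modulus
x = 7                           # public starting value
members = [13, 17, 19, 23]      # the y_i values held by the users

def accumulate(start, values, modulus=n):
    z = start
    for y in values:
        z = pow(z, y, modulus)  # h(z, y) = z^y mod n, quasi-commutative in y
    return z

z = accumulate(x, members)

# The order of accumulation does not matter (quasi-commutativity).
assert z == accumulate(x, list(reversed(members)))

# Witness for member y_i: accumulate everything except y_i, then check h(z_i, y_i) = z.
y_i = members[2]
z_i = accumulate(x, [y for y in members if y != y_i])
assert pow(z_i, y_i, n) == z
print("membership of", y_i, "verified")
```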
In this system, two accumulators that have accumulated a single shared prime can have it trivially discovered by calculating their GCD, even without prior knowledge of the prime (which would otherwise require prime factorization of the accumulator to discover).[citation needed] More practical accumulators use aquasi-commutativehash function, so that the size of the accumulator does not grow with the number of members. For example, Benaloh and de Mare propose a cryptographic accumulator inspired byRSA: the quasi-commutative functionh(x,y):=xy(modn){\displaystyle h(x,y):=x^{y}{\pmod {n}}}for some composite numbern{\displaystyle n}. They recommend to choosen{\displaystyle n}to be arigidinteger (i.e. the product of twosafe primes).[1]Barić and Pfitzmann proposed a variant wherey{\displaystyle y}was restricted to be prime and at mostn/4{\displaystyle n/4}(this constant is very close toϕ(n){\displaystyle \phi (n)}, but does not leak information about the prime factorization ofn{\displaystyle n}).[2][3] David Naccacheobserved in 1993 thaten,c(x,y):=xycy−1(modn){\displaystyle e_{n,c}(x,y):=x^{y}c^{y-1}{\pmod {n}}}is quasi-commutative for all constantsc,n{\displaystyle c,n}, generalizing the previous RSA-inspired cryptographic accumulator. Naccache also noted that theDickson polynomialsare quasi-commutative in the degree, but it is unknown whether this family of functions is one-way.[1] In 1996, Nyberg constructed an accumulator which is provably information-theoretically secure in therandom oracle model. Choosing some upper limitN=2d{\displaystyle N=2^{d}}for the number of items that can be securely accumulated andλ{\displaystyle \lambda }the security parameter, define the constantℓ:≈elog2⁡(e)λNlog2⁡(N){\displaystyle \ell :\approx {\frac {e}{\log _{2}(e)}}\lambda N\log _{2}(N)}to be an integer multiple ofd{\displaystyle d}(so that one can writeℓ=rd{\displaystyle \ell =rd}) and letH:{0,1}∗→{0,1}ℓ{\displaystyle H:\{0,1\}^{*}\to \{0,1\}^{\ell }}be somecryptographically secure hash function. Choose a keyk{\displaystyle k}as a randomr{\displaystyle r}-bit bitstring. Then, to accumulate using Nyberg's scheme, use the quasi-commutative hash functionh(x,y):=x⊙αr(H(y)){\displaystyle h(x,y):=x\odot \alpha _{r}(H(y))}, where⊙{\displaystyle \odot }is thebitwise andoperation andαr:{0,1}ℓ→{0,1}r{\displaystyle \alpha _{r}:\{0,1\}^{\ell }\to \{0,1\}^{r}}is the function that interprets its input as a sequence ofd{\displaystyle d}-bit bitstrings of lengthr{\displaystyle r}, replaces every all-zero bitstring with a single 0 and every other bitstring with a 1, and outputs the result.[2][5] Haber and Stornetta showed in 1990 that accumulators can be used totimestampdocuments through cryptographic chaining. (This concept anticipates the modern notion of a cryptographicblockchain.)[1][2][6]Benaloh and de Mare proposed an alternative scheme in 1991 based on discretizing time into rounds.[1][7] Benaloh and de Mare showed that accumulators can be used so that a large group of people can recognize each other at a later time (which Fazio and Nicolosi call an "ID Escrow" situation). Each person selects ay{\displaystyle y}representing their identity, and the group collectively selects a public accumulatorh{\displaystyle h}and a secretx{\displaystyle x}. 
Then, the group publishes or saves the hash function and the accumulated hash of all the group's identities w.r.t the secretx{\displaystyle x}and public accumulator; simultaneously, each member of the group keeps both its identity valuey{\displaystyle y}and the accumulated hash of all the group's identitiesexcept that of the member. (If the large group of people do not trust each other, or if the accumulator has a cryptographic trapdoor as in the case of the RSA-inspired accumulator, then they can compute the accumulated hashes bysecure multiparty computation.) To verify that a claimed member did indeed belong to the group later, they present their identity and personal accumulated hash (or azero-knowledge proofthereof); by accumulating the identity of the claimed member and checking it against the accumulated hash of the entire group, anyone can verify a member of the group.[1][2]With a dynamic accumulator scheme, it is additionally easy to add or remove members afterward.[2][4] Cryptographic accumulators can also be used to construct other cryptographically securedata structures: The concept has received renewed interest due to theZerocoinadd on tobitcoin, which employs cryptographic accumulators to eliminate trackable linkage in the bitcoin blockchain, which would make transactions anonymous and more private.[10][11][12]More concretely, tomint(create) a Zerocoin, one publishes a coin and acryptographic commitmentto a serial number with a secret random value (which all users will accept as long as it is correctly formatted); tospend(reclaim) a Zerocoin, one publishes the Zerocoin's serial number along with anon-interactive zero-knowledge proofthat they know of some published commitment that relates to the claimed serial number, then claims the coin (which all users will accept as long as the NIZKP is valid and the serial number has not appeared before).[10][11]Since the initial proposal of Zerocoin, it has been succeeded by theZerocashprotocol and is currently being developed intoZcash, a digital currency based on Bitcoin's codebase.[13][14]
https://en.wikipedia.org/wiki/Accumulator_(cryptography)
Inmathematics, thePythagorean theoremorPythagoras' theoremis a fundamental relation inEuclidean geometrybetween the three sides of aright triangle. It states that the area of thesquarewhose side is thehypotenuse(the side opposite theright angle) is equal to the sum of the areas of the squares on the other two sides. Thetheoremcan be written as anequationrelating the lengths of the sidesa,band the hypotenusec, sometimes called thePythagorean equation:[1] The theorem is named for theGreekphilosopherPythagoras, born around 570 BC. The theorem has beenprovednumerous times by many different methods – possibly the most for any mathematical theorem. The proofs are diverse, including bothgeometricproofs andalgebraicproofs, with some dating back thousands of years. WhenEuclidean spaceis represented by aCartesian coordinate systeminanalytic geometry,Euclidean distancesatisfies the Pythagorean relation: the squared distance between two points equals the sum of squares of the difference in each coordinate between the points. The theorem can begeneralizedin various ways: tohigher-dimensional spaces, tospaces that are not Euclidean, to objects that are not right triangles, and to objects that are not triangles at all butn-dimensionalsolids. In one rearrangement proof, two squares are used whose sides have a measure ofa+b{\displaystyle a+b}and which contain four right triangles whose sides area,bandc, with the hypotenuse beingc. In the square on the right side, the triangles are placed such that the corners of the square correspond to the corners of the right angle in the triangles, forming a square in the center whose sides are lengthc. Each outer square has an area of(a+b)2{\displaystyle (a+b)^{2}}as well as2ab+c2{\displaystyle 2ab+c^{2}}, with2ab{\displaystyle 2ab}representing the total area of the four triangles. Within the big square on the left side, the four triangles are moved to form two similar rectangles with sides of lengthaandb. These rectangles in their new position have now delineated two new squares, one having side lengthais formed in the bottom-left corner, and another square of side lengthbformed in the top-right corner. In this new position, this left side now has a square of area(a+b)2{\displaystyle (a+b)^{2}}as well as2ab+a2+b2{\displaystyle 2ab+a^{2}+b^{2}}.Since both squares have the area of(a+b)2{\displaystyle (a+b)^{2}}it follows that the other measure of the square area also equal each other such that2ab+c2{\displaystyle 2ab+c^{2}}=2ab+a2+b2{\displaystyle 2ab+a^{2}+b^{2}}. With the area of the four triangles removed from both side of the equation what remains isa2+b2=c2.{\displaystyle a^{2}+b^{2}=c^{2}.}[2] In another proof rectangles in the second box can also be placed such that both have one corner that correspond to consecutive corners of the square. In this way they also form two boxes, this time in consecutive corners, with areasa2{\displaystyle a^{2}}andb2{\displaystyle b^{2}}which will again lead to a second square of with the area2ab+a2+b2{\displaystyle 2ab+a^{2}+b^{2}}. English mathematicianSir Thomas Heathgives this proof in his commentary on Proposition I.47 inEuclid'sElements, and mentions the proposals of German mathematiciansCarl Anton BretschneiderandHermann Hankelthat Pythagoras may have known this proof. 
Heath himself favors a different proposal for a Pythagorean proof, but acknowledges from the outset of his discussion "that the Greek literature which we possess belonging to the first five centuries after Pythagoras contains no statement specifying this or any other particular great geometric discovery to him."[3]Recent scholarship has cast increasing doubt on any sort of role for Pythagoras as a creator of mathematics, although debate about this continues.[4] The theorem can be proved algebraically using four copies of the same triangle arranged symmetrically around a square with sidec, as shown in the lower part of the diagram.[5]This results in a larger square, with sidea+band area(a+b)2. The four triangles and the square sidecmust have the same area as the larger square, giving A similar proof uses four copies of a right triangle with sidesa,bandc, arranged inside a square with sidecas in the top half of the diagram.[6]The triangles are similar with area12ab{\displaystyle {\tfrac {1}{2}}ab}, while the small square has sideb−aand area(b−a)2. The area of the large square is therefore But this is a square with sidecand areac2, so This theorem may have more known proofs than any other (thelawofquadratic reciprocitybeing another contender for that distinction); the bookThe Pythagorean Propositioncontains 370 proofs.[7] This proof is based on theproportionalityof the sides of threesimilartriangles, that is, upon the fact that theratioof any two corresponding sides of similar triangles is the same regardless of the size of the triangles. LetABCrepresent a right triangle, with theright anglelocated atC, as shown on the figure. Draw thealtitudefrom pointC, and callHits intersection with the sideAB. PointHdivides the length of the hypotenusecinto partsdande. The new triangle,ACH,issimilarto triangleABC, because they both have a right angle (by definition of the altitude), and they share the angle atA, meaning that the third angle will be the same in both triangles as well, marked asθin the figure. By a similar reasoning, the triangleCBHis also similar toABC. The proof of similarity of the triangles requires thetriangle postulate: The sum of the angles in a triangle is two right angles, and is equivalent to theparallel postulate. Similarity of the triangles leads to the equality of ratios of corresponding sides: The first result equates thecosinesof the anglesθ, whereas the second result equates theirsines. These ratios can be written as Summing these two equalities results in which, after simplification, demonstrates the Pythagorean theorem: The role of this proof in history is the subject of much speculation. The underlying question is why Euclid did not use this proof, but invented another. Oneconjectureis that the proof by similar triangles involved a theory of proportions, a topic not discussed until later in theElements, and that the theory of proportions needed further development at that time.[8] Albert Einsteingave a proof by dissection in which the pieces do not need to be moved.[9]Instead of using a square on the hypotenuse and two squares on the legs, one can use any other shape that includes the hypotenuse, and twosimilarshapes that each include one of two legs instead of the hypotenuse (seeSimilar figures on the three sides). In Einstein's proof, the shape that includes the hypotenuse is the right triangle itself. 
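The algebraic step shared by these rearrangement proofs, (a + b)² = 4·(ab/2) + c², can be checked symbolically; below is a small sketch using sympy (one convenient tool for this, not implied by the source).

```python
# Sketch: symbolic check of the algebraic rearrangement step,
# (a + b)^2 = 4*(ab/2) + c^2  =>  c^2 = a^2 + b^2.
from sympy import symbols, expand

a, b, c = symbols("a b c", positive=True)

outer_square = (a + b) ** 2              # area of the big square
triangles    = 4 * (a * b / 2)           # four right triangles of area ab/2 each

# Removing the four triangles from the outer square leaves the inner square c^2,
# which must therefore equal a^2 + b^2.
print(expand(outer_square - triangles))  # a**2 + b**2
```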
Albert Einstein gave a proof by dissection in which the pieces do not need to be moved.[9] Instead of using a square on the hypotenuse and two squares on the legs, one can use any other shape that includes the hypotenuse, and two similar shapes that each include one of the two legs instead of the hypotenuse (see Similar figures on the three sides). In Einstein's proof, the shape that includes the hypotenuse is the right triangle itself. The dissection consists of dropping a perpendicular from the vertex of the right angle of the triangle to the hypotenuse, thus splitting the whole triangle into two parts. Those two parts have the same shape as the original right triangle, and have the legs of the original triangle as their hypotenuses, and the sum of their areas is that of the original triangle. Because the ratio of the area of a right triangle to the square of its hypotenuse is the same for similar triangles, the relationship between the areas of the three triangles holds for the squares of the sides of the large triangle as well. In outline, here is how the proof in Euclid's Elements proceeds. The large square is divided into a left and right rectangle. A triangle is constructed that has half the area of the left rectangle. Then another triangle is constructed that has half the area of the square on the left-most side. These two triangles are shown to be congruent, proving this square has the same area as the left rectangle. This argument is followed by a similar version for the right rectangle and the remaining square. Putting the two rectangles together to reform the square on the hypotenuse, its area is the same as the sum of the area of the other two squares. The details follow. Let A, B, C be the vertices of a right triangle, with a right angle at A. Drop a perpendicular from A to the side opposite the hypotenuse in the square on the hypotenuse. That line divides the square on the hypotenuse into two rectangles, each having the same area as one of the two squares on the legs. For the formal proof, four elementary lemmata are required; each top square is then related to a triangle congruent with another triangle related in turn to one of the two rectangles making up the lower square.[10] This proof appears in Euclid's Elements as Proposition 47 in Book 1, and demonstrates that the area of the square on the hypotenuse is the sum of the areas of the other two squares.[12][13] This is quite distinct from the proof by similarity of triangles, which is conjectured to be the proof that Pythagoras used.[14][15] Another proof by rearrangement is given by the middle animation. A large square is formed with area $c^2$, from four identical right triangles with sides a, b and c, fitted around a small central square. Then two rectangles are formed with sides a and b by moving the triangles. Combining the smaller square with these rectangles produces two squares of areas $a^2$ and $b^2$, which must have the same area as the initial large square.[16] The third, rightmost image also gives a proof. The upper two squares are divided as shown by the blue and green shading, into pieces that when rearranged can be made to fit in the lower square on the hypotenuse – or conversely the large square can be divided as shown into pieces that fill the other two. This way of cutting one figure into pieces and rearranging them to get another figure is called dissection. This shows the area of the large square equals that of the two smaller ones.[17] As shown in the accompanying animation, area-preserving shear mappings and translations can transform the squares on the sides adjacent to the right angle onto the square on the hypotenuse, together covering it exactly.[18] Each shear leaves the base and height unchanged, thus leaving the area unchanged too. The translations also leave the area unchanged, as they do not alter the shapes at all.
Each square is first sheared into a parallelogram, and then into a rectangle which can be translated onto one section of the square on the hypotenuse. A related proof by U.S. President James A. Garfield was published before he was elected president, while he was a U.S. Representative.[19][20][21] Instead of a square it uses a trapezoid, which can be constructed from the square in the second of the above proofs by bisecting along a diagonal of the inner square, to give the trapezoid as shown in the diagram. The area of the trapezoid can be calculated to be half the area of the square, that is $\tfrac{1}{2}(a+b)^2$. The inner square is similarly halved, and there are only two triangles, so the proof proceeds as above except for a factor of $\tfrac{1}{2}$, which is removed by multiplying by two to give the result. One can arrive at the Pythagorean theorem by studying how changes in a side produce a change in the hypotenuse and employing calculus.[22][23][24] The triangle ABC is a right triangle, as shown in the upper part of the diagram, with BC the hypotenuse. At the same time the triangle lengths are measured as shown, with the hypotenuse of length y, the side AC of length x and the side AB of length a, as seen in the lower diagram part. If x is increased by a small amount dx by extending the side AC slightly to D, then y also increases by dy. These form two sides of a triangle, CDE, which (with E chosen so CE is perpendicular to the hypotenuse) is a right triangle approximately similar to ABC. Therefore, the ratios of their sides must be the same, that is $\frac{dy}{dx} = \frac{x}{y}$. This can be rewritten as $y\,dy = x\,dx$, which is a differential equation that can be solved by direct integration, $\int y\,dy = \int x\,dx$, giving $y^2 = x^2 + C$. The constant can be deduced from x = 0, y = a to give the equation $y^2 = x^2 + a^2$. This is more of an intuitive proof than a formal one: it can be made more rigorous if proper limits are used in place of dx and dy. The converse of the theorem is also true:[25] given a triangle with sides of length a, b, and c, if $a^2 + b^2 = c^2$, then the angle between sides a and b is a right angle. For any three positive real numbers a, b, and c such that $a^2 + b^2 = c^2$, there exists a triangle with sides a, b and c as a consequence of the converse of the triangle inequality. This converse appears in Euclid's Elements (Book I, Proposition 48): "If in a triangle the square on one of the sides equals the sum of the squares on the remaining two sides of the triangle, then the angle contained by the remaining two sides of the triangle is right."[26] It can be proved using the law of cosines or as follows: Let ABC be a triangle with side lengths a, b, and c, with $a^2 + b^2 = c^2$. Construct a second triangle with sides of length a and b containing a right angle. By the Pythagorean theorem, it follows that the hypotenuse of this triangle has length $c = \sqrt{a^2 + b^2}$, the same as the hypotenuse of the first triangle. Since both triangles' sides are the same lengths a, b and c, the triangles are congruent and must have the same angles. Therefore, the angle between the sides of lengths a and b in the original triangle is a right angle. The above proof of the converse makes use of the Pythagorean theorem itself. The converse can also be proved without assuming the Pythagorean theorem.[27][28] A corollary of the Pythagorean theorem's converse is a simple means of determining whether a triangle is right, obtuse, or acute, as follows. Let c be chosen to be the longest of the three sides and a + b > c (otherwise there is no triangle according to the triangle inequality). The following statements apply:[29] if $a^2 + b^2 = c^2$, the triangle is right; if $a^2 + b^2 > c^2$, it is acute; and if $a^2 + b^2 < c^2$, it is obtuse.
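As a sketch of this corollary (in Python, with an illustrative helper function; the sample side lengths are arbitrary), a triangle can be classified by comparing $a^2 + b^2$ with $c^2$ once c is taken to be the longest side:

```python
import math

def classify(x, y, z):
    a, b, c = sorted((x, y, z))          # make c the longest side
    if a + b <= c:
        return "not a triangle"          # fails the triangle inequality
    lhs, rhs = a * a + b * b, c * c
    if math.isclose(lhs, rhs):
        return "right"
    return "acute" if lhs > rhs else "obtuse"

print(classify(3, 4, 5))   # right
print(classify(4, 5, 6))   # acute  (16 + 25 > 36)
print(classify(3, 5, 7))   # obtuse (9 + 25 < 49)
```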
Edsger W. Dijkstra has stated this proposition about acute, right, and obtuse triangles in this language: $\operatorname{sgn}(\alpha + \beta - \gamma) = \operatorname{sgn}(a^2 + b^2 - c^2)$, where α is the angle opposite to side a, β is the angle opposite to side b, γ is the angle opposite to side c, and sgn is the sign function.[30] A Pythagorean triple has three positive integers a, b, and c, such that $a^2 + b^2 = c^2$. In other words, a Pythagorean triple represents the lengths of the sides of a right triangle where all three sides have integer lengths.[1] Such a triple is commonly written (a, b, c). Some well-known examples are (3, 4, 5) and (5, 12, 13). A primitive Pythagorean triple is one in which a, b and c are coprime (the greatest common divisor of a, b and c is 1). The following is a list of primitive Pythagorean triples with values less than 100: (3, 4, 5), (5, 12, 13), (8, 15, 17), (7, 24, 25), (20, 21, 29), (12, 35, 37), (9, 40, 41), (28, 45, 53), (11, 60, 61), (16, 63, 65), (33, 56, 65), (48, 55, 73), (13, 84, 85), (36, 77, 85), (39, 80, 89), (65, 72, 97). There are many formulas for generating Pythagorean triples. Of these, Euclid's formula is the most well-known: given arbitrary positive integers m and n with m > n, the formula states that the integers $a = m^2 - n^2$, $b = 2mn$, $c = m^2 + n^2$ form a Pythagorean triple. Given a right triangle with sides a, b, c and altitude d (a line from the right angle and perpendicular to the hypotenuse c), the Pythagorean theorem gives $a^2 + b^2 = c^2$, while the inverse Pythagorean theorem relates the two legs a, b to the altitude d:[31] $\frac{1}{a^2} + \frac{1}{b^2} = \frac{1}{d^2}$. The equation can be transformed to $\frac{1}{(xz)^2} + \frac{1}{(yz)^2} = \frac{1}{(xy)^2}$, where $x^2 + y^2 = z^2$ for any non-zero real x, y, z. If a, b, d are to be integers, the smallest solution a > b > d is then a = 20, b = 15, d = 12, using the smallest Pythagorean triple (3, 4, 5). The reciprocal Pythagorean theorem is a special case of the optic equation $\frac{1}{p} + \frac{1}{q} = \frac{1}{r}$, where the denominators are squares, and also for a heptagonal triangle whose sides p, q, r are square numbers. One of the consequences of the Pythagorean theorem is that line segments whose lengths are incommensurable (so the ratio of which is not a rational number) can be constructed using a straightedge and compass. Pythagoras' theorem enables construction of incommensurable lengths because the hypotenuse of a triangle is related to the sides by the square root operation. The figure on the right shows how to construct line segments whose lengths are in the ratio of the square root of any positive integer.[32] Each triangle has a side (labeled "1") that is the chosen unit for measurement. In each right triangle, Pythagoras' theorem establishes the length of the hypotenuse in terms of this unit. If a hypotenuse is related to the unit by the square root of a positive integer that is not a perfect square, it is a realization of a length incommensurable with the unit, such as √2, √3, √5. For more detail, see Quadratic irrational. Incommensurable lengths conflicted with the Pythagorean school's concept of numbers as only whole numbers. The Pythagorean school dealt with proportions by comparison of integer multiples of a common subunit.[33] According to one legend, Hippasus of Metapontum (ca. 470 B.C.) was drowned at sea for making known the existence of the irrational or incommensurable.[34] A careful discussion of Hippasus's contributions is found in Fritz.[35]
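The construction of square-root lengths described above can be imitated numerically: starting from a unit segment, each new right triangle uses the previous hypotenuse and the unit as legs, so the successive hypotenuses are √2, √3, √4, and so on. A minimal Python illustration (the loop bound is an arbitrary choice):

```python
import math

hyp = 1.0
for n in range(2, 7):
    hyp = math.hypot(hyp, 1.0)   # new hypotenuse from legs (previous hypotenuse, 1)
    print(n, hyp, math.isclose(hyp, math.sqrt(n)))   # e.g. 2 1.4142... True
```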
For any complex number $z = x + iy$, the absolute value or modulus is given by $r = |z| = \sqrt{x^2 + y^2}$. So the three quantities r, x and y are related by the Pythagorean equation, $r^2 = x^2 + y^2$. Note that r is defined to be a positive number or zero but x and y can be negative as well as positive. Geometrically r is the distance of z from zero or the origin O in the complex plane. This can be generalised to find the distance between two points, $z_1$ and $z_2$ say. The required distance is given by $|z_1 - z_2| = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2}$, so again they are related by a version of the Pythagorean equation, $|z_1 - z_2|^2 = (x_1 - x_2)^2 + (y_1 - y_2)^2$. The distance formula in Cartesian coordinates is derived from the Pythagorean theorem.[36] If $(x_1, y_1)$ and $(x_2, y_2)$ are points in the plane, then the distance between them, also called the Euclidean distance, is given by $\sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2}$. More generally, in Euclidean n-space, the Euclidean distance between two points, $A = (a_1, a_2, \dots, a_n)$ and $B = (b_1, b_2, \dots, b_n)$, is defined, by generalization of the Pythagorean theorem, as $\sqrt{\sum_{i=1}^{n}(a_i - b_i)^2}$. If instead of Euclidean distance, the square of this value (the squared Euclidean distance, or SED) is used, the resulting equation avoids square roots and is simply a sum of the SED of the coordinates: $\sum_{i=1}^{n}(a_i - b_i)^2$. The squared form is a smooth, convex function of both points, and is widely used in optimization theory and statistics, forming the basis of least squares. If Cartesian coordinates are not used, for example, if polar coordinates are used in two dimensions or, in more general terms, if curvilinear coordinates are used, the formulas expressing the Euclidean distance are more complicated than the Pythagorean theorem, but can be derived from it. A typical example where the straight-line distance between two points is converted to curvilinear coordinates can be found in the applications of Legendre polynomials in physics. The formulas can be discovered by using Pythagoras' theorem with the equations relating the curvilinear coordinates to Cartesian coordinates. For example, the polar coordinates (r, θ) can be introduced as $x = r\cos\theta$, $y = r\sin\theta$. Then two points with locations $(r_1, \theta_1)$ and $(r_2, \theta_2)$ are separated by a distance s with $s^2 = (r_1\cos\theta_1 - r_2\cos\theta_2)^2 + (r_1\sin\theta_1 - r_2\sin\theta_2)^2$. Performing the squares and combining terms, the Pythagorean formula for distance in Cartesian coordinates produces the separation in polar coordinates as $s^2 = r_1^2 + r_2^2 - 2 r_1 r_2 \cos(\theta_1 - \theta_2)$, using the trigonometric product-to-sum formulas. This formula is the law of cosines, sometimes called the generalized Pythagorean theorem.[37] From this result, for the case where the radii to the two locations are at right angles, the enclosed angle Δθ = π/2, and the form corresponding to Pythagoras' theorem is regained: $s^2 = r_1^2 + r_2^2$. The Pythagorean theorem, valid for right triangles, therefore is a special case of the more general law of cosines, valid for arbitrary triangles. In a right triangle with sides a, b and hypotenuse c, trigonometry determines the sine and cosine of the angle θ between side a and the hypotenuse as $\sin\theta = \frac{b}{c}$ and $\cos\theta = \frac{a}{c}$. From that it follows that $\cos^2\theta + \sin^2\theta = \frac{a^2 + b^2}{c^2} = 1$, where the last step applies Pythagoras' theorem. This relation between sine and cosine is sometimes called the fundamental Pythagorean trigonometric identity.[38] In similar triangles, the ratios of the sides are the same regardless of the size of the triangles, and depend upon the angles. Consequently, in the figure, the triangle with hypotenuse of unit size has opposite side of size sin θ and adjacent side of size cos θ in units of the hypotenuse. The Pythagorean theorem relates the cross product and dot product in a similar way:[39] $\|\mathbf{a} \times \mathbf{b}\|^2 + (\mathbf{a} \cdot \mathbf{b})^2 = \|\mathbf{a}\|^2 \|\mathbf{b}\|^2$. This can be seen from the definitions of the cross product and dot product, $\mathbf{a} \times \mathbf{b} = ab\sin\theta\,\mathbf{n}$ and $\mathbf{a} \cdot \mathbf{b} = ab\cos\theta$, with n a unit vector normal to both a and b. The relationship follows from these definitions and the Pythagorean trigonometric identity. This can also be used to define the cross product.
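This identity between the cross product and the dot product is easy to verify numerically. A minimal sketch using NumPy (the two sample vectors are arbitrary):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([-2.0, 0.5, 4.0])

lhs = np.linalg.norm(np.cross(a, b))**2 + np.dot(a, b)**2
rhs = np.dot(a, a) * np.dot(b, b)
print(np.isclose(lhs, rhs))   # True: ||a x b||^2 + (a . b)^2 = ||a||^2 ||b||^2
```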
By rearranging, the following equation is obtained: $\|\mathbf{a} \times \mathbf{b}\|^2 = \|\mathbf{a}\|^2 \|\mathbf{b}\|^2 - (\mathbf{a} \cdot \mathbf{b})^2$. This can be considered as a condition on the cross product and so part of its definition, for example in seven dimensions.[40][41] If the first four of the Euclidean geometry axioms are assumed to be true then the Pythagorean theorem is equivalent to the fifth. That is, Euclid's fifth postulate implies the Pythagorean theorem and vice versa. The Pythagorean theorem generalizes beyond the areas of squares on the three sides to any similar figures. This was known by Hippocrates of Chios in the 5th century BC,[42] and was included by Euclid in his Elements:[43] If one erects similar figures (see Euclidean geometry) with corresponding sides on the sides of a right triangle, then the sum of the areas of the ones on the two smaller sides equals the area of the one on the larger side. This extension assumes that the sides of the original triangle are the corresponding sides of the three congruent figures (so the common ratios of sides between the similar figures are a : b : c).[44] While Euclid's proof only applied to convex polygons, the theorem also applies to concave polygons and even to similar figures that have curved boundaries (but still with part of a figure's boundary being the side of the original triangle).[44] The basic idea behind this generalization is that the area of a plane figure is proportional to the square of any linear dimension, and in particular is proportional to the square of the length of any side. Thus, if similar figures with areas A, B and C are erected on sides with corresponding lengths a, b and c, then $\frac{A}{a^2} = \frac{B}{b^2} = \frac{C}{c^2}$, so $A + B = \frac{a^2}{c^2}C + \frac{b^2}{c^2}C$. But, by the Pythagorean theorem, $a^2 + b^2 = c^2$, so A + B = C. Conversely, if we can prove that A + B = C for three similar figures without using the Pythagorean theorem, then we can work backwards to construct a proof of the theorem. For example, the starting center triangle can be replicated and used as a triangle C on its hypotenuse, and two similar right triangles (A and B) constructed on the other two sides, formed by dividing the central triangle by its altitude. The sum of the areas of the two smaller triangles therefore is that of the third, thus A + B = C and reversing the above logic leads to the Pythagorean theorem $a^2 + b^2 = c^2$. (See also Einstein's proof by dissection without rearrangement.) The Pythagorean theorem is a special case of the more general theorem relating the lengths of sides in any triangle, the law of cosines, which states that $a^2 + b^2 - 2ab\cos\theta = c^2$, where θ is the angle between sides a and b.[45] When θ is $\frac{\pi}{2}$ radians or 90°, then $\cos\theta = 0$, and the formula reduces to the usual Pythagorean theorem.
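The reduction of the law of cosines to the Pythagorean theorem at θ = π/2 can be checked directly; the following Python sketch (with arbitrary sample sides) computes the third side for a few angles:

```python
import math

def third_side(a, b, theta):
    # law of cosines: c^2 = a^2 + b^2 - 2ab*cos(theta)
    return math.sqrt(a * a + b * b - 2 * a * b * math.cos(theta))

a, b = 3.0, 4.0
print(third_side(a, b, math.pi / 2))   # ~5.0: the right-angle (Pythagorean) case
print(third_side(a, b, math.pi / 3))   # ~3.606: an acute enclosed angle
print(math.isclose(third_side(a, b, math.pi / 2), math.hypot(a, b)))  # True
```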
At any selected angle of a general triangle of sides a, b, c, inscribe an isosceles triangle such that the equal angles at its base θ are the same as the selected angle. Suppose the selected angle θ is opposite the side labeled c. Inscribing the isosceles triangle forms triangle CAD with angle θ opposite side b and with side r along c. A second triangle is formed with angle θ opposite side a and a side with length s along c, as shown in the figure. Thābit ibn Qurra stated that the sides of the three triangles were related as[47][48] $a^2 + b^2 = c(r + s)$. As the angle θ approaches π/2, the base of the isosceles triangle narrows, and lengths r and s overlap less and less. When θ = π/2, ADB becomes a right triangle, r + s = c, and the original Pythagorean theorem is regained. One proof observes that triangle ABC has the same angles as triangle CAD, but in opposite order. (The two triangles share the angle at vertex A, both contain the angle θ, and so also have the same third angle by the triangle postulate.) Consequently, ABC is similar to the reflection of CAD, the triangle DAC in the lower panel. Taking the ratio of sides opposite and adjacent to θ, $\frac{c}{b} = \frac{b}{r}$. Likewise, for the reflection of the other triangle, $\frac{c}{a} = \frac{a}{s}$. Clearing fractions and adding these two relations gives $a^2 + b^2 = c(r + s)$, the required result. The theorem remains valid if the angle θ is obtuse so the lengths r and s are non-overlapping. Pappus's area theorem is a further generalization, that applies to triangles that are not right triangles, using parallelograms on the three sides in place of squares (squares are a special case, of course). The upper figure shows that for a scalene triangle, the area of the parallelogram on the longest side is the sum of the areas of the parallelograms on the other two sides, provided the parallelogram on the long side is constructed as indicated (the dimensions labeled with arrows are the same, and determine the sides of the bottom parallelogram). This replacement of squares with parallelograms bears a clear resemblance to the original Pythagoras' theorem, and was considered a generalization by Pappus of Alexandria in the 4th century AD.[49][50] The lower figure shows the elements of the proof. Focus on the left side of the figure. The left green parallelogram has the same area as the left, blue portion of the bottom parallelogram because both have the same base b and height h. However, the left green parallelogram also has the same area as the left green parallelogram of the upper figure, because they have the same base (the upper left side of the triangle) and the same height normal to that side of the triangle. Repeating the argument for the right side of the figure, the bottom parallelogram has the same area as the sum of the two green parallelograms. In terms of solid geometry, Pythagoras' theorem can be applied to three dimensions as follows. Consider the cuboid shown in the figure. The length of face diagonal AC is found from Pythagoras' theorem as $\overline{AC}^2 = \overline{AB}^2 + \overline{BC}^2$, where these three sides form a right triangle. Using diagonal AC and the horizontal edge CD, the length of body diagonal AD then is found by a second application of Pythagoras' theorem as $\overline{AD}^2 = \overline{AC}^2 + \overline{CD}^2$, or, doing it all in one step, $\overline{AD}^2 = \overline{AB}^2 + \overline{BC}^2 + \overline{CD}^2$. This result is the three-dimensional expression for the magnitude of a vector v (the diagonal AD) in terms of its orthogonal components $\{\mathbf{v}_k\}$ (the three mutually perpendicular sides): $\|\mathbf{v}\|^2 = \sum_{k=1}^{3}\|\mathbf{v}_k\|^2$. This one-step formulation may be viewed as a generalization of Pythagoras' theorem to higher dimensions. However, this result is really just the repeated application of the original Pythagoras' theorem to a succession of right triangles in a sequence of orthogonal planes. A substantial generalization of the Pythagorean theorem to three dimensions is de Gua's theorem, named for Jean Paul de Gua de Malves: If a tetrahedron has a right angle corner (like a corner of a cube), then the square of the area of the face opposite the right angle corner is the sum of the squares of the areas of the other three faces.
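De Gua's theorem can likewise be checked numerically. In the NumPy sketch below (the leg lengths are an arbitrary choice), a tetrahedron has its right-angle corner at the origin with legs along the coordinate axes, and the squared area of the opposite face equals the sum of the squared areas of the three right-angle faces:

```python
import numpy as np

p, q, r = 2.0, 3.0, 6.0
P, Q, R = np.array([p, 0, 0]), np.array([0, q, 0]), np.array([0, 0, r])

legs_sq = (p * q / 2)**2 + (p * r / 2)**2 + (q * r / 2)**2   # three face areas, squared
hyp_face = 0.5 * np.linalg.norm(np.cross(Q - P, R - P))      # area of the oblique face

print(np.isclose(hyp_face**2, legs_sq))   # True
```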
This result can be generalized as in the "n-dimensional Pythagorean theorem":[51] Let $x_1, x_2, \ldots, x_n$ be orthogonal vectors in $\mathbb{R}^n$. Consider the n-dimensional simplex S with vertices $0, x_1, \ldots, x_n$. (Think of the (n − 1)-dimensional simplex with vertices $x_1, \ldots, x_n$ not including the origin as the "hypotenuse" of S and the remaining (n − 1)-dimensional faces of S as its "legs".) Then the square of the volume of the hypotenuse of S is the sum of the squares of the volumes of the n legs. This statement is illustrated in three dimensions by the tetrahedron in the figure. The "hypotenuse" is the base of the tetrahedron at the back of the figure, and the "legs" are the three sides emanating from the vertex in the foreground. As the depth of the base from the vertex increases, the area of the "legs" increases, while that of the base is fixed. The theorem suggests that when this depth is at the value creating a right vertex, the generalization of Pythagoras' theorem applies. In a different wording:[52] Given an n-rectangular n-dimensional simplex, the square of the (n − 1)-content of the facet opposing the right vertex will equal the sum of the squares of the (n − 1)-contents of the remaining facets. The Pythagorean theorem can be generalized to inner product spaces,[53] which are generalizations of the familiar 2-dimensional and 3-dimensional Euclidean spaces. For example, a function may be considered as a vector with infinitely many components in an inner product space, as in functional analysis.[54] In an inner product space, the concept of perpendicularity is replaced by the concept of orthogonality: two vectors v and w are orthogonal if their inner product $\langle \mathbf{v}, \mathbf{w} \rangle$ is zero. The inner product is a generalization of the dot product of vectors. The dot product is called the standard inner product or the Euclidean inner product. However, other inner products are possible.[55] The concept of length is replaced by the concept of the norm ‖v‖ of a vector v, defined as[56] $\|\mathbf{v}\| = \sqrt{\langle \mathbf{v}, \mathbf{v} \rangle}$. In an inner-product space, the Pythagorean theorem states that for any two orthogonal vectors v and w we have $\|\mathbf{v} + \mathbf{w}\|^2 = \|\mathbf{v}\|^2 + \|\mathbf{w}\|^2$. Here the vectors v and w are akin to the sides of a right triangle with hypotenuse given by the vector sum v + w. This form of the Pythagorean theorem is a consequence of the properties of the inner product: $\|\mathbf{v} + \mathbf{w}\|^2 = \langle \mathbf{v} + \mathbf{w}, \mathbf{v} + \mathbf{w} \rangle = \langle \mathbf{v}, \mathbf{v} \rangle + \langle \mathbf{w}, \mathbf{w} \rangle + \langle \mathbf{v}, \mathbf{w} \rangle + \langle \mathbf{w}, \mathbf{v} \rangle = \|\mathbf{v}\|^2 + \|\mathbf{w}\|^2$, where $\langle \mathbf{v}, \mathbf{w} \rangle = \langle \mathbf{w}, \mathbf{v} \rangle = 0$ because of orthogonality. A further generalization of the Pythagorean theorem in an inner product space to non-orthogonal vectors is the parallelogram law:[56] $2\|\mathbf{v}\|^2 + 2\|\mathbf{w}\|^2 = \|\mathbf{v} + \mathbf{w}\|^2 + \|\mathbf{v} - \mathbf{w}\|^2$, which says that twice the sum of the squares of the lengths of the sides of a parallelogram is the sum of the squares of the lengths of the diagonals. Any norm that satisfies this equality is ipso facto a norm corresponding to an inner product.[56] The Pythagorean identity can be extended to sums of more than two orthogonal vectors. If $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n$ are pairwise-orthogonal vectors in an inner-product space, then application of the Pythagorean theorem to successive pairs of these vectors (as described for 3 dimensions in the section on solid geometry) results in the equation[57] $\|\mathbf{v}_1 + \mathbf{v}_2 + \cdots + \mathbf{v}_n\|^2 = \sum_{k=1}^{n}\|\mathbf{v}_k\|^2$.
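Both the inner-product form of the theorem and the parallelogram law can be illustrated with the standard dot product on R^3; a brief NumPy sketch (the sample vectors are arbitrary, with v and w chosen to be orthogonal):

```python
import numpy as np

v = np.array([1.0, 2.0, 2.0])
w = np.array([2.0, 1.0, -2.0])   # <v, w> = 0, so v and w are orthogonal
u = np.array([3.0, -1.0, 0.5])   # a vector not orthogonal to v
n = np.linalg.norm

print(np.isclose(v @ w, 0.0))                                        # orthogonality
print(np.isclose(n(v + w)**2, n(v)**2 + n(w)**2))                    # Pythagorean identity
print(np.isclose(2*n(v)**2 + 2*n(u)**2, n(v + u)**2 + n(v - u)**2))  # parallelogram law
```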
Another generalization of the Pythagorean theorem applies to Lebesgue-measurable sets of objects in any number of dimensions. Specifically, the square of the measure of an m-dimensional set of objects in one or more parallel m-dimensional flats in n-dimensional Euclidean space is equal to the sum of the squares of the measures of the orthogonal projections of the object(s) onto all m-dimensional coordinate subspaces.[58] The Pythagorean theorem is derived from the axioms of Euclidean geometry, and in fact, were the Pythagorean theorem to fail for some right triangle, then the plane in which this triangle is contained cannot be Euclidean. More precisely, the Pythagorean theorem implies, and is implied by, Euclid's Parallel (Fifth) Postulate.[59][60] Thus, right triangles in a non-Euclidean geometry[61] do not satisfy the Pythagorean theorem. For example, in spherical geometry, all three sides of the right triangle (say a, b, and c) bounding an octant of the unit sphere have length equal to π/2, and all its angles are right angles, which violates the Pythagorean theorem because $a^2 + b^2 = 2c^2 > c^2$. Here two cases of non-Euclidean geometry are considered: spherical geometry and hyperbolic plane geometry; in each case, as in the Euclidean case for non-right triangles, the result replacing the Pythagorean theorem follows from the appropriate law of cosines. However, the Pythagorean theorem remains true in hyperbolic geometry and elliptic geometry if the condition that the triangle be right is replaced with the condition that two of the angles sum to the third, say A + B = C. The sides are then related as follows: the sum of the areas of the circles with diameters a and b equals the area of the circle with diameter c.[62] For any right triangle on a sphere of radius R (for example, if γ in the figure is a right angle), with sides a, b, c, the relation between the sides takes the form:[63] $\cos\frac{c}{R} = \cos\frac{a}{R}\,\cos\frac{b}{R}$. This equation can be derived as a special case of the spherical law of cosines that applies to all spherical triangles: $\cos\frac{c}{R} = \cos\frac{a}{R}\,\cos\frac{b}{R} + \sin\frac{a}{R}\,\sin\frac{b}{R}\,\cos\gamma$. For infinitesimal triangles on the sphere (or equivalently, for finite spherical triangles on a sphere of infinite radius), the spherical relation between the sides of a right triangle reduces to the Euclidean form of the Pythagorean theorem. To see how, assume we have a spherical triangle of fixed side lengths a, b, and c on a sphere with expanding radius R. As R approaches infinity the quantities a/R, b/R, and c/R tend to zero and the spherical Pythagorean identity reduces to $1 = 1$, so we must look at its asymptotic expansion. The Maclaurin series for the cosine function can be written as $\cos x = 1 - \tfrac{1}{2}x^2 + O(x^4)$ with the remainder term in big O notation. Let $x = c/R$, treating the expression as an asymptotic expansion in terms of R for a fixed c, and likewise for a and b. Substituting the asymptotic expansion for each of the cosines into the spherical relation for a right triangle yields $1 - \frac{c^2}{2R^2} + O(R^{-4}) = \left(1 - \frac{a^2}{2R^2} + O(R^{-4})\right)\left(1 - \frac{b^2}{2R^2} + O(R^{-4})\right)$. Subtracting 1 and then negating each side gives $\frac{c^2}{2R^2} = \frac{a^2}{2R^2} + \frac{b^2}{2R^2} + O(R^{-4})$. Multiplying through by $2R^2$, the asymptotic expansion for c in terms of fixed a, b and variable R is $c^2 = a^2 + b^2 + O(R^{-2})$. The Euclidean Pythagorean relationship $c^2 = a^2 + b^2$ is recovered in the limit, as the remainder vanishes when the radius R approaches infinity.
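The limiting behaviour just described can be observed numerically: solving the spherical relation for c and letting R grow, the value tends to the Euclidean hypotenuse. A short Python sketch (the side lengths and radii are illustrative choices):

```python
import math

a, b = 3.0, 4.0
for R in (10.0, 100.0, 10000.0):
    # spherical right-triangle relation: cos(c/R) = cos(a/R) * cos(b/R)
    c_sphere = R * math.acos(math.cos(a / R) * math.cos(b / R))
    print(R, c_sphere)   # approaches 5.0 = sqrt(a^2 + b^2) as R increases
```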
For practical computation in spherical trigonometry with small right triangles, cosines can be replaced with sines using the double-angle identity $\cos 2\theta = 1 - 2\sin^2\theta$, to avoid loss of significance. Then the spherical Pythagorean theorem can alternately be written as $\sin^2\frac{c}{2R} = \sin^2\frac{a}{2R} + \sin^2\frac{b}{2R} - 2\sin^2\frac{a}{2R}\,\sin^2\frac{b}{2R}$. In a hyperbolic space with uniform Gaussian curvature $-1/R^2$, for a right triangle with legs a, b, and hypotenuse c, the relation between the sides takes the form:[64] $\cosh\frac{c}{R} = \cosh\frac{a}{R}\,\cosh\frac{b}{R}$, where cosh is the hyperbolic cosine. This formula is a special form of the hyperbolic law of cosines that applies to all hyperbolic triangles:[65] $\cosh\frac{c}{R} = \cosh\frac{a}{R}\,\cosh\frac{b}{R} - \sinh\frac{a}{R}\,\sinh\frac{b}{R}\,\cos\gamma$, with γ the angle at the vertex opposite the side c. By using the Maclaurin series for the hyperbolic cosine, $\cosh x \approx 1 + \tfrac{x^2}{2}$, it can be shown that as a hyperbolic triangle becomes very small (that is, as a, b, and c all approach zero), the hyperbolic relation for a right triangle approaches the form of Pythagoras' theorem. For small right triangles (a, b << R), the hyperbolic cosines can be eliminated to avoid loss of significance, giving $\sinh^2\frac{c}{2R} = \sinh^2\frac{a}{2R} + \sinh^2\frac{b}{2R} + 2\sinh^2\frac{a}{2R}\,\sinh^2\frac{b}{2R}$. For any uniform curvature K (positive, zero, or negative), in very small right triangles (|K|a², |K|b² << 1) with hypotenuse c, it can be shown that $c^2 = a^2 + b^2 - \tfrac{K}{3}a^2 b^2 + \cdots$. The Pythagorean theorem applies to infinitesimal triangles seen in differential geometry. In three-dimensional space, the distance between two infinitesimally separated points satisfies $ds^2 = dx^2 + dy^2 + dz^2$, with ds the element of distance and (dx, dy, dz) the components of the vector separating the two points. Such a space is called a Euclidean space. However, in Riemannian geometry, a generalization of this expression useful for general coordinates (not just Cartesian) and general spaces (not just Euclidean) takes the form[66] $ds^2 = \sum_{i,j} g_{ij}\,dx_i\,dx_j$, which is called the metric tensor. (Sometimes, by abuse of language, the same term is applied to the set of coefficients $g_{ij}$.) It may be a function of position, and often describes curved space. A simple example is Euclidean (flat) space expressed in curvilinear coordinates. For example, in polar coordinates: $ds^2 = dr^2 + r^2\,d\theta^2$. There is debate whether the Pythagorean theorem was discovered once, or many times in many places, and the date of first discovery is uncertain, as is the date of the first proof. Historians of Mesopotamian mathematics have concluded that the Pythagorean rule was in widespread use during the Old Babylonian period (20th to 16th centuries BC), over a thousand years before Pythagoras was born.[68][69][70][71] The history of the theorem can be divided into four parts: knowledge of Pythagorean triples, knowledge of the relationship among the sides of a right triangle, knowledge of the relationships among adjacent angles, and proofs of the theorem within some deductive system. Written c. 1800 BC, the Egyptian Middle Kingdom Berlin Papyrus 6619 includes a problem whose solution is the Pythagorean triple 6:8:10, but the problem does not mention a triangle. The Mesopotamian tablet Plimpton 322, written near Larsa also c. 1800 BC, contains many entries closely related to Pythagorean triples.[72] In India, the Baudhayana Shulba Sutra, the dates of which are given variously as between the 8th and 5th century BC,[73] contains a list of Pythagorean triples and a statement of the Pythagorean theorem, both in the special case of the isosceles right triangle and in the general case, as does the Apastamba Shulba Sutra (c. 600 BC).[a] Byzantine Neoplatonic philosopher and mathematician Proclus, writing in the fifth century AD, states two arithmetic rules, "one of them attributed to Plato, the other to Pythagoras",[76] for generating special Pythagorean triples.
The rule attributed to Pythagoras (c. 570 – c. 495 BC) starts from an odd number and produces a triple with leg and hypotenuse differing by one unit; the rule attributed to Plato (428/427 or 424/423 – 348/347 BC) starts from an even number and produces a triple with leg and hypotenuse differing by two units. According to Thomas L. Heath (1861–1940), no specific attribution of the theorem to Pythagoras exists in the surviving Greek literature from the five centuries after Pythagoras lived.[77] However, when authors such as Plutarch and Cicero attributed the theorem to Pythagoras, they did so in a way which suggests that the attribution was widely known and undoubted.[78][79] Classicist Kurt von Fritz wrote, "Whether this formula is rightly attributed to Pythagoras personally ... one can safely assume that it belongs to the very oldest period of Pythagorean mathematics."[35] Around 300 BC, in Euclid's Elements, the oldest extant axiomatic proof of the theorem is presented.[80] With contents known much earlier, but in surviving texts dating from roughly the 1st century BC, the Chinese text Zhoubi Suanjing (周髀算经), (The Arithmetical Classic of the Gnomon and the Circular Paths of Heaven) gives a reasoning for the Pythagorean theorem for the (3, 4, 5) triangle — in China it is called the "Gougu theorem" (勾股定理).[81][82] During the Han Dynasty (202 BC to 220 AD), Pythagorean triples appear in The Nine Chapters on the Mathematical Art,[83] together with a mention of right triangles.[84] Some believe the theorem arose first in China in the 11th century BC,[85] where it is alternatively known as the "Shang Gao theorem" (商高定理),[86] named after the Duke of Zhou's astronomer and mathematician, whose reasoning composed most of what was in the Zhoubi Suanjing.[87]
https://en.wikipedia.org/wiki/Pythagorean_theorem
This is a comprehensive list of volunteer computing projects, which are a type of distributed computing where volunteers donate computing time to specific causes. The donated computing power comes from idle CPUs and GPUs in personal computers, video game consoles,[1] and Android devices. Each project seeks to utilize the computing power of many internet-connected devices to solve problems and perform tedious, repetitive research in a very cost-effective manner.
https://en.wikipedia.org/wiki/List_of_volunteer_computing_projects
Mozilla Firefox, or simply Firefox, is a free and open-source[12] web browser developed by the Mozilla Foundation and its subsidiary, the Mozilla Corporation. It uses the Gecko rendering engine to display web pages, which implements current and anticipated web standards.[13] Firefox is available for Windows 10 or later versions of Windows, macOS, and Linux. Its unofficial ports are available for various Unix and Unix-like operating systems, including FreeBSD,[14] OpenBSD,[15] NetBSD,[16] and other operating systems, such as ReactOS. Firefox is also available for Android and iOS. However, as with all other iOS web browsers, the iOS version uses the WebKit layout engine instead of Gecko due to platform requirements. An optimized version is also available on the Amazon Fire TV, as one of the two main browsers available alongside Amazon's Silk Browser.[17] Firefox is the spiritual successor of Netscape Navigator, as the Mozilla community was created by Netscape in 1998, before its acquisition by AOL.[18] Firefox was created in 2002 under the codename "Phoenix" by members of the Mozilla community who desired a standalone browser rather than the Mozilla Application Suite bundle. During its beta phase, it proved to be popular with its testers and was praised for its speed, security, and add-ons compared to Microsoft's then-dominant Internet Explorer 6. It was released on November 9, 2004,[19] and challenged Internet Explorer's dominance with 60 million downloads within nine months.[20] In November 2017, Firefox began incorporating new technology under the code name "Quantum" to promote parallelism and a more intuitive user interface.[21] Firefox usage share grew to a peak of 32.21% in November 2009,[22] with Firefox 3.5 overtaking Internet Explorer 7, although not all versions of Internet Explorer as a whole;[23][24] its usage then declined in competition with Google Chrome.[22] As of February 2025[update], according to StatCounter, it had a 6.36% usage share on traditional PCs (i.e. as a desktop browser), making it the fourth-most popular PC web browser after Google Chrome (65%), Microsoft Edge (14%), and Safari (8.65%).[25] The project began as an experimental branch of the Mozilla project by Dave Hyatt, Joe Hewitt, and Blake Ross. They believed the commercial requirements of Netscape's sponsorship and developer-driven feature creep compromised the utility of the Mozilla browser.[26] To combat what they saw as the Mozilla Suite's software bloat, they created a standalone browser, with which they intended to replace the Mozilla Suite.[27] Version 0.1 was released on September 23, 2002.[28] On April 3, 2003, the Mozilla Organization announced that it planned to change its focus from the Mozilla Suite to Firefox and Thunderbird.[29] The Firefox project has undergone several name changes.[30] The nascent browser was originally named Phoenix, after the mythical bird that rose triumphantly from the ashes of its dead predecessor (in this case, from the "ashes" of Netscape Navigator, after it was sidelined by Microsoft Internet Explorer in the "First Browser War"). Phoenix was renamed in 2003 due to a trademark claim from Phoenix Technologies. The replacement name, Firebird, provoked an intense response from the Firebird database software project.[31][32] The Mozilla Foundation reassured them that the browser would always bear the name Mozilla Firebird to avoid confusion.
After further pressure, Mozilla Firebird became Mozilla Firefox on February 9, 2004.[33] The name Firefox was said to be derived from a nickname of the red panda,[34] which became the mascot for the newly named project.[35] For the abbreviation of Firefox, Mozilla prefers Fx or fx, although it is often abbreviated as FF[36] or Ff. The Firefox project went through many versions before version 1.0 and had already gained a great deal of acclaim from numerous media outlets, such as Forbes[37] and The Wall Street Journal.[38] Among Firefox's popular features were the integrated pop-up blocker, tabbed browsing, and an extension mechanism for adding functionality. Although these features had already been available for some time in other browsers such as the Mozilla Suite and Opera, Firefox was the first of these browsers to achieve large-scale adoption so quickly.[39] Firefox attracted attention as an alternative to Internet Explorer, which had come under fire for its alleged poor program design and insecurity: detractors cited IE's lack of support for certain Web standards, use of the potentially dangerous ActiveX component, and vulnerability to spyware and malware installation.[citation needed] Microsoft responded by releasing Windows XP Service Pack 2, which added several important security features to Internet Explorer 6.[40] Version 1.0 of Firefox was released on November 9, 2004.[41] This was followed by version 1.5 in November 2005, version 2.0 in October 2006, version 3.0 in June 2008, version 3.5 in June 2009, version 3.6 in January 2010, and version 4.0 in March 2011. From version 5 onwards, the development and release model changed into a "rapid" one; by the end of 2011 the stable release was version 9, and by the end of 2012 it reached version 17.[42] In 2016, Mozilla announced a project known as Quantum, which sought to improve Firefox's Gecko engine and other components to improve the browser's performance, modernize its architecture, and transition the browser to a multi-process model. These improvements came in the wake of decreasing market share to Google Chrome, as well as concerns that its performance was lapsing in comparison. Despite the improvements they brought, these changes made existing add-ons for Firefox incompatible with newer versions, in favor of a new extension system designed to be similar to that of Chrome and other recent browsers. Firefox 57, which was released in November 2017, was the first version to contain enhancements from Quantum, and has thus been named Firefox Quantum. A Mozilla executive stated that Quantum was the "biggest update" to the browser since version 1.0.[43][44][45] Unresponsive and crashing pages only affect other pages loaded within the same process. While Chrome uses separate processes for each loaded tab, Firefox distributes tabs over four processes by default (since Quantum), in order to balance memory consumption and performance.
The process count can be adjusted; more processes improve performance at the cost of memory, making higher counts suitable for computers with larger RAM capacity.[46][47] On May 3, 2019, the expiry of an intermediate signing certificate on Mozilla servers caused Firefox to automatically disable and lock all browser extensions (add-ons).[48][49] Mozilla began the roll-out of a fix shortly thereafter, using their Mozilla Studies component.[48][49] Support for Adobe Flash was dropped on January 6, 2021, with the release of Firefox 85.[50] On June 1, 2021, Firefox's "Proton" redesign was offered through its stable release channel[51] after being made available in the beta builds.[52] While users were initially allowed to revert to the old design through about:config, the corresponding key-value pairs reportedly stopped working in later builds, resulting in criticism.[53] The criticism included accessibility concerns,[54][55] despite Mozilla's claim to "continue to work with the accessibility community",[56] which had not been resolved as of October 2024[update].[57] On January 13, 2022, an issue with Firefox's HTTP/3 implementation resulted in a widespread outage for several hours.[58] On September 26, 2023, Firefox 118.0 introduced on-device translation of web page content.[59] On January 23, 2024, along with the release of Firefox 122.0, Mozilla introduced an official APT repository for Debian-based Linux distributions.[60] Features of the desktop edition include tabbed browsing, full-screen mode, spell checking, incremental search, smart bookmarks, bookmarking and downloading through drag and drop,[61][62] a download manager, user profile management,[63] private browsing, bookmark tags, bookmark exporting,[64] offline mode,[65] a screenshot tool, web development tools, a "page info" feature which shows a list of page metadata and multimedia items,[66] a configuration menu at about:config for power users, and location-aware browsing (also known as "geolocation") based on a Google service.[67] Firefox has an integrated search system which uses Google by default in most markets.[68][69] DNS over HTTPS is another feature whose default behaviour is determined geographically.[70] Firefox provides an environment for web developers in which they can use built-in tools, such as the Error Console or the DOM Inspector, and extensions, such as Firebug; more recently there has been an integration feature with Pocket. Firefox Hello was an implementation of WebRTC, added in October 2014, which allowed users of Firefox and other compatible systems to have a video call, with the extra feature of screen and file sharing by sending a link to each other. Firefox Hello was scheduled to be removed in September 2016.[71] Former features include a File Transfer Protocol (FTP) client for browsing file servers, the ability to block images from individual domains (until version 72),[72] a 3D page inspector (versions 11 to 46), tab grouping (until version 44), and the ability to add customized extra toolbars (until version 28).[73][74][75] Functions can be added through add-ons created by third-party developers. Add-ons are primarily coded in HTML, CSS, and JavaScript, using an API known as WebExtensions, which is designed to be compatible with the Google Chrome and Microsoft Edge extension systems.[76] Firefox previously supported add-ons using the XUL and XPCOM APIs, which allowed them to directly access and manipulate much of the browser's internal functionality.
As they were not made compatible with the multi-process architecture, XUL add-ons have been deemed Legacy add-ons and are no longer supported on Firefox 57 "Quantum" and newer.[77][78] Mozilla has occasionally installed extensions for users without their permission. This happened in 2017 when an extension designed to promote the show Mr. Robot was silently added in an update to Firefox.[79][80] Firefox can have themes added to it, which users can create or download from third parties to change the appearance of the browser.[81][82] Firefox also provides dark, light, and system themes. In 2013, Firefox for Android added a guest session mode, which wiped browsing data such as tabs, cookies, and history at the end of each guest session. Guest session data was kept even when restarting the browser or device, and deleted only upon a manual exit. The feature was removed in 2019, purportedly to "streamline the experience".[83][84] Firefox implements many web standards, including HTML4 (almost full HTML5), XML, XHTML, MathML, SVG 1.1 (full),[85] SVG 2 (partial),[86][87] CSS (with extensions),[88] ECMAScript (JavaScript), DOM, XSLT, XPath, and APNG (Animated PNG) images with alpha transparency.[89] Firefox also implements standards proposals created by the WHATWG such as client-side storage,[90][91] and the canvas element.[92] These standards are implemented through the Gecko layout engine and the SpiderMonkey JavaScript engine. Firefox 4 was the first release to introduce significant HTML5 and CSS3 support. Firefox has passed the Acid2 standards-compliance test since version 3.0.[93] Mozilla had originally stated that they did not intend for Firefox to pass the Acid3 test fully because they believed that the SVG fonts part of the test had become outdated and irrelevant, due to WOFF being agreed upon as a standard by all major browser makers.[94] Because the SVG font tests were removed from the Acid3 test in September 2011, Firefox 4 and greater scored 100/100.[95][96] Firefox also implements "Safe Browsing",[97] a proprietary protocol[98] from Google used to exchange data related to phishing and malware protection. Firefox supports the playback of video content protected by HTML5 Encrypted Media Extensions (EME), since version 38. For security and privacy reasons, EME is implemented within a wrapper of open-source code that allows execution of a proprietary DRM module by Adobe Systems: the Adobe Primetime Content Decryption Module (CDM). The CDM runs within a "sandbox" environment to limit its access to the system and provide it a randomized device ID to prevent services from uniquely identifying the device for tracking purposes. The DRM module, once it has been downloaded, is enabled and disabled in the same manner as other plug-ins. Since version 47,[99] "Google's Widevine CDM on Windows and Mac OS X so streaming services like Amazon Video can switch from Silverlight to encrypted HTML5 video" is also supported. Mozilla justified its partnership with Adobe and Google by stating: Firefox downloads and enables the Adobe Primetime and Google Widevine CDMs by default to give users a smooth experience on sites that require DRM. Each CDM runs in a separate container called a sandbox and you will be notified when a CDM is in use.
You can also disable each CDM and opt out of future updates. Mozilla added that this is "an important step on Mozilla's roadmap to remove NPAPI plugin support."[101] Upon the introduction of EME support, builds of Firefox on Windows were also introduced that exclude support for EME.[102][103] The Free Software Foundation and Cory Doctorow condemned Mozilla's decision to support EME.[104] Firefox has been criticized by web developers for being slow to adopt web standards and to fix decades-old bugs. The lack of support for view transitions, gradients, and other CSS features is also criticized.[105] Firefox scores lower than rival browsers on both the HTML5 Test and the JetStream 2 benchmark.[106][107] Other reported issues include high battery usage and heavy resource consumption,[108] the removal of tab groups, the use of telemetry, ads in the search bar, a dated download system, the lack of progressive web app (PWA) support,[109] and the inability to share text fragments.[110][111] From its inception, Firefox was positioned as a security-focused browser. At the time, Internet Explorer, the dominant browser, was facing a security crisis. Multiple vulnerabilities had been found, and malware like Download.Ject could be installed simply by visiting a compromised website. The situation was so bad that the US Government issued a warning against using Internet Explorer.[112] Firefox, being less integrated with the operating system, was considered a safer alternative since it was less likely to have issues that could completely compromise a computer. This led to a significant increase in Firefox's popularity during the early 2000s as a more secure alternative.[113][114] Moreover, Firefox was considered to have fewer actively exploitable security vulnerabilities compared to its competitors. In 2006, The Washington Post reported that exploit code for known security vulnerabilities in Internet Explorer was available for 284 days, compared to only nine days for Firefox, before the problem was fixed.[115] A Symantec study around the same period showed that even though Firefox had a higher number of vulnerabilities, on average vulnerabilities were fixed faster in Firefox than in other browsers during that period.[116] During this period, Firefox used a monolithic architecture, like most browsers at the time. This meant all browser components ran in a single process with access to all system resources. This setup had multiple security issues. If a web page used too many resources, the entire Firefox process would hang or crash, affecting all tabs. Additionally, any exploit could easily access system resources, including user files. Between 2008 and 2012, most browsers shifted to a multiprocess architecture, isolating high-risk processes like rendering, media, GPU, and networking.[117] However, Firefox was slower to adopt this change. It was not until 2015 that Firefox started its Electrolysis (e10s) project to implement sandboxing across multiple components.
This rewrite relied on interprocess communication using Chromium's interprocess communication library and placed various components, including the rendering component, in their own sandboxes.[118] Firefox released this rewrite into beta in August 2016, noting a 10–20% increase in memory usage, which was lower than Chrome's at the time.[119] However, the rewrite caused issues with their legacy extension API, which was not designed to work cross-process and required shim code to function correctly.[119] After over a year in beta, the rewrite was enabled by default for all users of Firefox in November 2017.[120] In 2012, Mozilla launched a new project called Servo to write a completely new and experimental browser engine in Rust, using memory-safe techniques.[121] In 2018, Mozilla opted to integrate parts of the Servo project into the Gecko engine in a project codenamed Quantum.[122] The project completely overhauled Firefox's page rendering code, resulting in performance and stability gains while also improving the security of existing components.[123] Additionally, the older incompatible extension API was removed in favour of a WebExtension API that more closely resembled Google Chrome's extension system. This broke compatibility with older extensions but resulted in fewer vulnerabilities and a much more maintainable extension system.[124] While the Servo project was intended to replace more parts of the Gecko engine,[125] this plan never came to fruition. In 2020, Mozilla laid off all developers on the Servo team, transferring ownership of the project to the Linux Foundation.[126] When Firefox was initially released, it used a custom script permission policy in which scripts signed by the page could gain access to higher-privilege actions, such as the ability to set a user's preferences. However, this model was not widely used and was later discontinued by Firefox. Modern Firefox instead follows the standard same-origin policy permission model used by most modern browsers, which disallows scripts from accessing any privileged data, including data about other websites.[127] It uses TLS to protect communications with web servers using strong cryptography when using the HTTPS protocol.[128] The freely available HTTPS Everywhere add-on enforces HTTPS, even if a regular HTTP URL is entered. Firefox now supports HTTP/2.[129] In February 2013, plans were announced for Firefox 22 to disable third-party cookies by default. However, the introduction of the feature was then delayed so Mozilla developers could "collect and analyze data on the effect of blocking some third-party cookies." Mozilla also collaborated with Stanford University's "Cookie Clearinghouse" project to develop a blacklist and whitelist of sites that will be used in the filter.[130][131] Version 23, released in August 2013, followed the lead of its competitors by blocking iframe, stylesheet, and script resources served from non-HTTPS servers embedded on HTTPS pages by default. Additionally, JavaScript could also no longer be disabled through Firefox's preferences, and JavaScript was automatically re-enabled for users who upgraded to 23 or higher with it disabled. The change was made because JavaScript was being used across a majority of websites on the web and disabling it could have untoward repercussions for inexperienced users who are unaware of its impact. Firefox also cited the fact that extensions like NoScript, which can disable JavaScript in a more controlled fashion, were widely available.
The following release added the ability to disable JavaScript through the developer tools for testing purposes.[132][133][134] Beginning with Firefox 48, all extensions must be signed by Mozilla to be used in release and beta versions of Firefox. Firefox 43 blocked unsigned extensions but allowed enforcement of extension signing to be disabled. All extensions must be submitted to Mozilla Add-ons and be subject to code analysis in order to be signed, although extensions do not have to be listed on the service to be signed.[135][136] On May 2, 2019, Mozilla announced that it would be strengthening the signature enforcement with methods that included the retroactive disabling of old extensions now deemed to be insecure.[137] Since version 60, Firefox includes the option to use DNS over HTTPS (DoH), which causes DNS lookup requests to be sent encrypted over the HTTPS protocol.[138][139] To use this feature the user must set certain preferences beginning with "network.trr" (Trusted Recursive Resolver) in about:config: if network.trr.mode is 0, DoH is disabled; 1 activates DoH in addition to unencrypted DNS; 2 causes DoH to be used before unencrypted DNS; to use only DoH, the value must be 3. By setting network.trr.uri to the URL, special Cloudflare servers will be activated. Mozilla has a privacy agreement with this server host that restricts their collection of information about incoming DNS requests.[140] On May 21, 2019, Firefox was updated to include the ability to block scripts that used a computer's CPU to mine cryptocurrency without a user's permission, in Firefox version 67.0. The update also allowed users to block known fingerprinting scripts that track their activity across the web; however, it does not resist fingerprinting on its own.[141] In March 2021, Firefox launched SmartBlock in version 87 to offer protection against cross-site tracking, without breaking the websites users visit.[142] Also known as state partitioning or "total cookie protection", SmartBlock works via a feature in the browser that isolates data from each site visited by the user to ensure that cross-site scripting is very difficult if not impossible. The feature also isolates local storage, service workers and other common ways for sites to store data.[143] In 2025, Mozilla introduced terms of use for Firefox, as a means to give more transparency over users' rights and permissions for the browser outside of the Mozilla Public License. The company received criticism centering around a clause that gave Mozilla a "nonexclusive, royalty-free, worldwide license" to use any information that was uploaded or inputted into the browser.
The new terms were perceived to reduce privacy, and were seen as connected to AI, while Mozilla denied that these were the motives.[144] Criticism centered on fears that the license grant covered all data inputted, while Mozilla responded saying that the change "does NOT give us ownership of your data".[145][146] In an attempt to respond to the fallout, Mozilla said that many of the wording changes were meant to ease readability, increase transparency, formalize existing implicit agreements, and describe the circumstances of a free browser, adding that the AI features are covered by a separate agreement.[146][147] Days later, Mozilla changed the wording of their privacy FAQ,[148] removing a pledge to never "sell your personal data" and revising another section denying allegations that it sold user data, saying that it gathers some information from hideable advertisements, as well as chatbot metadata when the chatbot is interacted with, and that the legal definition of "sell" was vague in some jurisdictions.[149][150] Firefox is a widely localized web browser. Mozilla uses the in-house Pontoon localization platform.[151] The first official release in November 2004 was available in 24 different languages and for 28 locales.[152] In 2019, Mozilla released Project Fluent, a localization system that allows translators to be more flexible with their translations than being constrained to one-to-one translation of strings.[153][154] As of April 2025,[update] the supported versions of Firefox are available in 97 locales (88 languages).[9] There are desktop versions of Firefox for Microsoft Windows, macOS, and Linux, while Firefox for Android is available for Android (formerly Firefox for mobile, it also ran on Maemo, MeeGo and Firefox OS) and Firefox for iOS is available for iOS. Smartphones that support Linux but not Android or iOS apps can also run Firefox in its desktop version, for example using postmarketOS, Mobian or Ubuntu Touch.[155] Firefox source code may be compiled for various operating systems; however, officially distributed binaries are provided for the following: Firefox 1.0 was released for Windows 95, as well as Windows NT 4.0 or later. Some users reported the 1.x builds were operable (but not installable) on Windows NT 3.51.[184] The version 42.0 release includes the first x64 build. It required Windows 7 or Server 2008 R2.[185] Starting from version 49.0, Firefox for Windows requires and uses the SSE2 instruction set. In September 2013, Mozilla released a Metro-style version of Firefox, optimized for touchscreen use, on the "Aurora" release channel. However, on March 14, 2014, Mozilla cancelled the project because of a lack of user adoption.[186][187][188] In March 2017, Firefox 52 ESR, the last version of the browser for Windows XP and Windows Vista, was released.[189] Support for Firefox 52 ESR ended in June 2018.[190] Traditionally, installing the Windows version of Firefox entails visiting the Firefox website and downloading an installer package, depending on the desired localization and system architecture. In November 2021, Mozilla made Firefox available on Microsoft Store.
The Store-distributed package does not interfere with the traditional installation.[191][192] The last version of Firefox for Windows 7 and 8 is Firefox 115 ESR, which was released in July 2023.[193]Itsend-of-lifewas initially planned for October 2024;[194]however, in July 2024, a Mozilla employee announced in a comment on Reddit that the company was considering extending support beyond the initial date, with the duration of that extension yet to be defined.[citation needed]In September 2024, the extension was announced for an initial period of six months.[195]A note on the release calendar page states that Mozilla would re-evaluate the situation in early 2025 to determine whether another extension was needed and decide on the 115 ESR end-of-life at that point.[196]This extension was renewed one more time, on February 18, 2025, for 6 additional months, bringing the end-of-life date in line with the 128 ESR branch, in September 2025.[197] The first official release (Firefox version 1.0) supportedmacOS(then called Mac OS X) on thePowerPCarchitecture. Mac OS X builds for theIA-32architecture became available via auniversal binarywhich debuted with Firefox 1.5.0.2 in 2006. Starting with version 4.0, Firefox was released for the x64 architecture to which macOS had migrated.[198]Version 4.0 also dropped support for the PowerPC architecture, although other projects continued development of a PowerPC version of Firefox.[199] Firefox was originally released for Mac OS X 10.0 and higher.[200]The minimum OS then increased to Mac OS X 10.2 in Firefox 1.5 and 10.4 in Firefox 3.[201][202]Firefox 4 dropped support for Mac OS X 10.4 and PowerPC Macs, and Firefox 17 dropped support for Mac OS X 10.5 entirely.[203][204]The system requirements were left unchanged until 2016, when Firefox 49 dropped support for Mac OS X 10.6–10.8.[205][206]Mozilla ended support for OS X 10.9–10.11 in Firefox 79, with those users being supported on the Firefox 78 ESR branch until November 2021.[207][208][209]Most recently, Mozilla ended support formacOS 10.12–10.14in Firefox 116, with those users being supported on the Firefox 115 ESR branch until late 2024. In September 2024, however, an extension was announced for the 115 ESR branch for an initial period of six months.[195]This extension has been renewed once more, pushing the end-of-life date to September 2025.[197] Since its inception, Firefox for Linux supported the 32-bit memory architecture of the IA-32 instruction set. 64-bit builds were introduced in the 4.0 release.[198]The 46.0 release replacedGTK2.18 with 3.4 as a system requirement on Linux and other systems runningX.Org.[210]Starting with 53.0, the 32-bit builds require theSSE2instruction set.[211] Firefox for mobile, code-named "Fennec", was first released forMaemoin January 2010 with version 1.0[212]and forAndroidin March 2011 with version 4.0.[213]Support for Maemo was discontinued after version 7, released in September 2011.[214]Fennec had a user interface optimized for phones and tablets.
It included the Awesome Bar, tabbed browsing, add-on support, a password manager, location-aware browsing, and the ability to synchronize with the user's other devices with Mozilla Firefox usingFirefox Sync.[215]At the end of its existence, it had a market share of 0.5% on Android.[216] In August 2020, Mozilla launched a new version of its Firefox for Android app, publicly named Firefox Daylight[217]and codenamedFenix,[218]after a little over a year of testing.[219]It boasted higher speeds with its newGeckoViewengine, described as "the only independentweb engine browseravailable onAndroid". It also added Enhanced Tracking Protection 2.0, a feature that blocks many knowntrackerson the Internet.[220]It also added the ability to place the address bar on the bottom, and a new Collections feature.[217]However, it was criticized for only having nineAdd-onsat launch, and missing certain features.[221][222][223]In response, Mozilla stated that it would allow more Add-ons over time.[224] Mozilla initially refused to port Firefox to iOS, due to the restrictions Apple imposed on third-party iOS browsers. Instead of releasing a full version of the Firefox browser, Mozilla released Firefox Home, a companion app for the iPhone and iPod Touch based on theFirefox Synctechnology, which allowed users to access their Firefox browsing history, bookmarks, and recent tabs. It also included Firefox's "Awesomebar" location bar. Firefox Home was not a web browser; it launched web pages either in an embedded viewer for that single page or by opening the page in the Safari app.[233][234]Mozilla pulled Firefox Home from theApp Storein September 2012, stating it would focus its resources on other projects. The company subsequently released thesource codeof Firefox Home's underlying synchronization software.[235] In April 2013, then-Mozilla CEOGary Kovacssaid that Firefox would not come to iOS if Apple required the use of theWebKitlayout engine to do so. One reason given by Mozilla was that prior to iOS 8, Apple had supplied third-party browsers with an inferior version of their JavaScript engine which hobbled their performance, making it impossible to match Safari's JavaScript performance on the iOS platform.[236]Apple later opened their "Nitro" JavaScript engine to third-party browsers.[237]In 2015, Mozilla announced it was moving forward with Firefox for iOS, with a preview release made available in New Zealand in September of that year.[238][239][240]It was fully released that November.[241]It is the first Firefox-branded browser not to use theGeckolayout engineused in Firefox fordesktopandmobile. Apple's policies require all iOS apps that browse the web to use the built-inWebKitrendering framework and WebKit JavaScript, so using Gecko is not possible.[242][243]UnlikeFirefox on Android, Firefox for iOS does not support browser add-ons. In November 2016, Mozilla released a new iOS app titledFirefox Focus, a private web browser.[244] Firefox Reality was released foraugmented realityandvirtual realityheadsets in September 2018.[245]It supports traditional web browsing through 2D windows and immersive VR pages throughWebVR. Firefox Reality is available onHTC Vive,Oculus,Google DaydreamandMicrosoft Hololensheadsets.
In February 2022, Mozilla announced thatIgaliahad taken over stewardship of this project under the new name of Wolvic.[246] Firefox has also been ported toFreeBSD,[247]NetBSD,[248]OpenBSD,[249]OpenIndiana,[250]OS/2,[251]ArcaOS,[252]SkyOS,RISC OS[253]andBeOS/Haiku,[254][255][256][257]and an unofficial rebranded version calledTimberwolfhas been available forAmigaOS 4.[258] The Firefox port for OpenBSD has been maintained by Landry Breuil since 2010. Firefox is regularly built for the current branch of the operating system; the latest versions are packaged for each release and remain frozen until the next release. In 2017, Landry began hosting packages of newer Firefox versions for OpenBSD releases from 6.0 onwards, making them available to installations without the ports system.[259] TheSolaris10 port of Firefox (includingOpenSolaris) was maintained by the Oracle Solaris Desktop Beijing Team,[260][261]until March 2018, when the team was disbanded. There was also an unofficial port ofFirefox 3.6.x toIBM AIX[262][263]and of v1.7.x toUnixWare.[264] In March 2011, Mozilla presented plans to switch to therapid release model, a faster 16-weekdevelopment cycle, similar toGoogle Chrome.Ars Technicanoted that this new cycle entailed "significant technical and operational challenges" for Mozilla (notably preserving third-partyadd-oncompatibility), but that it would help accelerate Firefox's adoption of new web standards, features, and performance improvements.[265][266]This plan was implemented in April 2011.[267]The release process was split into four "channels", with major releases trickling down to the next channel every six to eight weeks. For example, the Nightly channel would feature a preliminary unstable version of Firefox 6, which would move to the experimental "Aurora" channel after preliminary testing, then to the more stable "beta" channel, before finally reaching the public release channel, with each stage taking around six weeks.[268][265][269]For corporations, Mozilla introduced an Extended Support Release (ESR) channel, with new versions released every 30 weeks (and supported for 12 more weeks after a new ESR version is released), though Mozilla warned that it would be less secure than the release channel, since security patches would only bebackportedfor high-impact vulnerabilities.[270][271] In 2017, Mozilla abandoned the Aurora channel, which saw low uptake, andrebasedFirefox Developer Edition onto the beta channel.[272]Mozilla usesA/B testing[273]and a staged rollout mechanism for the release channel, where updates are first presented to a small fraction of users, with Mozilla monitoring its telemetry for increased crashes or other issues before the update is made available to all users.[268]In 2020, Firefox moved to a four-week release cycle, to catch up with Chrome in support for new web features.[274][275]Chrome switched to a four-week cycle a year later.[276] Firefoxsource codeisfree software, with most of it being released under theMozilla Public License(MPL) version 2.0.[11]This license permits anyone to view, modify, or redistribute the source code. As a result, several publicly released applications have been built from it, including Firefox's predecessorNetscape,[277]the customizablePale Moon, and the privacy-focusedTor Browser.[278] In the past, Firefox was licensed solely under the MPL, then version 1.1,[279]which theFree Software Foundationcriticized for beingweak copyleft, as the license permitted, in limited ways, proprietaryderivative works.
Additionally, code only licensed under MPL 1.1 could not legally be linked with code under theGPL.[280][281]To address these concerns, Mozilla re-licensed most of Firefox under thetri-licensescheme of MPL 1.1, GPL 2.0, orLGPL2.1. Since the re-licensing, developers were free to choose the license under which they received most of the code, to suit their intended use: GPL or LGPL linking and derivative works when one of those licenses is chosen, or MPL use (including the possibility of proprietary derivative works) if they chose the MPL.[279]However, on January 3, 2012, Mozilla released the GPL-compatible MPL 2.0,[282]and with the release of Firefox 13 on June 5, 2012, Mozilla used it to replace the tri-licensing scheme.[283] The name "Mozilla Firefox" is aregistered trademarkof Mozilla; along with the official Firefox logo, it may only be used under certain terms and conditions. Anyone may redistribute the official binaries in unmodified form and use the Firefox name and branding for such distribution, but restrictions are placed on distributions which modify the underlying source code.[284]The name "Firefox" derives from a nickname of thered panda.[35]Mozilla celebrated Red Pandas.[285] Mozilla has placed the Firefox logo files under open-source licenses,[286][287]but its trademark guidelines do not allow displaying altered[288]or similar logos[289]in contexts where trademark law applies.[290] There has been some controversy over the Mozilla Foundation's intentions in stopping certain open-source distributions from using the "Firefox" trademark.[12]Open-source browsers "enable greater choice and innovation in the market rather than aiming for mass-market domination."[291]Mozilla Foundation ChairpersonMitchell Bakerexplained in an interview in 2007 that distributions could freely use the Firefox trademark if they did not modify source code, and that the Mozilla Foundation's only concern was with users getting a consistent experience when they used "Firefox".[292] To allow distributions of the codewithoutusing the official branding, the Firefoxbuild systemcontains a "branding switch". This switch, often used for alphas ("Auroras") of future Firefox versions, allows the code to be compiled without the official logo and name and can allow a derivative work unencumbered by restrictions on the Firefox trademark to be produced. In the unbranded build, the trademarked logo and name are replaced with a freely distributable generic globe logo and the name of the release series from which the modified version was derived.[citation needed] Distributing modified versions of Firefox under the "Firefox" name required explicit approval from Mozilla for the changes made to the underlying code, and required the use ofallof the official branding. For example, it was not permissible to use the name "Firefox" without also using the official logo. 
When theDebianproject decided to stop using the official Firefox logo in 2006 (because Mozilla's copyright restrictions at the time were incompatible withDebian's guidelines), they were told by a representative of the Mozilla Foundation that this was not acceptable and was asked either to comply with the published trademark guidelines or cease using the "Firefox" name in their distribution.[293]Debian switched to branding their modified version of Firefox "Iceweasel" (but in 2016 switched back to Firefox), along with other Mozilla software.GNU IceCatis another derived version of Firefox distributed by theGNU Project, which maintains its separate branding.[294] The Firefox icon is a trademark used to designate the official Mozilla build of the Firefox software and builds of official distribution partners.[295]For this reason, software distributors who distribute modified versions of Firefox do not use the icon.[290] Early Firebird and Phoenix releases of Firefox were considered to have reasonable visual designs but fell short when compared to many other professional software packages. In October 2003, professional interface designer Steven Garrity authored an article covering everything he considered to be wrong with Mozilla's visual identity.[296] Shortly afterwards, the Mozilla Foundation invited Garrity to head up the new visual identity team. The release of Firefox 0.8 in February 2004 saw the introduction of the new branding efforts. Included were new icon designs by silverorange, a group of web developers with a long-standing relationship with Mozilla. The final renderings are byJon Hicks, who had worked onCamino.[297][298]The logo was later revised and updated, fixing several flaws found when it was enlarged.[299]The animal shown in the logo is a stylized fox, although "firefox" is usually a common name for thered panda. The panda, according to Hicks, "didn't really conjure up the right imagery" and was not widely known.[298] In June 2019, Mozilla unveiled a revised Firefox logo, which was officially implemented on version 70. The new logo is part of an effort to build a brand system around Firefox and its complementary apps and services, which are now being promoted as a suite under the Firefox brand. Firefox was adopted rapidly, with 100 million downloads in its first year of availability.[302]This was followed by a series of aggressive marketing campaigns starting in 2004 with a series of eventsBlake Rossand Asa Dotzler called "marketing weeks".[303] Firefox continued to heavily market itself by releasing a marketing portal dubbed "Spread Firefox" (SFX) on September 12, 2004.[304]It debuted along with the Firefox Preview Release, creating a centralized space for the discussion of various marketing techniques. The release of theirmanifestostated that "the Mozilla project is a global community of people who believe that openness, innovation and opportunity are key to the continued health of the Internet."[291]A two-page ad in the edition of December 16 ofThe New York Times, placed by Mozilla Foundation in coordination with Spread Firefox, featured the names of the thousands of people worldwide who contributed to the Mozilla Foundation's fundraising campaign to support the launch of the Firefox 1.0 web browser.[305]SFX portal enhanced the "Get Firefox" button program, giving users "referrer points" as an incentive. The site lists the top 250 referrers. From time to time, the SFX team or SFX members launch marketing events organized at the Spread Firefox website. 
As a part of the Spread Firefox campaign, there was an attempt to break the world download record with the release of Firefox 3.[306]This resulted in an official certifiedGuinness world record, with over eight million downloads.[307]In February 2011, Mozilla announced that it would be retiring Spread Firefox (SFX). Three months later, in May 2011, Mozilla officially closed Spread Firefox. Mozilla wrote that "there are currently plans to create a new iteration of this website [Spread Firefox] at a later date."[308] In celebration of the third anniversary of the founding of theMozilla Foundation, the "World Firefox Day" campaign was established on July 15, 2006,[309][310]and ran until September 15, 2006.[311]Participants registered themselves and a friend on the website for nomination to have their names displayed on the Firefox Friends Wall, a digital wall that was displayed at the headquarters of the Mozilla Foundation. The Firefox community has also engaged in the promotion of their web browser. In 2006, some of Firefox's contributors fromOregon State Universitymade acrop circleof the Firefox logo in anoatfield nearAmity, Oregon, near the intersection of Lafayette Highway and Walnut Hill Road.[312]After Firefox reached 500 million downloads on February 21, 2008, the Firefox community celebrated by visitingFreericeto earn 500 million grains of rice.[313] Other initiatives included Live Chat – a service Mozilla launched in 2007 that allowed users to seek technical support from volunteers.[314]The service was later retired.[315] To promote the launch of Firefox Quantum in November 2017, Mozilla partnered withReggie Wattsto produce a series of TV ads and social media content.[316] In December 2005,Internet Weekran an article in which many readers reported high memory usage in Firefox 1.5.[317]Mozilla developers said that the higher memory use of Firefox 1.5 was at least partially due to the new fast backwards-and-forwards (FastBack) feature.[318]Other known causes of memory problems were malfunctioning extensions such asGoogle Toolbarand some older versions ofAdBlock,[319]or plug-ins, such as older versions of Adobe Acrobat Reader.[320]WhenPC Magazinein 2006 compared memory usage of Firefox 2,Opera 9, andInternet Explorer 7, they found that Firefox used approximately as much memory as each of the other two browsers.[321] In 2006,Softpedianoted that Firefox 1.5 took longer to start up than other browsers,[322]which was confirmed by furtherspeed tests.[323] Internet Explorer 6 launched more swiftly than Firefox 1.5 onWindows XPsince many of its components were built into the OS and loaded during system startup. 
As a workaround for the issue, a preloader application was created that loaded components of Firefox on startup, similar to Internet Explorer.[324]AWindows Vistafeature calledSuperFetchperforms a similar task of preloading Firefox if it is used often enough.[citation needed] Tests performed byPC Worldand Zimbra in 2006 indicated that Firefox 2 used less memory than Internet Explorer 7.[325][326]Firefox 3 used less memory than Internet Explorer 7, Opera 9.50 Beta,Safari3.1 Beta, and Firefox 2 in tests performed by Mozilla, CyberNet, and The Browser World.[327][328][329]In mid-2009, BetaNews benchmarked Firefox 3.5 and declared that it performed "nearly ten times better on XP than Microsoft Internet Explorer 7".[330] In January 2010, Lifehacker compared the performance of Firefox 3.5, Firefox 3.6, Google Chrome 4 (stable and Dev versions), Safari 4, and Opera (10.1 stable and 10.5 pre-alpha versions). Lifehacker timed how long browsers took to start and reach a page (both right after boot-up and after running at least once already), timed how long browsers took to load nine tabs at once, tested JavaScript speeds using Mozilla's Dromaeo online suite (which implements Apple'sSunSpiderand Google's V8 tests) and measured memory usage using Windows 7's process manager. They concluded that Firefox 3.5 and 3.6 were the fifth- and sixth-fastest browsers, respectively, on startup, 3.5 was third- and 3.6 was sixth-fastest to load nine tabs at once, 3.5 was sixth- and 3.6 was fifth-fastest on the JavaScript tests. They also concluded that Firefox 3.6 was the most efficient with memory usage followed by Firefox 3.5.[331] In February 2012,Tom's Hardwareperformance tested Chrome 17, Firefox 10,Internet Explorer 9, Opera 11.61, and Safari 5.1.2 on Windows 7.Tom's Hardwaresummarized their tests into four categories: Performance, Efficiency, Reliability, and Conformance. In the performance category they testedHTML5,Java,JavaScript,DOM,CSS 3,Flash,Silverlight, andWebGL(WebGL 2is current as of version 51; and Java and Silverlight stop working as of version 52)—they also tested startup time and page load time. The performance tests showed that Firefox was either "acceptable" or "strong" in most categories, winning three categories (HTML5, HTML5hardware acceleration, and Java) only finishing "weak" in CSS performance. In the efficiency tests,Tom's Hardwaretested memory usage and management. With this category, it determined that Firefox was only "acceptable" at performing light memory usage, while it was "strong" at performing heavy memory usage. In the reliability category, Firefox performed a "strong" amount of proper page loads. For the final category, conformance, it was determined that Firefox had "strong" conformance for JavaScript and HTML5. So in conclusion,Tom's Hardwaredetermined that Firefox was the best browser for Windows 7 OS, but that it only narrowly beat Google Chrome.[332] In June 2013,Tom's Hardwareagain performance tested Firefox 22, Chrome 27, Opera 12, andInternet Explorer 10. They found that Firefox slightly edged out the other browsers in their "performance" index, which examined wait times, JavaScript execution speed, HTML5/CSS3 rendering, and hardware acceleration performance. 
Firefox also scored the highest on the "non-performance" index, which measured memory efficiency, reliability, security, and standards conformance, finishing ahead of Chrome, the runner-up.Tom's Hardwareconcluded by declaring Firefox the "sound" winner of the performance benchmarks.[333] In January 2014, a benchmark testing the memory usage of Firefox 29, Google Chrome 34, andInternet Explorer 11indicated that Firefox used the least memory when a substantial number of tabs were open.[334] In benchmark testing in early 2015 on a "high-end" Windows machine, comparingMicrosoft Edge [Legacy], Internet Explorer, Firefox, Chrome, and Opera, Firefox achieved the highest score on three of the seven tests. Four different JavaScript performance tests gave conflicting results. Firefox surpassed all other browsers on thePeacekeeper benchmark, but was behind the Microsoft products when tested with SunSpider. Measured with Mozilla's Kraken, it came second place to Chrome, while on Google'sOctanechallenge it took third behind Chrome and Opera. Firefox took the lead with WebXPRT, which runs several typical HTML5 and JavaScript tasks. Firefox, Chrome, and Opera all achieved the highest possible score on the Oort Online test, measuring WebGL rendering speed (WebGL 2 is now current). In terms of HTML5 compatibility testing, Firefox was ranked in the middle of the group.[335] A similar set of benchmark tests in 2016 showed Firefox's JavaScript performance on Kraken and the newerJetstreamtests trailing slightly behind all other tested browsers except Internet Explorer (IE), which performed relatively poorly. On Octane, Firefox came ahead of IE and Safari, but again slightly behind the rest, includingVivaldiand Microsoft Edge [Legacy]. Edge [Legacy] took overall first place on the Jetstream and Octane benchmarks.[336] As of the adoption of Firefox 57 and Mozilla'sQuantum projectentering production browsers in November 2017, Firefox was tested to be faster than Chrome in independent JavaScript tests, and demonstrated to use less memory with many browser tabs opened.[337][338]TechRadarrated it as the fastest web browser in a May 2019 report.[339] Downloads have continued at an increasing rate since Firefox 1.0 was released, and as of 31 July 2009[update]Firefox had already been downloaded over one billion times.[342]This number does not include downloads using software updates or those from third-party websites.[343]They do not represent a user count, as one download may be installed on many machines, one person may download the software multiple times, or the software may be obtained from a third-party.[citation needed] In July 2010,IBMasked all employees (about 400,000) to use Firefox as their default browser.[344] Firefox was the second-most used web browser until November 2011, when Google Chrome surpassed it.[345]According to Mozilla, Firefox had more than 450 million users as of October 2012[update].[346][347] In October 2024, Firefox was the fourth-most widely used desktop browser, and it was the fourth-most popular with 2.95% of worldwideusage share of web browsersacross all platforms.[348] According to the Firefox Public Data report by Mozilla, the active monthly count of Desktop clients has decreased from around 310 million in 2017 to 200 million in 2023.[350]From Oct 2020, the desktop market share of Firefox started to decline in countries where it used to be the most popular. In Eritrea, it dropped from 50% in Oct 2020 to 9.32% in Sept 2021. 
In Cuba, it dropped from 54.36% in Sept 2020 to 38.42% in Sept 2021.[351][352] The UK[353]and US[354]governments both follow the 2% rule, which states that only browsers with more than 2% market share among visitors of their websites will be supported. There are concerns that support for Firefox could be dropped because, as of December 29, 2023, its market share among US government website visitors was only 2.2%, just above that threshold.[355]
https://en.wikipedia.org/wiki/Firefox
Non-formal learningincludes various structuredlearningsituations which do not have the level ofcurriculum,institutionalization,accreditationorcertificationassociated with 'formal learning', but which have more structure than that associated with 'informal learning', which typically takes place naturally and spontaneously as part of other activities. These form the three styles of learning recognised and supported by theOECD.[1] Examples of non-formal learning include swimming sessions for toddlers, community-based sports programs, and programs developed by organisations such as theBoy Scouts, theGirl Guides, community or non-creditadult educationcourses, sports or fitness programs, professional conference styleseminars, and continuing professional development.[2]The learner's objectives may be to increase skills and knowledge, as well as to experience the emotional rewards associated with increased love for a subject or increased passion for learning.[3] The debate over the relative value of formal and informal learning has existed for a number of years. Traditionally, formal learning takes place in aschooloruniversityand has a greater value placed upon it than informal learning, such as learning within the workplace. This concept of formal learning as the socio-culturally accepted norm for learning was first challenged by Scribner and Cole[4]in 1973, who claimed most things in life are better learnt through informal processes, citinglanguage learningas an example. Moreover, anthropologists noted that complex learning still takes place within indigenous communities that have no formal educational institutions.[5] It is the acquisition of this knowledge or learning which occurs in everyday life that has not been fully valued or understood. This led to the declaration by theOECDeducational ministers of the "life-long learningfor all"[6]strategy in 1996. This included 23 countries from five continents, which have sought to clarify and validate all forms of learning including formal, non-formal and informal. This has been in conjunction with theEuropean Union, which has also developed policies for life-long learning which focus strongly on the need to identify, assess and certify non-formal and informal learning, particularly in the workplace.[7] Countries involved in the recognition of non-formal learning are listed by the OECD (2010).[citation needed] Although all definitions can be contested (see below), this article shall refer to theEuropean Centre for the Development of Vocational Training(Cedefop) 2001 communication on 'lifelong learning: formal, non-formal and informal learning' as the guideline for the differing definitions. Formal learning: learning typically provided by an education or training institution, structured (in terms of learning objectives, learning time or learning support) and leading to certification. Formal learning is intentional from the learner's perspective. (Cedefop 2001)[8] Informal learning: learning resulting from daily life activities related to work, family or leisure. It is not structured (in terms of learning objectives, learning time or learning support) and typically does not lead to certification. Informal learning may be intentional but in most cases it is non-intentional (or "incidental"/random). (Cedefop 2001)[8] UNESCO focuses on the flexibility of non-formal education and how it allows for more personalised learning. This type of education is open to people of any personality, age or origin, irrespective of their personal interests.[9] Non-formal learning: see definition above.
If there is no clear distinction between formal and informal learning, where is the room for non-formal learning? It is a contested issue with numerous definitions given. The following are some of the competing theories. "It is difficult to make a clear distinction between formal and informal learning as there is often a crossover between the two."(McGivney, 1999, p1).[10] Similarly, Hodkinson et al. (2003) conclude, after a significant literature analysis on the topics of formal, informal, and non-formal learning, that "the terms informal and non-formal appeared interchangeable, each being primarily defined in opposition to the dominant formal education system, and the largely individualist and acquisitional conceptualisations of learning developed in relation to such educational contexts." (Hodkinson et al., 2003, p. 314)[11]Moreover, they state that "It is important not to see informal and formal attributes as somehow separate, waiting to be integrated. This is the dominant view in the literature, and it is mistaken. Thus, the challenge is not to, somehow, combine informal and formal learning, for informal and formal attributes are present and inter-related, whether we will it so or not. The challenge is to recognise and identify them, and understand the implications. For this reason,the concept of non-formal learning, at least when seen as a middle state between formal and informal, is redundant." (p. 314) Eraut's[12]classification of learning into formal and non-formal: This removes informal learning from the equation and states that all learning outside of formal learning is non-formal. Eraut equates informal with connotations of dress, language or behaviour that have no relation to learning. Eraut defines formal learning as taking place within a learning framework; within a classroom or learning institution, with a designated teacher or trainer; the award of a qualification or credit; the external specification of outcomes. Any learning that occurs outside of these parameters is non-formal. (Infed 2002)[13] The EC (2001) Communication on Lifelong Learning: formal, non-formal and informal learning: The EU places non-formal learning in between formal and informal learning (see above). This has learning both in a formal setting with a learning framework and as an organised event, but without leading to a qualification. "Non-formal learning: learning that is not provided by an education or training institution and typically does not lead to certification. It is, however, structured (in terms of learning objectives, learning time or learning support). Non-formal learning is intentional from the learner's perspective." (Cedefop 2001)[8] Livingstone's[14]adult formal and informal education, non-formal and informal learning: This focuses on the idea of adult non-formal education. This new mode, 'informal education', is when teachers or mentors guide learners without reference to structured learning outcomes. Such informal education means gaining knowledge without an imposed framework, as in learning new job skills. (Infed, 2002)[13] Billett[15](2001): there is no such thing as informal learning: Billett's definition states there is no such thing as non-formal or informal learning. He states all human activity is learning, and that everything people do involves a process of learning. "all learning takes place within social organisations or communities that have formalised structures."
Moreover, he states most learning in life takes place outside of formal education.(Infed 2002)[13] The Council of Europe frames the distinction in terms of willingness and the setting in which learning takes place: non-formal learning takes place outside learning institutions, while informal learning is a part of the formal systems.[16] Recently, manyinternational organizationsandUNESCOMember States have emphasized the importance of learning that takes place outside of formal learning settings. This emphasis has led UNESCO, through itsInstitute of Lifelong Learning(UIL), to adopt international guidelines for the Recognition, Validation and Accreditation of the Outcomes of Non-formal and Informal Learning in 2012.[17]The emphasis has also led to an increasing number of policies and programmes in many Member States, and a gradual shift from pilots to large-scale systems such as those in Portugal, France, Australia, Mauritius and South Africa.[18] Cedefophas created European guidelines to provide validation to a broad range of learning experiences, thereby aiding transparency and comparability across national borders. The broad framework for achieving this certification across both non-formal and informal learning is outlined in the Cedefop European guidelines for validating non-formal and informal learning: Routes from learning to certification.[19] There are different approaches to validation between OECD and EU countries, with countries adopting different measures. The EU, as noted above, released the Cedefop European guidelines for validating non-formal and informal learning in 2009 to standardise validation throughout the EU. Within the OECD countries, the picture is more mixed; countries with recognition of non-formal and informal learning are listed by Feutrie (2007).[20] Non-formal education (NFE) is popular on a worldwide scale in both 'western' and 'developing countries'. Non-formal education can form a matrix with formal and informal education, as non-formal education can mean any form of systematic learning conducted outside the formal setting. Many courses in relation to non-formal education have been introduced in several universities in western and developing countries. The UNESCO Institute for Education conducted a seminar on non-formal education in Morocco. The Association for the Development of Education in Africa (ADEA) launched many programmes in non-formal education in at least 15 countries of Sub-Saharan Africa. In 2001, the World Bank conducted an international seminar on basic education in non-formal programmes. In addition to this, the World Bank was advised to extend its services to adult and non-formal education.
A report on professional education, Making Learning Visible: the identification, assessment and recognition of non-formal learning in Europe, defines non-formal learning as semi-structured, consisting of planned and explicit approaches to learning introduced into work organisations and elsewhere, not recognised within the formal education and training system.[21] Research by Dr Marnee Shay, a senior lecturer in the University of Queensland School of Education, indicates that there are nearly 10 times more Indigenous students in flexible schools than would be expected from their numbers in the general population.[22] Several classifications of non-formal education have been proposed.[23][24]Willems and Andersson[25]classify non-formal education according to two dimensions: (1) "NFE in relation to formal and informal learning (Substitute-Complement)" and (2) "Main learning content of NFE (Competencies-Values)". Based on these two dimensions, they describe four types of non-formal education. The goal of their framework is to better understand the various public governance challenges and structures that very different types of non-formal education have. Similarly, Shrestha[26]and colleagues focus on the role of NFE in comparison to formal education. Hoppers[27]proposes a three-fold classification, also in comparison to formal education: "A. Supplementary provisions", "B. Compensatory provisions", and "C. Alternative provisions". Rogers[28]points to the changing role of NFE over the last five decades and makes a distinction between a first- and a second-generation NFE. Community work, which is particularly widespread in Scotland, fosters people's commitment to their neighbours and encourages participation in and development of local democratic forms of organisation. Youth work focuses on making people more active in society. Social work helps young people in homes to develop ways to deal with complex situations, such as fostering fruitful relationships between parents and children, bringing different groups of carers together, and so on. In France and Italy, animation in a particular form is a kind of non-formal education. It uses theatre and acting as means of self-expression with different community groups for children and people with special needs. This type of non-formal education helps in ensuring active participation and teaches people to manage the community in which they live. In youth and community organisations, young people have the opportunity to discover, analyse and understand values and their implications, and to build a set of values to guide their lives. They run work camps and meetings, recruit volunteers, administer bank accounts, give counselling, etc., to work toward social change.[29] Education plays an important role in development. Out-of-school programmes are important to provide adaptable learning opportunities and new skills and knowledge to a large percentage of people who are beyond the reach of formal education. Non-formal education began to gain popularity in the late 1960s and early 1970s. Today, non-formal education is seen as a concept of recurrent and lifelong learning. Non-formal education is popular among adults, especially women, as it increases women's participation in both private and public activities, i.e. in household decision-making and as active citizens in community affairs and national development.
These literacy programmes have a dramatic impact on women's self-esteem because they unleash their potential in economic, social, cultural and political spheres. According to UNESCO (2010), non-formal education helps to ensure equal access to education, eradicate illiteracy among women and improve women's access to professional training, science, technology and continuing education. It also encourages the development of non-discriminatory education and training. The effectiveness of such literacy and non-formal education programmes is bolstered by family, community and parental involvement.[citation needed]This is why the United NationsSustainable Development Goal 4advocates for a diversification of learning opportunities and the usage of a wide range of education and training modalities in recognition of the importance of non-formal education. Non-formal education is beneficial in a number of ways. Activities that encourage young people to choose their own programmes and projects are important because they offer the youth the flexibility and freedom to explore their emerging interests. When young people can choose the activities in which they participate, they have opportunities to develop several skills, such as decision-making. A distinction can be made between "participant functionality" and "societal functionality" of non-formal education.[30]Participant functionality refers to the intended advantages for the individual participants in non-formal education, while societal functionality refers to the benefits non-formal education has for society in general. Non-formal learning has experiential learning activities that foster the development of skills and knowledge. This helps to build confidence and abilities among the youth of today. It also helps in the development of personal relationships, not only among young people but also among adults. It helps in developing interpersonal skills among young people as they learn to interact with peers outside the class and with adults in the community.[31] Formal education systems are inadequate to effectively meet the needs of the individual and of society. The need to offer more and better education at all levels to a growing number of people, particularly in developing countries, together with the scant success of current formal education systems in meeting such demands, has shown the need to develop alternative approaches to learning. The structure of formal schools is rigid, shaped more by rules and regulations than by the real needs of the students, and offers a curriculum that leans away from the individual and from society, far more concerned with delivering programmes than with reaching useful objectives. This called for non-formal education which, starting from the basic needs of the students, is concerned with the establishment of strategies that are compatible with reality.[32] The recognition of non-formal learning through credentials, diplomas, certificates, and awards is sorely lacking,[according to whom?]which can negatively affect employment opportunities that require specific certification or degrees.[33] Non-formal learning, due to its 'unofficial' and ad-hoc nature, may also lack a specific curriculum with a clear structure and direction, which implies a lack of accountability due to an over-reliance on self-assessment.
Moreover, more often than not, the organizations or individuals providing non-formal learning rely on teachers who are not professionally trained, meaning they may be less qualified than professionally trained teachers, which can negatively affect the students.[34]
https://en.wikipedia.org/wiki/Nonformal_learning
Incryptography,cryptographic hash functionscan be divided into two main categories. In the first category are those functions whose designs are based on mathematical problems, and whose security thus follows from rigorous mathematical proofs,complexity theoryandformal reduction. These functions are calledprovably secure cryptographic hash functions. To construct these is very difficult, and few examples have been introduced. Their practical use is limited. In the second category are functions which are not based on mathematical problems, but on ad hoc constructions, in which the bits of the message are mixed to produce the hash. These are then believed to be hard to break, but no formal proof is given. Almost all hash functions in widespread use reside in this category. Some of these functions are already broken, and are no longer in use.SeeHash function security summary. Generally, thebasicsecurity ofcryptographic hash functionscan be seen from different angles: pre-image resistance, second pre-image resistance, collision resistance, and pseudo-randomness. The basic question is the meaning ofhard. There are two approaches to answering this question. First is the intuitive/practical approach: "hardmeans that it is almost certainly beyond the reach of any adversary who must be prevented from breaking the system for as long as the security of the system is deemed important." The second approach is theoretical and is based oncomputational complexity theory: if problemAis hard, then there exists a formalsecurity reductionfrom a problem which is widely considered unsolvable inpolynomial time, such asinteger factorizationor thediscrete logarithmproblem. However, the non-existence of a polynomial-time algorithm does not automatically ensure that the system is secure. The difficulty of a problem also depends on its size. For example,RSA public-key cryptography(which relies on the difficulty ofinteger factorization) is considered secure only with keys that are at least 2048 bits long, whereas keys for theElGamal cryptosystem(which relies on the difficulty of thediscrete logarithmproblem) are commonly in the range of 256–512 bits. If the set of inputs to the hash is relatively small or is ordered by likelihood in some way, then a brute force search may be practical, regardless of theoretical security. The likelihood of recovering the preimage depends on the input set size and the speed or cost of computing the hash function. A common example is the use of hashes to storepasswordvalidation data. Rather than store the plaintext of user passwords, an access control system typically stores a hash of the password. When a person requests access, the password they submit is hashed and compared with the stored value. If the stored validation data is stolen, then the thief will only have the hash values, not the passwords. However, most users choose passwords in predictable ways, and passwords are often short enough so that all possible combinations can be tested if fast hashes are used.[1]Special hashes calledkey derivation functionshave been created to slow such searches (a minimal sketch follows below).SeePassword cracking. Most hash functions are built on an ad hoc basis, where the bits of the message are thoroughly mixed to produce the hash. Variousbitwise operations(e.g. rotations),modular additions, andcompression functionsare used in iterative mode to ensure high complexity and pseudo-randomness of the output. In this way, the security is very hard to prove, and a proof is usually not attempted.
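As a concrete illustration of the password-validation scenario described above, here is a minimal sketch that stores a salted, deliberately slow hash instead of a fast one, using Python's standard hashlib.pbkdf2_hmac. The salt length, iteration count, and the shape of the stored record are illustrative assumptions, not recommendations for a real system.

```python
import hashlib
import hmac
import os

# Illustrative parameters (assumptions, not a recommendation): a random
# per-user salt and a high iteration count make brute-force and
# precomputed-table attacks against stolen hashes far more expensive.
ITERATIONS = 600_000

def store_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, derived key) to be stored instead of the plaintext."""
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, key

def verify_password(password: str, salt: bytes, stored_key: bytes) -> bool:
    """Hash the submitted password with the stored salt and compare."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored_key)

salt, key = store_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, key)
assert not verify_password("guess", salt, key)
```

A thief who steals only (salt, key) pairs must still run the slow derivation once per guess, which is the point of using a key derivation function rather than a plain fast hash.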
Only a few years ago[when?], one of the most popular hash functions,SHA-1, was shown to be less secure than its length suggested: collisions could be found in only about 2^51[2]tests, rather than the brute-force number of 2^80. In other words, most of the hash functions in use nowadays are not provably collision-resistant. These hashes are not based on purely mathematical functions. This approach generally results in more efficient hash functions, but with the risk that a weakness of such a function will eventually be used to find collisions. One famous case isMD5. In this approach, the security of a hash function is based on some hard mathematical problem, and it is proved that finding collisions of the hash function is as hard as breaking the underlying problem. This gives a somewhat stronger notion of security than just relying on complex mixing of bits as in the classical approach. A cryptographic hash function hasprovable security against collision attacksif finding collisions is provablypolynomial-time reduciblefrom a problemPwhich is supposed to be unsolvable in polynomial time. The function is then called provably secure, or just provable. This means that if finding collisions were feasible in polynomial time by some algorithmA, then one could find and use a polynomial-time algorithmR(a reduction algorithm) that would use algorithmAto solve problemP, which is widely supposed to be unsolvable in polynomial time. That is a contradiction. Therefore, finding collisions cannot be easier than solvingP. However, this only indicates that finding collisions is difficult insomecases, as not all instances of a computationally hard problem are typically hard. Indeed, very large instances of NP-hard problems are routinely solved, while only the hardest are practically impossible to solve. Examples of problems that are assumed not to be solvable in polynomial time include integer factorization and the discrete logarithm problem. SWIFFTis an example of a hash function that circumvents these security problems. It can be shown that, for any algorithm that can break SWIFFT with probabilitypwithin an estimated timet, one can find an algorithm that solves theworst-casescenario of a certain difficult mathematical problem within timet′depending ontandp.[citation needed] Let hash(m) = x^m mod n, where n is a hard-to-factor composite number and x is some prespecified base value. A collision x^m1 ≡ x^m2 (mod n) reveals a multiple m1 − m2 of themultiplicative orderof x modulo n. This information can be used to factor n in polynomial time, assuming certain properties of x. But the algorithm is quite inefficient because it requires on average 1.5 multiplications modulo n per message bit.
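The modular-exponentiation hash just described can be written out directly. The sketch below is a toy illustration only: the modulus n and base x are tiny assumed values chosen to keep the example self-contained, whereas a real instantiation would use a large, hard-to-factor composite n.

```python
# Toy version of hash(m) = x^m mod n, processing the message bit by bit.
# n should be a hard-to-factor composite and x a prespecified base;
# the tiny values below are assumptions for illustration only.
n = 4559  # 47 * 97, trivially factorable; a real n would be enormous
x = 4

def provable_toy_hash(message: bytes) -> int:
    digest = 1
    for byte in message:
        for i in range(8):
            bit = (byte >> (7 - i)) & 1
            # Square-and-multiply: one squaring per bit, plus one extra
            # multiplication when the bit is 1 (hence ~1.5 per bit on average).
            digest = (digest * digest) % n
            if bit:
                digest = (digest * x) % n
    return digest

print(provable_toy_hash(b"abc"))
```

The inner loop makes the cost noted in the text visible: every message bit requires a modular squaring, and on average half the bits require an additional modular multiplication, for roughly 1.5 multiplications modulo n per bit.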
https://en.wikipedia.org/wiki/Security_of_cryptographic_hash_functions
Swarm roboticsis the study of how to design independent systems of robots without centralized control. The emergingswarming behaviorof robotic swarms is created through the interactions between individual robots and the environment.[1]This idea emerged from the field ofartificial swarm intelligence, as well as from studies of insects such as ants and of other systems in nature whereswarm behavioroccurs.[2] Relatively simple individual rules can produce a large set of complexswarm behaviors. A key component is the communication between the members of the group, which builds a system of constant feedback. The swarm behavior involves constant change of individuals in cooperation with others, as well as the behavior of the whole group. The design of swarm robotics systems is guided by swarm intelligence principles, which promote fault tolerance, scalability, and flexibility.[1]Unlike distributed robotic systems in general, swarm robotics emphasizes a large number of robots. While various formulations of swarm intelligence principles exist, one widely recognized set includes fault tolerance, scalability, and flexibility. Miniaturization is also a key factor in swarm robotics, as thousands of small robots can maximize the effect of the swarm-intelligent approach, achieving meaningful behavior at the swarm level through a greater number of interactions at the individual level.[5] Compared with individual robots, a swarm can commonly decompose its given missions into subtasks;[6]a swarm is more robust to partial failure and is more flexible with regard to different missions.[7] The phrase "swarm robotics" was reported to make its first appearance in 1991 according to Google Scholar, but research regarding swarm robotics began to grow in the early 2000s. The initial goal of studying swarm robotics was to test whether the concept ofstigmergycould be used as a method for robots to indirectly communicate and coordinate with each other.[5] One of the first international projects regarding swarm robotics was the SWARM-BOTS project funded by the European Commission between 2001 and 2005, in which a swarm of up to 20 robots capable of autonomously and physically connecting to each other to form a cooperating system was used to study swarm behaviors such as collective transport, area coverage, and searching for objects. The result was a demonstration of self-organized teams of robots that cooperate to solve a complex task, with the robots in the swarm taking different roles over time. This work was then expanded upon through the Swarmanoid project (2006–2010), which extended the ideas and algorithms developed in Swarm-bots to heterogeneous robot swarms composed of three types of robots—flying, climbing, and ground-based—that collaborated to carry out a search and retrieval task.[5] There are many potential applications for swarm robotics.[8]They include tasks that demandminiaturization(nanorobotics,microbotics), like distributed sensing tasks inmicromachineryor the human body. A promising use of swarm robotics is insearch and rescuemissions.[9]Swarms of robots of different sizes could be sent to places that rescue workers cannot reach safely, to explore the unknown environment and solve complex mazes via onboard sensors.[9]Swarm robotics can also be suited to tasks that demand cheap designs, for instanceminingor agricultural shepherding tasks.[10] Drone swarms are used in target search,drone displays, and delivery. A drone display commonly uses multiple lighted drones at night for an artistic display or advertising.
A delivery drone swarm can carry multiple packages to a single destination at a time and overcome a single drone's payload and battery limitations.[11]A drone swarm may undertake differentflight formationsto reduce overall energy consumption due to drag forces.[12] Drone swarming can also introduce additional control issues connected to human factors and the swarm operator. Examples of this include high cognitive demand and complexity when interacting with multiple drones due to changing attention between different individual drones.[13][14]Communication between operator and swarm is also a central aspect.[15] More controversially, swarms ofmilitary robotscan form an autonomous army. U.S. Naval forces have tested a swarm of autonomous boats that can steer and take offensive actions by themselves. The boats are unmanned and can be fitted with any kind of kit to deter and destroy enemy vessels.[16] During theSyrian Civil War, Russian forces in the region reported attacks on their main air force base in the country by swarms of fixed-wing drones loaded with explosives.[17] Another large set of applications may be solved using swarms ofmicro air vehicles, which are also broadly investigated nowadays. In comparison with the pioneering studies of swarms of flying robots using precisemotion capturesystems in laboratory conditions,[18]current systems such asShooting Starcan control teams of hundreds of micro aerial vehicles in outdoor environment[19]usingGNSSsystems (such as GPS) or even stabilize them using onboardlocalizationsystems[20]where GPS is unavailable.[21][22]Swarms of micro aerial vehicles have been already tested in tasks of autonomous surveillance,[23]plume tracking,[24]and reconnaissance in a compact phalanx.[25]Numerous works on cooperative swarms of unmanned ground and aerial vehicles have been conducted with target applications of cooperative environment monitoring,[26]simultaneous localization and mapping,[27]convoy protection,[28]and moving target localization and tracking.[29] In 2023, University of Washington and Microsoft researchers demonstrated acoustic swarms of tiny robots that create shape-changing smart speakers.[30]These can be used for manipulating acoustic scenes to focus on or mute sounds from a specific region in a room.[31]Here, tiny robots cooperate with each other using sound signals, without any cameras, to navigate cooperatively with centimeter-level accuracy. These swarm devices spread out across a surface to create a distributed and reconfigurable wireless microphone array. They also navigate back to the charging station where they can be automatically recharged.[32] Most efforts have focused on relatively small groups of machines. However, aKilobotswarm consisting of 1,024 individual robots was demonstrated by Harvard in 2014, the largest to date.[33] Another example of miniaturization is the LIBOT Robotic System[34]that involves a low cost robot built for outdoor swarm robotics. The robots are also made with provisions for indoor use via Wi-Fi, since the GPS sensors provide poor communication inside buildings. Another such attempt is the micro robot (Colias),[35]built in the Computer Intelligence Lab at theUniversity of Lincoln, UK. This micro robot is built on a 4 cm circular chassis and is a low-cost and open platform for use in a variety of swarm robotics applications. Additionally, progress has been made in the application of autonomous swarms in the field of manufacturing, known asswarm 3D printing. 
This is particularly useful for the production of large structures and components, where traditional3D printingcannot be used due to hardware size constraints. Miniaturization and mass mobilization allow the manufacturing system to achievescale invariance, so that it is not limited by an effective build volume. While still in an early stage of development, swarm 3D printing is currently being commercialized by startup companies.[36]
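As an illustration of the principle noted at the start of this article, that relatively simple individual rules produce coordinated swarm-level behavior, the following is a minimal, hypothetical simulation sketch: each robot only averages the positions of neighbors within its sensing range and moves slightly toward them, yet the group as a whole aggregates. The arena size, sensing radius, and step size are illustrative assumptions, and the code does not model any of the specific systems described above.

```python
import random

NUM_ROBOTS = 30
NEIGHBOR_RADIUS = 2.0   # each robot only senses others within this range
STEP = 0.05             # fraction of the way it moves toward the local average

# Random starting positions in a 10 x 10 arena (illustrative assumption).
robots = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(NUM_ROBOTS)]

def step(positions):
    new_positions = []
    for (x, y) in positions:
        # Local rule: look only at neighbors within sensing range (self included).
        neighbors = [(nx, ny) for (nx, ny) in positions
                     if (nx - x) ** 2 + (ny - y) ** 2 <= NEIGHBOR_RADIUS ** 2]
        cx = sum(n[0] for n in neighbors) / len(neighbors)
        cy = sum(n[1] for n in neighbors) / len(neighbors)
        # Move a small step toward the local average position.
        new_positions.append((x + STEP * (cx - x), y + STEP * (cy - y)))
    return new_positions

for _ in range(500):
    robots = step(robots)

# After many iterations the robots cluster, even though no robot
# ever sees the whole swarm or receives a central command.
print(robots[:3])
```

No robot has global knowledge or a central controller; aggregation emerges purely from repeated local interactions, which is the design principle the field builds on.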
https://en.wikipedia.org/wiki/Swarm_robotics
Inmathematics, anisomorphismis a structure-preservingmappingormorphismbetween twostructuresof the same type that can be reversed by aninverse mapping. Two mathematical structures areisomorphicif an isomorphism exists between them. The word is derived fromAncient Greekἴσος(isos)'equal'andμορφή(morphe)'form, shape'. The interest in isomorphisms lies in the fact that two isomorphic objects have the same properties (excluding further information such as additional structure or names of objects). Thus isomorphic structures cannot be distinguished from the point of view of structure only, and may often be identified. Inmathematical jargon, one says that two objects are the sameup toan isomorphism. A common example where isomorphic structures cannot be identified is when the structures are substructures of a larger one. For example, all subspaces of dimension one of avector spaceare isomorphic and cannot be identified. Anautomorphismis an isomorphism from a structure to itself. An isomorphism between two structures is acanonical isomorphism(acanonical mapthat is an isomorphism) if there is only one isomorphism between the two structures (as is the case for solutions of auniversal property), or if the isomorphism is much more natural (in some sense) than other isomorphisms. For example, for everyprime numberp, allfieldswithpelements are canonically isomorphic, with a unique isomorphism. Theisomorphism theoremsprovide canonical isomorphisms that are not unique. The termisomorphismis mainly used foralgebraic structuresandcategories. In the case of algebraic structures, mappings are calledhomomorphisms, and a homomorphism is an isomorphismif and only ifit isbijective. In various areas of mathematics, isomorphisms have received specialized names, depending on the type of structure under consideration. For example: Category theory, which can be viewed as a formalization of the concept of mapping between structures, provides a language that may be used to unify the approach to these different aspects of the basic idea. LetR+{\displaystyle \mathbb {R} ^{+}}be themultiplicative groupofpositive real numbers, and letR{\displaystyle \mathbb {R} }be the additive group of real numbers. Thelogarithm functionlog:R+→R{\displaystyle \log :\mathbb {R} ^{+}\to \mathbb {R} }satisfieslog⁡(xy)=log⁡x+log⁡y{\displaystyle \log(xy)=\log x+\log y}for allx,y∈R+,{\displaystyle x,y\in \mathbb {R} ^{+},}so it is agroup homomorphism. Theexponential functionexp:R→R+{\displaystyle \exp :\mathbb {R} \to \mathbb {R} ^{+}}satisfiesexp⁡(x+y)=(exp⁡x)(exp⁡y){\displaystyle \exp(x+y)=(\exp x)(\exp y)}for allx,y∈R,{\displaystyle x,y\in \mathbb {R} ,}so it too is a homomorphism. The identitieslog⁡exp⁡x=x{\displaystyle \log \exp x=x}andexp⁡log⁡y=y{\displaystyle \exp \log y=y}show thatlog{\displaystyle \log }andexp{\displaystyle \exp }areinversesof each other. Sincelog{\displaystyle \log }is a homomorphism that has an inverse that is also a homomorphism,log{\displaystyle \log }is anisomorphism of groups, i.e.,R+≅R{\displaystyle \mathbb {R} ^{+}\cong \mathbb {R} }via the isomorphismlog⁡x{\displaystyle \log x}. Thelog{\displaystyle \log }function is an isomorphism which translates multiplication of positive real numbers into addition of real numbers. This facility makes it possible to multiply real numbers using arulerand atable of logarithms, or using aslide rulewith a logarithmic scale. Consider the group(Z6,+),{\displaystyle (\mathbb {Z} _{6},+),}the integers from 0 to 5 with additionmodulo6. 
Also consider the group(Z2×Z3,+),{\displaystyle \left(\mathbb {Z} _{2}\times \mathbb {Z} _{3},+\right),}the ordered pairs where thexcoordinates can be 0 or 1, and the y coordinates can be 0, 1, or 2, where addition in thex-coordinate is modulo 2 and addition in they-coordinate is modulo 3. These structures are isomorphic under addition, under the following scheme:(0,0)↦0(1,1)↦1(0,2)↦2(1,0)↦3(0,1)↦4(1,2)↦5{\displaystyle {\begin{alignedat}{4}(0,0)&\mapsto 0\\(1,1)&\mapsto 1\\(0,2)&\mapsto 2\\(1,0)&\mapsto 3\\(0,1)&\mapsto 4\\(1,2)&\mapsto 5\\\end{alignedat}}}or in general(a,b)↦(3a+4b)mod6.{\displaystyle (a,b)\mapsto (3a+4b)\mod 6.} For example,(1,1)+(1,0)=(0,1),{\displaystyle (1,1)+(1,0)=(0,1),}which translates in the other system as1+3=4.{\displaystyle 1+3=4.} Even though these two groups "look" different in that the sets contain different elements, they are indeedisomorphic: their structures are exactly the same. More generally, thedirect productof twocyclic groupsZm{\displaystyle \mathbb {Z} _{m}}andZn{\displaystyle \mathbb {Z} _{n}}is isomorphic to(Zmn,+){\displaystyle (\mathbb {Z} _{mn},+)}if and only ifmandnarecoprime, per theChinese remainder theorem. If one object consists of a setXwith abinary relationR and the other object consists of a setYwith a binary relation S then an isomorphism fromXtoYis a bijective functionf:X→Y{\displaystyle f:X\to Y}such that:[1]S⁡(f(u),f(v))if and only ifR⁡(u,v){\displaystyle \operatorname {S} (f(u),f(v))\quad {\text{ if and only if }}\quad \operatorname {R} (u,v)} S isreflexive,irreflexive,symmetric,antisymmetric,asymmetric,transitive,total,trichotomous, apartial order,total order,well-order,strict weak order,total preorder(weak order), anequivalence relation, or a relation with any other special properties, if and only if R is. For example, R is anordering≤ and S an ordering⊑,{\displaystyle \scriptstyle \sqsubseteq ,}then an isomorphism fromXtoYis a bijective functionf:X→Y{\displaystyle f:X\to Y}such thatf(u)⊑f(v)if and only ifu≤v.{\displaystyle f(u)\sqsubseteq f(v)\quad {\text{ if and only if }}\quad u\leq v.}Such an isomorphism is called anorder isomorphismor (less commonly) anisotone isomorphism. IfX=Y,{\displaystyle X=Y,}then this is a relation-preservingautomorphism. Inalgebra, isomorphisms are defined for allalgebraic structures. Some are more specifically studied; for example: Just as theautomorphismsof analgebraic structureform agroup, the isomorphisms between two algebras sharing a common structure form aheap. Letting a particular isomorphism identify the two structures turns this heap into a group. Inmathematical analysis, theLaplace transformis an isomorphism mapping harddifferential equationsinto easieralgebraicequations. Ingraph theory, an isomorphism between two graphsGandHis abijectivemapffrom the vertices ofGto the vertices ofHthat preserves the "edge structure" in the sense that there is an edge fromvertexuto vertexvinGif and only if there is an edge fromf(u){\displaystyle f(u)}tof(v){\displaystyle f(v)}inH. Seegraph isomorphism. Inorder theory, an isomorphism between two partially ordered setsPandQis abijectivemapf{\displaystyle f}fromPtoQthat preserves the order structure in the sense that for any elementsx{\displaystyle x}andy{\displaystyle y}ofPwe havex{\displaystyle x}less thany{\displaystyle y}inPif and only iff(x){\displaystyle f(x)}is less thanf(y){\displaystyle f(y)}inQ. 
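The scheme above can be verified mechanically. The following short Python sketch (an illustrative check added here, not part of the original example) confirms that (a, b) ↦ (3a + 4b) mod 6 is a bijection from Z2 × Z3 onto Z6 and that it preserves addition:

from itertools import product

def phi(a, b):
    # the stated correspondence (a, b) -> (3a + 4b) mod 6 from Z2 x Z3 to Z6
    return (3 * a + 4 * b) % 6

pairs = list(product(range(2), range(3)))  # all six elements of Z2 x Z3

# bijectivity: the six pairs map onto the six residues 0..5 without repetition
assert sorted(phi(a, b) for a, b in pairs) == list(range(6))

# homomorphism: componentwise addition (mod 2, mod 3) corresponds to addition mod 6
for (a1, b1), (a2, b2) in product(pairs, pairs):
    a, b = (a1 + a2) % 2, (b1 + b2) % 3
    assert phi(a, b) == (phi(a1, b1) + phi(a2, b2)) % 6

No such bijective homomorphism exists when the moduli are not coprime (for example, Z2 × Z2 is not isomorphic to Z4), in line with the coprimality condition from the Chinese remainder theorem mentioned above.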
As an example, the set {1,2,3,6} of whole numbers ordered by theis-a-factor-ofrelation is isomorphic to the set {O,A,B,AB} ofblood typesordered by thecan-donate-torelation. Seeorder isomorphism. In mathematical analysis, an isomorphism between twoHilbert spacesis a bijection preserving addition, scalar multiplication, and inner product. In early theories oflogical atomism, the formal relationship between facts and true propositions was theorized byBertrand RussellandLudwig Wittgensteinto be isomorphic. An example of this line of thinking can be found in Russell'sIntroduction to Mathematical Philosophy. Incybernetics, thegood regulator theoremor Conant–Ashby theorem is stated as "Every good regulator of a system must be a model of that system". Whether regulated or self-regulating, an isomorphism is required between the regulator and processing parts of the system. Incategory theory, given acategoryC, an isomorphism is a morphismf:a→b{\displaystyle f:a\to b}that has an inverse morphismg:b→a,{\displaystyle g:b\to a,}that is,fg=1b{\displaystyle fg=1_{b}}andgf=1a.{\displaystyle gf=1_{a}.} Two categoriesCandDareisomorphicif there existfunctorsF:C→D{\displaystyle F:C\to D}andG:D→C{\displaystyle G:D\to C}which are mutually inverse to each other, that is,FG=1D{\displaystyle FG=1_{D}}(the identity functor onD) andGF=1C{\displaystyle GF=1_{C}}(the identity functor onC). In aconcrete category(roughly, a category whose objects are sets (perhaps with extra structure) and whose morphisms are structure-preserving functions), such as thecategory of topological spacesor categories of algebraic objects (like thecategory of groups, thecategory of rings, and thecategory of modules), an isomorphism must be bijective on theunderlying sets. In algebraic categories (specifically, categories ofvarieties in the sense of universal algebra), an isomorphism is the same as a homomorphism which is bijective on underlying sets. However, there are concrete categories in which bijective morphisms are not necessarily isomorphisms (such as the category of topological spaces). Since a composition of isomorphisms is an isomorphism, since the identity is an isomorphism and since the inverse of an isomorphism is an isomorphism, the relation that two mathematical objects are isomorphic is anequivalence relation. Anequivalence classgiven by isomorphisms is commonly called anisomorphism class.[2] Examples of isomorphism classes are plentiful in mathematics. However, there are circumstances in which the isomorphism class of an object conceals vital information about it. Although there are cases where isomorphic objects can be considered equal, one must distinguishequalityandisomorphism.[3]Equality is when two objects are the same, and therefore everything that is true about one object is true about the other. On the other hand, isomorphisms are related to some structure, and two isomorphic objects share only the properties that are related to this structure. For example, the setsA={x∈Z∣x2<2}andB={−1,0,1}{\displaystyle A=\left\{x\in \mathbb {Z} \mid x^{2}<2\right\}\quad {\text{ and }}\quad B=\{-1,0,1\}}areequal; they are merely different representations—the first anintensionalone (inset builder notation), and the secondextensional(by explicit enumeration)—of the same subset of the integers. By contrast, the sets{A,B,C}{\displaystyle \{A,B,C\}}and{1,2,3}{\displaystyle \{1,2,3\}}are notequalsince they do not have the same elements. 
They are isomorphic as sets, but there are many choices (in fact 6) of an isomorphism between them: one isomorphism is A ↦ 1, B ↦ 2, C ↦ 3, while another is A ↦ 3, B ↦ 2, C ↦ 1, and no one isomorphism is intrinsically better than any other.[note 1] On this view and in this sense, these two sets are not equal because one cannot consider them identical: one can choose an isomorphism between them, but that is a weaker claim than identity and is valid only in the context of the chosen isomorphism. Also, the integers and the even numbers are isomorphic as ordered sets and abelian groups (for addition), but cannot be considered equal sets, since one is a proper subset of the other. On the other hand, when sets (or other mathematical objects) are defined only by their properties, without considering the nature of their elements, one often considers them to be equal. This is generally the case with solutions of universal properties. For example, the rational numbers are formally defined as equivalence classes of pairs of integers, although nobody thinks of a rational number as a set (equivalence class). The universal property of the rational numbers is essentially that they form a field that contains the integers and does not contain any proper subfield. Given two fields with these properties, there is a unique field isomorphism between them. This allows the two fields to be identified, since every property of one of them can be transferred to the other through the isomorphism. The real numbers that can be expressed as a quotient of integers form the smallest subfield of the reals. There is thus a unique isomorphism from this subfield of the reals to the rational numbers defined by equivalence classes.
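Returning to the earlier order-isomorphism example, the correspondence between {1, 2, 3, 6} under divisibility and the blood types under can-donate-to can also be checked directly. The mapping 1 ↦ O, 2 ↦ A, 3 ↦ B, 6 ↦ AB and the encoding of the usual ABO compatibility rules in the Python sketch below are illustrative assumptions made for the purpose of the check:

from itertools import product

numbers = [1, 2, 3, 6]
f = {1: "O", 2: "A", 3: "B", 6: "AB"}  # candidate order isomorphism

def divides(u, v):
    return v % u == 0

# assumed ABO compatibility: O donates to everyone, A to {A, AB}, B to {B, AB}, AB to {AB}
COMPATIBLE = {"O": {"O", "A", "B", "AB"},
              "A": {"A", "AB"},
              "B": {"B", "AB"},
              "AB": {"AB"}}

def can_donate(x, y):
    return y in COMPATIBLE[x]

# f is a bijection onto the four blood types and preserves the order in both directions
assert set(f.values()) == {"O", "A", "B", "AB"}
for u, v in product(numbers, numbers):
    assert divides(u, v) == can_donate(f[u], f[v])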
https://en.wikipedia.org/wiki/Isomorphism
In classical logic, disjunctive syllogism[1][2] (historically known as modus tollendo ponens (MTP),[3] Latin for "mode that affirms by denying")[4] is a valid argument form which is a syllogism having a disjunctive statement for one of its premises.[5][6] An example in English: "I will choose soup or I will choose salad. I will not choose soup. Therefore, I will choose salad." In propositional logic, disjunctive syllogism (also known as disjunction elimination and or elimination, or abbreviated ∨E),[7][8][9][10] is a valid rule of inference. If it is known that at least one of two statements is true, and that it is not the former that is true, we can infer that it has to be the latter that is true. Equivalently, if P is true or Q is true and P is false, then Q is true. The name "disjunctive syllogism" derives from its being a syllogism, a three-step argument, and the use of a logical disjunction (any "or" statement). For example, "P or Q" is a disjunction, where P and Q are called the statement's disjuncts. The rule makes it possible to eliminate a disjunction from a logical proof. It is the rule that whenever instances of "P ∨ Q" and "¬P" appear on lines of a proof, "Q" can be placed on a subsequent line. Disjunctive syllogism is closely related and similar to hypothetical syllogism, which is another rule of inference involving a syllogism. It is also related to the law of noncontradiction, one of the three traditional laws of thought. For a logical system that validates it, the disjunctive syllogism may be written in sequent notation as P ∨ Q, ¬P ⊢ Q, where ⊢ is a metalogical symbol meaning that Q is a syntactic consequence of P ∨ Q and ¬P. It may be expressed as a truth-functional tautology or theorem in the object language of propositional logic as ((P ∨ Q) ∧ ¬P) → Q, where P and Q are propositions expressed in some formal system. Here is an example: "It is red or it is blue. It is not red. Therefore, it is blue." Here is another example: "The breach is a safety violation, or it is not subject to fines. The breach is not a safety violation. Therefore, it is not subject to fines." Modus tollendo ponens can be made stronger by using exclusive disjunction instead of inclusive disjunction as a premise: P ⊻ Q, ¬P ⊢ Q. Unlike modus ponens and modus ponendo tollens, with which it should not be confused, disjunctive syllogism is often not made an explicit rule or axiom of logical systems, as the above arguments can be proven with a combination of reductio ad absurdum and disjunction elimination. Other forms of syllogism exist as well, such as the hypothetical syllogism mentioned above and the categorical syllogism. Disjunctive syllogism holds in classical propositional logic and intuitionistic logic, but not in some paraconsistent logics.[11]
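The validity of the rule can be checked by brute force over the classical truth tables. The small Python sketch below (an illustrative verification, not part of the original article) confirms that ((P ∨ Q) ∧ ¬P) → Q holds under every assignment of truth values, i.e. that the conclusion is true whenever both premises are:

from itertools import product

def implies(a, b):
    return (not a) or b

# the disjunctive syllogism as a truth-functional conditional: ((P or Q) and not P) -> Q
assert all(implies((p or q) and not p, q)
           for p, q in product([True, False], repeat=2))

# equivalently: in every row of the truth table where both premises hold, so does the conclusion
for p, q in product([True, False], repeat=2):
    if (p or q) and not p:
        assert q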
https://en.wikipedia.org/wiki/Disjunctive_syllogism
Speech synthesisis the artificial production of humanspeech. A computer system used for this purpose is called aspeech synthesizer, and can be implemented insoftwareorhardwareproducts. Atext-to-speech(TTS) system converts normal language text into speech; other systems rendersymbolic linguistic representationslikephonetic transcriptionsinto speech.[1]The reverse process isspeech recognition. Synthesized speech can be created byconcatenatingpieces of recorded speech that are stored in adatabase. Systems differ in the size of the stored speech units; a system that storesphonesordiphonesprovides the largest output range, but may lack clarity.[citation needed]For specific usage domains, the storage of entire words or sentences allows for high-quality output. Alternatively, a synthesizer can incorporate a model of thevocal tractand other human voice characteristics to create a completely "synthetic" voice output.[2] The quality of a speech synthesizer is judged by its similarity to the human voice and by its ability to be understood clearly. An intelligible text-to-speech program allows people withvisual impairmentsorreading disabilitiesto listen to written words on a home computer. Many computeroperating systemshave included speech synthesizers since the early 1990s.[citation needed] A text-to-speech system (or "engine") is composed of two parts:[3]afront-endand aback-end. The front-end has two major tasks. First, it converts raw text containing symbols like numbers and abbreviations into the equivalent of written-out words. This process is often calledtext normalization,pre-processing, ortokenization. The front-end then assignsphonetic transcriptionsto each word, and divides and marks the text intoprosodic units, likephrases,clauses, andsentences. The process of assigning phonetic transcriptions to words is calledtext-to-phonemeorgrapheme-to-phonemeconversion. Phonetic transcriptions andprosodyinformation together make up the symbolic linguistic representation that is output by the front-end. The back-end—often referred to as thesynthesizer—then converts the symbolic linguistic representation into sound. In certain systems, this part includes the computation of thetarget prosody(pitch contour, phoneme durations),[4]which is then imposed on the output speech. Long before the invention ofelectronicsignal processing, some people tried to build machines to emulate human speech.[citation needed]There were also legends of the existence of "Brazen Heads", such as those involving PopeSilvester II(d. 1003 AD),Albertus Magnus(1198–1280), andRoger Bacon(1214–1294). In 1779, theGerman-DanishscientistChristian Gottlieb Kratzensteinwon the first prize in a competition announced by the RussianImperial Academy of Sciences and Artsfor models he built of the humanvocal tractthat could produce the five longvowelsounds (inInternational Phonetic Alphabetnotation:[aː],[eː],[iː],[oː]and[uː]).[5]There followed thebellows-operated "acoustic-mechanical speech machine" ofWolfgang von KempelenofPressburg, Hungary, described in a 1791 paper.[6]This machine added models of the tongue and lips, enabling it to produce consonants as well as vowels. In 1837,Charles Wheatstoneproduced a "speaking machine" based on von Kempelen's design, and in 1846, Joseph Faber exhibited the "Euphonia". In 1923, Paget resurrected Wheatstone's design.[7] In the 1930s,Bell Labsdeveloped thevocoder, which automatically analyzed speech into its fundamental tones and resonances. 
From his work on the vocoder,Homer Dudleydeveloped a keyboard-operated voice-synthesizer calledThe Voder(Voice Demonstrator), which he exhibited at the1939 New York World's Fair. Dr. Franklin S. Cooperand his colleagues atHaskins Laboratoriesbuilt thePattern playbackin the late 1940s and completed it in 1950. There were several different versions of this hardware device; only one currently survives. The machine converts pictures of the acoustic patterns of speech in the form of a spectrogram back into sound. Using this device,Alvin Libermanand colleagues discovered acoustic cues for the perception ofphoneticsegments (consonants and vowels). The first computer-based speech-synthesis systems originated in the late 1950s. Noriko Umedaet al.developed the first general English text-to-speech system in 1968, at theElectrotechnical Laboratoryin Japan.[8]In 1961, physicistJohn Larry Kelly, Jrand his colleagueLouis Gerstman[9]used anIBM 704computer to synthesize speech, an event among the most prominent in the history ofBell Labs.[citation needed]Kelly's voice recorder synthesizer (vocoder) recreated the song "Daisy Bell", with musical accompaniment fromMax Mathews. Coincidentally,Arthur C. Clarkewas visiting his friend and colleague John Pierce at the Bell Labs Murray Hill facility. Clarke was so impressed by the demonstration that he used it in the climactic scene of his screenplay for his novel2001: A Space Odyssey,[10]where theHAL 9000computer sings the same song as astronautDave Bowmanputs it to sleep.[11]Despite the success of purely electronic speech synthesis, research into mechanical speech-synthesizers continues.[12][independent source needed] Linear predictive coding(LPC), a form ofspeech coding, began development with the work ofFumitada ItakuraofNagoya Universityand Shuzo Saito ofNippon Telegraph and Telephone(NTT) in 1966. Further developments in LPC technology were made byBishnu S. AtalandManfred R. SchroederatBell Labsduring the 1970s.[13]LPC was later the basis for early speech synthesizer chips, such as theTexas Instruments LPC Speech Chipsused in theSpeak & Spelltoys from 1978. In 1975, Fumitada Itakura developed theline spectral pairs(LSP) method for high-compression speech coding, while at NTT.[14][15][16]From 1975 to 1981, Itakura studied problems in speech analysis and synthesis based on the LSP method.[16]In 1980, his team developed an LSP-based speech synthesizer chip. LSP is an important technology for speech synthesis and coding, and in the 1990s was adopted by almost all international speech coding standards as an essential component, contributing to the enhancement of digital speech communication over mobile channels and the internet.[15] In 1975,MUSAwas released, and was one of the first Speech Synthesis systems. It consisted of a stand-alone computer hardware and a specialized software that enabled it to read Italian. A second version, released in 1978, was also able to sing Italian in an "a cappella" style.[17] Dominant systems in the 1980s and 1990s were theDECtalksystem, based largely on the work ofDennis Klattat MIT, and the Bell Labs system;[18]the latter was one of the first multilingual language-independent systems, making extensive use ofnatural language processingmethods. Handheldelectronics featuring speech synthesis began emerging in the 1970s. 
One of the first was theTelesensory Systems Inc.(TSI)Speech+portable calculator for the blind in 1976.[19][20]Other devices had primarily educational purposes, such as theSpeak & Spell toyproduced byTexas Instrumentsin 1978.[21]Fidelity released a speaking version of its electronic chess computer in 1979.[22]The firstvideo gameto feature speech synthesis was the 1980shoot 'em uparcade game,Stratovox(known in Japan asSpeak & Rescue), fromSun Electronics.[23][24]The firstpersonal computer gamewith speech synthesis wasManbiki Shoujo(Shoplifting Girl), released in 1980 for thePET 2001, for which the game's developer, Hiroshi Suzuki, developed a "zero cross" programming technique to produce a synthesized speech waveform.[25]Another early example, the arcade version ofBerzerk, also dates from 1980. TheMilton Bradley Companyproduced the first multi-playerelectronic gameusing voice synthesis,Milton, in the same year. In 1976, Computalker Consultants released their CT-1 Speech Synthesizer. Designed by D. Lloyd Rice and Jim Cooper, it was an analog synthesizer built to work with microcomputers using the S-100 bus standard.[26] Early electronic speech-synthesizers sounded robotic and were often barely intelligible. The quality of synthesized speech has steadily improved, but as of 2016[update]output from contemporary speech synthesis systems remains clearly distinguishable from actual human speech. Synthesized voices typically sounded male until 1990, whenAnn Syrdal, atAT&T Bell Laboratories, created a female voice.[27] Kurzweil predicted in 2005 that as thecost-performance ratiocaused speech synthesizers to become cheaper and more accessible, more people would benefit from the use of text-to-speech programs.[28] The most important qualities of a speech synthesis system arenaturalnessandintelligibility.[29]Naturalness describes how closely the output sounds like human speech, while intelligibility is the ease with which the output is understood. The ideal speech synthesizer is both natural and intelligible. Speech synthesis systems usually try to maximize both characteristics. The two primary technologies generating synthetic speech waveforms areconcatenative synthesisandformantsynthesis. Each technology has strengths and weaknesses, and the intended uses of a synthesis system will typically determine which approach is used. Concatenative synthesis is based on the concatenation (stringing together) of segments of recorded speech. Generally, concatenative synthesis produces the most natural-sounding synthesized speech. However, differences between natural variations in speech and the nature of the automated techniques for segmenting the waveforms sometimes result in audible glitches in the output. There are three main sub-types of concatenative synthesis. Unit selection synthesis uses large databases of recorded speech. During database creation, each recorded utterance is segmented into some or all of the following: individualphones,diphones, half-phones,syllables,morphemes,words,phrases, andsentences. Typically, the division into segments is done using a specially modifiedspeech recognizerset to a "forced alignment" mode with some manual correction afterward, using visual representations such as thewaveformandspectrogram.[30]Anindexof the units in the speech database is then created based on the segmentation and acoustic parameters like thefundamental frequency(pitch), duration, position in the syllable, and neighboring phones. 
At run time, the desired target utterance is created by determining the best chain of candidate units from the database (unit selection). This process is typically achieved using a specially weighted decision tree. Unit selection provides the greatest naturalness, because it applies only a small amount of digital signal processing (DSP) to the recorded speech. DSP often makes recorded speech sound less natural, although some systems use a small amount of signal processing at the point of concatenation to smooth the waveform. The output from the best unit-selection systems is often indistinguishable from real human voices, especially in contexts for which the TTS system has been tuned. However, maximum naturalness typically requires unit-selection speech databases to be very large, in some systems ranging into the gigabytes of recorded data, representing dozens of hours of speech.[31] Also, unit selection algorithms have been known to select segments from a place that results in less than ideal synthesis (e.g. minor words become unclear) even when a better choice exists in the database.[32] Recently, researchers have proposed various automated methods to detect unnatural segments in unit-selection speech synthesis systems.[33] Diphone synthesis uses a minimal speech database containing all the diphones (sound-to-sound transitions) occurring in a language. The number of diphones depends on the phonotactics of the language: for example, Spanish has about 800 diphones, and German about 2500. In diphone synthesis, only one example of each diphone is contained in the speech database. At runtime, the target prosody of a sentence is superimposed on these minimal units by means of digital signal processing techniques such as linear predictive coding, PSOLA[34] or MBROLA,[35] or more recent techniques such as pitch modification in the source domain using the discrete cosine transform.[36] Diphone synthesis suffers from the sonic glitches of concatenative synthesis and the robotic-sounding nature of formant synthesis, and has few of the advantages of either approach other than small size. As such, its use in commercial applications is declining,[citation needed] although it continues to be used in research because there are a number of freely available software implementations. An early example of diphone synthesis is a teaching robot, Leachim, that was invented by Michael J. Freeman.[37] Leachim contained information regarding the class curriculum and certain biographical information about the students whom it was programmed to teach.[38] It was tested in a fourth grade classroom in the Bronx, New York.[39][40] Domain-specific synthesis concatenates prerecorded words and phrases to create complete utterances. It is used in applications where the variety of texts the system will output is limited to a particular domain, like transit schedule announcements or weather reports.[41] The technology is very simple to implement, and has been in commercial use for a long time, in devices like talking clocks and calculators. The level of naturalness of these systems can be very high because the variety of sentence types is limited, and they closely match the prosody and intonation of the original recordings.[citation needed] Because these systems are limited by the words and phrases in their databases, they are not general-purpose and can only synthesize the combinations of words and phrases with which they have been preprogrammed.
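Returning to unit selection, the run-time search described above can be illustrated with a deliberately simplified sketch. The Python fragment below uses an additive target cost plus join (concatenation) cost minimized by dynamic programming, which is one common way of framing the search; it is not the specific weighted-decision-tree formulation mentioned above, and the toy units and cost functions are invented for the example:

# Illustrative dynamic-programming search for the cheapest chain of candidate
# units; the candidates and cost functions below are made up for the example.
def select_units(candidates, target_cost, join_cost):
    # candidates: one list of candidate units per target position
    best = [{u: (target_cost(0, u), [u]) for u in candidates[0]}]
    for t in range(1, len(candidates)):
        layer = {}
        for u in candidates[t]:
            prev_u, (prev_cost, prev_path) = min(
                best[t - 1].items(),
                key=lambda kv: kv[1][0] + join_cost(kv[0], u))
            layer[u] = (prev_cost + join_cost(prev_u, u) + target_cost(t, u),
                        prev_path + [u])
        best.append(layer)
    return min(best[-1].values(), key=lambda cp: cp[0])[1]

# toy example: units are (phone, pitch) pairs; prefer pitch near 100 Hz and
# small pitch jumps between consecutive units
cands = [[("k", 95), ("k", 130)], [("ae", 98), ("ae", 125)], [("t", 102)]]
path = select_units(
    cands,
    target_cost=lambda pos, u: abs(u[1] - 100) / 10.0,
    join_cost=lambda a, b: abs(a[1] - b[1]) / 20.0)
print(path)  # expected: the low-pitch chain [("k", 95), ("ae", 98), ("t", 102)]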
The blending of words within naturally spoken language however can still cause problems unless the many variations are taken into account. For example, innon-rhoticdialects of English the"r"in words like"clear"/ˈklɪə/is usually only pronounced when the following word has a vowel as its first letter (e.g."clear out"is realized as/ˌklɪəɹˈʌʊt/). Likewise inFrench, many final consonants become no longer silent if followed by a word that begins with a vowel, an effect calledliaison. Thisalternationcannot be reproduced by a simple word-concatenation system, which would require additional complexity to becontext-sensitive. Formantsynthesis does not use human speech samples at runtime. Instead, the synthesized speech output is created usingadditive synthesisand an acoustic model (physical modelling synthesis).[42]Parameters such asfundamental frequency,voicing, andnoiselevels are varied over time to create awaveformof artificial speech. This method is sometimes calledrules-based synthesis; however, many concatenative systems also have rules-based components. Many systems based on formant synthesis technology generate artificial, robotic-sounding speech that would never be mistaken for human speech. However, maximum naturalness is not always the goal of a speech synthesis system, and formant synthesis systems have advantages over concatenative systems. Formant-synthesized speech can be reliably intelligible, even at very high speeds, avoiding the acoustic glitches that commonly plague concatenative systems. High-speed synthesized speech is used by the visually impaired to quickly navigate computers using ascreen reader. Formant synthesizers are usually smaller programs than concatenative systems because they do not have a database of speech samples. They can therefore be used inembedded systems, wherememoryandmicroprocessorpower are especially limited. Because formant-based systems have complete control of all aspects of the output speech, a wide variety of prosodies andintonationscan be output, conveying not just questions and statements, but a variety of emotions and tones of voice. Examples of non-real-time but highly accurate intonation control in formant synthesis include the work done in the late 1970s for theTexas InstrumentstoySpeak & Spell, and in the early 1980sSegaarcademachines[43]and in manyAtari, Inc.arcade games[44]using theTMS5220 LPC Chips. Creating proper intonation for these projects was painstaking, and the results have yet to be matched by real-time text-to-speech interfaces.[45][when?] Articulatory synthesis consists of computational techniques for synthesizing speech based on models of the humanvocal tractand the articulation processes occurring there. The first articulatory synthesizer regularly used for laboratory experiments was developed atHaskins Laboratoriesin the mid-1970s byPhilip Rubin, Tom Baer, and Paul Mermelstein. This synthesizer, known as ASY, was based on vocal tract models developed atBell Laboratoriesin the 1960s and 1970s by Paul Mermelstein, Cecil Coker, and colleagues. Until recently, articulatory synthesis models have not been incorporated into commercial speech synthesis systems. A notable exception is theNeXT-based system originally developed and marketed by Trillium Sound Research, a spin-off company of theUniversity of Calgary, where much of the original research was conducted. 
Following the demise of the various incarnations of NeXT (started bySteve Jobsin the late 1980s and merged with Apple Computer in 1997), the Trillium software was published under the GNU General Public License, with work continuing asgnuspeech. The system, first marketed in 1994, provides full articulatory-based text-to-speech conversion using a waveguide or transmission-line analog of the human oral and nasal tracts controlled by Carré's "distinctive region model". More recent synthesizers, developed by Jorge C. Lucero and colleagues, incorporate models of vocal fold biomechanics, glottal aerodynamics and acoustic wave propagation in the bronchi, trachea, nasal and oral cavities, and thus constitute full systems of physics-based speech simulation.[46][47] HMM-based synthesis is a synthesis method based onhidden Markov models, also called Statistical Parametric Synthesis. In this system, thefrequency spectrum(vocal tract),fundamental frequency(voice source), and duration (prosody) of speech are modeled simultaneously by HMMs. Speechwaveformsare generated from HMMs themselves based on themaximum likelihoodcriterion.[48] Sinewave synthesis is a technique for synthesizing speech by replacing theformants(main bands of energy) with pure tone whistles.[49] Deep learning speech synthesis usesdeep neural networks(DNN) to produce artificial speech from text (text-to-speech) or spectrum (vocoder). The deep neural networks are trained using a large amount of recorded speech and, in the case of a text-to-speech system, the associated labels and/or input text. 15.aiuses amulti-speaker model—hundreds of voices are trained concurrently rather than sequentially, decreasing the required training time and enabling the model to learn and generalize shared emotional context, even for voices with no exposure to such emotional context.[50]Thedeep learningmodel used by the application isnondeterministic: each time that speech is generated from the same string of text, the intonation of the speech will be slightly different. The application also supports manually altering theemotionof a generated line usingemotional contextualizers(a term coined by this project), a sentence or phrase that conveys the emotion of the take that serves as a guide for the model during inference.[51][52] ElevenLabsis primarily known for itsbrowser-based, AI-assisted text-to-speech software, Speech Synthesis, which can produce lifelike speech by synthesizingvocal emotionandintonation.[53]The company states its software is built to adjust the intonation and pacing of delivery based on the context of language input used.[54]It uses advanced algorithms to analyze the contextual aspects of text, aiming to detect emotions like anger, sadness, happiness, or alarm, which enables the system to understand the user's sentiment,[55]resulting in a more realistic and human-like inflection. Other features include multilingual speech generation and long-form content creation with contextually-aware voices.[56][57] The DNN-based speech synthesizers are approaching the naturalness of the human voice. Examples of disadvantages of the method are low robustness when the data are not sufficient, lack of controllability and low performance in auto-regressive models. 
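The sinewave-synthesis idea mentioned above, in which formants are replaced by pure tones, is simple enough to sketch in a few lines. In the illustrative Python fragment below the three formant tracks are held constant; the frequencies are placeholder values roughly typical of a vowel rather than measurements from any particular utterance:

import numpy as np

def sinewave_speech(formant_tracks=(700.0, 1200.0, 2500.0), duration=0.5, sr=16000):
    # Replace each formant (band of energy) with a single pure tone at its
    # centre frequency and sum the tones, as in sinewave synthesis.
    t = np.arange(int(duration * sr)) / sr
    signal = sum(np.sin(2 * np.pi * f * t) for f in formant_tracks)
    return signal / np.max(np.abs(signal))  # normalize to [-1, 1]

samples = sinewave_speech()  # a crude, static vowel-like "whistle" approximation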
For tonal languages, such as Chinese or Taiwanese language, there are different levels oftone sandhirequired and sometimes the output of speech synthesizer may result in the mistakes of tone sandhi.[58] In 2023,VICEreporterJoseph Coxpublished findings that he had recorded five minutes of himself talking and then used a tool developed by ElevenLabs to create voice deepfakes that defeated a bank'svoice-authenticationsystem.[66] The process of normalizing text is rarely straightforward. Texts are full ofheteronyms,numbers, andabbreviationsthat all require expansion into a phonetic representation. There are many spellings in English which are pronounced differently based on context. For example, "My latest project is to learn how to better project my voice" contains two pronunciations of "project". Most text-to-speech (TTS) systems do not generatesemanticrepresentations of their input texts, as processes for doing so are unreliable, poorly understood, and computationally ineffective. As a result, variousheuristictechniques are used to guess the proper way to disambiguatehomographs, like examining neighboring words and using statistics about frequency of occurrence. Recently TTS systems have begun to use HMMs (discussedabove) to generate "parts of speech" to aid in disambiguating homographs. This technique is quite successful for many cases such as whether "read" should be pronounced as "red" implying past tense, or as "reed" implying present tense. Typical error rates when using HMMs in this fashion are usually below five percent. These techniques also work well for most European languages, although access to required trainingcorporais frequently difficult in these languages. Deciding how to convert numbers is another problem that TTS systems have to address. It is a simple programming challenge to convert a number into words (at least in English), like "1325" becoming "one thousand three hundred twenty-five". However, numbers occur in many different contexts; "1325" may also be read as "one three two five", "thirteen twenty-five" or "thirteen hundred and twenty five". A TTS system can often infer how to expand a number based on surrounding words, numbers, and punctuation, and sometimes the system provides a way to specify the context if it is ambiguous.[67]Roman numerals can also be read differently depending on context. For example, "Henry VIII" reads as "Henry the Eighth", while "Chapter VIII" reads as "Chapter Eight". Similarly, abbreviations can be ambiguous. For example, the abbreviation "in" for "inches" must be differentiated from the word "in", and the address "12 St John St." uses the same abbreviation for both "Saint" and "Street". TTS systems with intelligent front ends can make educated guesses about ambiguous abbreviations, while others provide the same result in all cases, resulting in nonsensical (and sometimes comical) outputs, such as "Ulysses S. Grant" being rendered as "Ulysses South Grant". Speech synthesis systems use two basic approaches to determine the pronunciation of a word based on itsspelling, a process which is often called text-to-phoneme orgrapheme-to-phoneme conversion (phonemeis the term used bylinguiststo describe distinctive sounds in alanguage). The simplest approach to text-to-phoneme conversion is the dictionary-based approach, where a large dictionary containing all the words of a language and their correctpronunciationsis stored by the program. 
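A toy version of the context-dependent number expansion just described might look like the following Python sketch; the trigger words and the particular expansions chosen are invented for illustration and are far cruder than a real TTS front-end:

def expand_number(token, prev_word=""):
    # Illustrative heuristic: read years digit-pairwise, code-like strings
    # digit by digit, and everything else as a cardinal number.
    if prev_word.lower() in {"in", "year", "circa"} and len(token) == 4:
        return f"{cardinal(int(token[:2]))} {cardinal(int(token[2:]))}"  # "thirteen twenty-five"
    if prev_word.lower() in {"extension", "code", "dial"}:
        return " ".join(cardinal(int(d)) for d in token)                  # "one three two five"
    return cardinal(int(token))                                           # cardinal reading

def cardinal(n):
    # Minimal cardinal-number speller, sufficient for the example values.
    ones = ["zero", "one", "two", "three", "four", "five", "six", "seven",
            "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen",
            "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"]
    tens = ["", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy",
            "eighty", "ninety"]
    if n < 20:
        return ones[n]
    if n < 100:
        return tens[n // 10] + ("-" + ones[n % 10] if n % 10 else "")
    if n < 1000:
        rest = " " + cardinal(n % 100) if n % 100 else ""
        return ones[n // 100] + " hundred" + rest
    rest = " " + cardinal(n % 1000) if n % 1000 else ""
    return cardinal(n // 1000) + " thousand" + rest

print(expand_number("1325"))                    # one thousand three hundred twenty-five
print(expand_number("1325", prev_word="in"))    # thirteen twenty-five
print(expand_number("1325", prev_word="dial"))  # one three two five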
Determining the correct pronunciation of each word is a matter of looking up each word in the dictionary and replacing the spelling with the pronunciation specified in the dictionary. The other approach is rule-based, in which pronunciation rules are applied to words to determine their pronunciations based on their spellings. This is similar to the "sounding out", orsynthetic phonics, approach to learning reading. Each approach has advantages and drawbacks. The dictionary-based approach is quick and accurate, but completely fails if it is given a word which is not in its dictionary. As dictionary size grows, so too does the memory space requirements of the synthesis system. On the other hand, the rule-based approach works on any input, but the complexity of the rules grows substantially as the system takes into account irregular spellings or pronunciations. (Consider that the word "of" is very common in English, yet is the only word in which the letter "f" is pronounced[v].) As a result, nearly all speech synthesis systems use a combination of these approaches. Languages with aphonemic orthographyhave a very regular writing system, and the prediction of the pronunciation of words based on their spellings is quite successful. Speech synthesis systems for such languages often use the rule-based method extensively, resorting to dictionaries only for those few words, like foreign names and loanwords, whose pronunciations are not obvious from their spellings. On the other hand, speech synthesis systems for languages like English, which have extremely irregular spelling systems, are more likely to rely on dictionaries, and to use rule-based methods only for unusual words, or words that are not in their dictionaries. The consistent evaluation of speech synthesis systems may be difficult because of a lack of universally agreed objective evaluation criteria. Different organizations often use different speech data. The quality of speech synthesis systems also depends on the quality of the production technique (which may involve analogue or digital recording) and on the facilities used to replay the speech. Evaluating speech synthesis systems has therefore often been compromised by differences between production techniques and replay facilities. Since 2005, however, some researchers have started to evaluate speech synthesis systems using a common speech dataset.[68] A study in the journalSpeech Communicationby Amy Drahota and colleagues at theUniversity of Portsmouth,UK, reported that listeners to voice recordings could determine, at better than chance levels, whether or not the speaker was smiling.[69][70][71]It was suggested that identification of the vocal features that signal emotional content may be used to help make synthesized speech sound more natural. One of the related issues is modification of thepitch contourof the sentence, depending upon whether it is an affirmative, interrogative or exclamatory sentence. One of the techniques for pitch modification[36]usesdiscrete cosine transformin the source domain (linear predictionresidual). Such pitch synchronous pitch modification techniques need a priori pitch marking of the synthesis speech database using techniques such as epoch extraction using dynamicplosionindex applied on the integrated linear prediction residual of thevoicedregions of speech.[72]In general, prosody remains a challenge for speech synthesizers, and is an active research topic. Popular systems offering speech synthesis as a built-in capability. 
In the early 1980s, TI was known as a pioneer in speech synthesis, and a highly popular plug-in speech synthesizer module was available for the TI-99/4 and 4A. Speech synthesizers were offered free with the purchase of a number of cartridges and were used by many TI-written video games (games offered with speech during this promotion includedAlpinerandParsec). The synthesizer uses a variant of linear predictive coding and has a small in-built vocabulary. The original intent was to release small cartridges that plugged directly into the synthesizer unit, which would increase the device's built-in vocabulary. However, the success of software text-to-speech in the Terminal Emulator II cartridge canceled that plan. TheMattelIntellivisiongame console offered theIntellivoiceVoice Synthesis module in 1982. It included theSP0256 Narratorspeech synthesizer chip on a removable cartridge. The Narrator had 2kB of Read-Only Memory (ROM), and this was utilized to store a database of generic words that could be combined to make phrases in Intellivision games. Since the Orator chip could also accept speech data from external memory, any additional words or phrases needed could be stored inside the cartridge itself. The data consisted of strings of analog-filter coefficients to modify the behavior of the chip's synthetic vocal-tract model, rather than simple digitized samples. Also released in 1982,Software Automatic Mouthwas the first commercial all-software voice synthesis program. It was later used as the basis forMacintalk. The program was available for non-Macintosh Apple computers (including the Apple II, and the Lisa), various Atari models and the Commodore 64. The Apple version preferred additional hardware that contained DACs, although it could instead use the computer's one-bit audio output (with the addition of much distortion) if the card was not present. The Atari made use of the embedded POKEY audio chip. Speech playback on the Atari normally disabled interrupt requests and shut down the ANTIC chip during vocal output. The audible output is extremely distorted speech when the screen is on. The Commodore 64 made use of the 64's embedded SID audio chip. Arguably, the first speech system integrated into anoperating systemwas the circa 1983 unreleased Atari1400XL/1450XLcomputers. These used the Votrax SC01 chip and afinite-state machineto enable World English Spelling text-to-speech synthesis.[74] TheAtari STcomputers were sold with "stspeech.tos" on floppy disk. The first speech system integrated into anoperating systemthat shipped in quantity wasApple Computer'sMacInTalk. The software was licensed from third-party developers Joseph Katz and Mark Barton (later, SoftVoice, Inc.) and was featured during the 1984 introduction of the Macintosh computer. This January demo required 512 kilobytes of RAM memory. As a result, it could not run in the 128 kilobytes of RAM the first Mac actually shipped with.[75]So, the demo was accomplished with a prototype 512k Mac, although those in attendance were not told of this and the synthesis demo created considerable excitement for the Macintosh. In the early 1990s Apple expanded its capabilities offering system wide text-to-speech support. With the introduction of faster PowerPC-based computers they included higher quality voice sampling. Apple also introducedspeech recognitioninto its systems which provided a fluid command set. More recently, Apple has added sample-based voices. 
Starting as a curiosity, the speech system of AppleMacintoshhas evolved into a fully supported program,PlainTalk, for people with vision problems.VoiceOverwas for the first time featured in 2005 inMac OS X Tiger(10.4). During 10.4 (Tiger) and first releases of 10.5 (Leopard) there was only one standard voice shipping with Mac OS X. Starting with 10.6 (Snow Leopard), the user can choose out of a wide range list of multiple voices. VoiceOver voices feature the taking of realistic-sounding breaths between sentences, as well as improved clarity at high read rates over PlainTalk. Mac OS X also includessay, acommand-line basedapplication that converts text to audible speech. TheAppleScriptStandard Additions includes a say verb that allows a script to use any of the installed voices and to control the pitch, speaking rate and modulation of the spoken text. Used inAlexaand asSoftware as a Servicein AWS[76](from 2017). The second operating system to feature advanced speech synthesis capabilities wasAmigaOS, introduced in 1985. The voice synthesis was licensed byCommodore Internationalfrom SoftVoice, Inc., who also developed the originalMacinTalktext-to-speech system. It featured a complete system of voice emulation for American English, with both male and female voices and "stress" indicator markers, made possible through theAmiga's audiochipset.[77]The synthesis system was divided into a translator library which converted unrestricted English text into a standard set of phonetic codes and a narrator device which implemented a formant model of speech generation.. AmigaOS also featured a high-level "Speak Handler", which allowed command-line users to redirect text output to speech. Speech synthesis was occasionally used in third-party programs, particularly word processors and educational software. The synthesis software remained largely unchanged from the first AmigaOS release and Commodore eventually removed speech synthesis support from AmigaOS 2.1 onward. Despite the American English phoneme limitation, an unofficial version with multilingual speech synthesis was developed. This made use of an enhanced version of the translator library which could translate a number of languages, given a set of rules for each language.[78] ModernWindowsdesktop systems can useSAPI 4andSAPI 5components to support speech synthesis andspeech recognition. SAPI 4.0 was available as an optional add-on forWindows 95andWindows 98.Windows 2000addedNarrator, a text-to-speech utility for people who have visual impairment. Third-party programs such as JAWS for Windows, Window-Eyes, Non-visual Desktop Access, Supernova and System Access can perform various text-to-speech tasks such as reading text aloud from a specified website, email account, text document, the Windows clipboard, the user's keyboard typing, etc. Not all programs can use speech synthesis directly.[79]Some programs can use plug-ins, extensions or add-ons to read text aloud. Third-party programs are available that can read text from the system clipboard. Microsoft Speech Serveris a server-based package for voice synthesis and recognition. It is designed for network use withweb applicationsandcall centers. From 1971 to 1996, Votrax produced a number of commercial speech synthesizer components. A Votrax synthesizer was included in the first generation Kurzweil Reading Machine for the Blind. Text-to-speech (TTS) refers to the ability of computers to read text aloud. 
A TTS engine converts written text to a phonemic representation, then converts the phonemic representation to waveforms that can be output as sound. TTS engines with different languages, dialects and specialized vocabularies are available through third-party publishers.[80] Version 1.6 ofAndroidadded support for speech synthesis (TTS).[81] Currently, there are a number ofapplications,pluginsand gadgets that can read messages directly from ane-mail clientand web pages from aweb browserorGoogle Toolbar. Some specialized software can narrateRSS-feeds. On one hand, online RSS-narrators simplify information delivery by allowing users to listen to their favourite news sources and to convert them topodcasts. On the other hand, on-line RSS-readers are available on almost any personal computer connected to the Internet. Users can download generated audio files to portable devices, e.g. with a help ofpodcastreceiver, and listen to them while walking, jogging or commuting to work. A growing field in Internet based TTS is web-basedassistive technology, e.g. 'Browsealoud' from a UK company andReadspeaker. It can deliver TTS functionality to anyone (for reasons of accessibility, convenience, entertainment or information) with access to a web browser. The non-profit projectPediaphonwas created in 2006 to provide a similar web-based TTS interface to the Wikipedia.[82] Other work is being done in the context of theW3Cthrough the W3C Audio Incubator Group with the involvement of The BBC and Google Inc. Someopen-source softwaresystems are available, such as: At the 2018Conference on Neural Information Processing Systems(NeurIPS) researchers fromGooglepresented the work 'Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis', whichtransfers learningfromspeaker verificationto achieve text-to-speech synthesis, that can be made to sound almost like anybody from a speech sample of only 5 seconds.[85] Also researchers fromBaidu Researchpresented avoice cloningsystem with similar aims at the 2018 NeurIPS conference,[86]though the result is rather unconvincing. By 2019 the digital sound-alikes found their way to the hands of criminals asSymantecresearchers know of 3 cases where digital sound-alikes technology has been used for crime.[87][88] This increases the stress on the disinformation situation coupled with the facts that In March 2020, afreewareweb application called 15.ai that generates high-quality voices from an assortment of fictional characters from a variety of media sources was released.[91]Initial characters includedGLaDOSfromPortal,Twilight SparkleandFluttershyfrom the showMy Little Pony: Friendship Is Magic, and theTenth DoctorfromDoctor Who. A number ofmarkup languageshave been established for the rendition of text as speech in anXML-compliant format. The most recent isSpeech Synthesis Markup Language(SSML), which became aW3C recommendationin 2004. Older speech synthesis markup languages include Java Speech Markup Language (JSML) andSABLE. Although each of these was proposed as a standard, none of them have been widely adopted.[citation needed] Speech synthesis markup languages are distinguished from dialogue markup languages.VoiceXML, for example, includes tags related to speech recognition, dialogue management and touchtone dialing, in addition to text-to-speech markup.[citation needed] Speech synthesis has long been a vital assistive technology tool and its application in this area is significant and widespread. 
It allows environmental barriers to be removed for people with a wide range of disabilities. The longest application has been in the use ofscreen readersfor people with visual impairment, but text-to-speech systems are now commonly used by people withdyslexiaand otherreading disabilitiesas well as by pre-literate children.[92]They are also frequently employed to aid those with severespeech impairmentusually through a dedicatedvoice output communication aid.[93]Work to personalize a synthetic voice to better match a person's personality or historical voice is becoming available.[94]A noted application, of speech synthesis, was theKurzweil Reading Machine for the Blindwhich incorporated text-to-phonetics software based on work fromHaskins Laboratoriesand a black-box synthesizer built byVotrax.[95] Speech synthesis techniques are also used in entertainment productions such as games and animations. In 2007, Animo Limited announced the development of a software application package based on its speech synthesis software FineSpeech, explicitly geared towards customers in the entertainment industries, able to generate narration and lines of dialogue according to user specifications.[96]The application reached maturity in 2008, when NECBiglobeannounced a web service that allows users to create phrases from the voices of characters from the JapaneseanimeseriesCode Geass: Lelouch of the Rebellion R2.[97]15.ai has been frequently used forcontent creationin variousfandoms, including theMy Little Pony: Friendship Is Magicfandom, theTeam Fortress 2fandom, thePortalfandom, and theSpongeBob SquarePantsfandom.[citation needed] Text-to-speech for disability and impaired communication aids have become widely available. Text-to-speech is also finding new applications; for example, speech synthesis combined withspeech recognitionallows for interaction with mobile devices vianatural language processinginterfaces. Some users have also created AIvirtual assistantsusing 15.ai and external voice control software.[51][52] Text-to-speech is also used in second language acquisition. Voki, for instance, is an educational tool created by Oddcast that allows users to create their own talking avatar, using different accents. They can be emailed, embedded on websites or shared on social media. Content creators have used voice cloning tools to recreate their voices for podcasts,[98][99]narration,[54]and comedy shows.[100][101][102]Publishers and authors have also used such software to narrate audiobooks and newsletters.[103][104]Another area of application is AI video creation with talking heads. Webapps and video editors like Elai.io orSynthesiaallow users to create video content involving AI avatars, who are made to speak using text-to-speech technology.[105][106] Speech synthesis is a valuable computational aid for the analysis and assessment of speech disorders. Avoice qualitysynthesizer, developed by Jorge C. Lucero et al. at theUniversity of Brasília, simulates the physics ofphonationand includes models of vocal frequency jitter and tremor, airflow noise and laryngeal asymmetries.[46]The synthesizer has been used to mimic thetimbreofdysphonicspeakers with controlled levels of roughness, breathiness and strain.[47]
https://en.wikipedia.org/wiki/Speech_synthesis
Electrical telegraphyispoint-to-pointdistance communicating via sending electric signals over wire, a system primarily used from the 1840s until the late 20th century. It was the first electricaltelecommunicationssystem and the most widely used of a number of early messaging systems calledtelegraphs, that were devised to send text messages more quickly than physically carrying them.[1][2]Electrical telegraphy can be considered the first example ofelectrical engineering.[3] Electrical telegraphy consisted of two or more geographically separated stations, calledtelegraph offices. The offices were connected by wires, usually supported overhead onutility poles. Many electrical telegraph systems were invented that operated in different ways, but the ones that became widespread fit into two broad categories. First are the needle telegraphs, in which electric current sent down the telegraph line produces electromagnetic force to move a needle-shaped pointer into position over a printed list. Early needle telegraph models used multiple needles, thus requiring multiple wires to be installed between stations. The first commercial needle telegraph system and the most widely used of its type was theCooke and Wheatstone telegraph, invented in 1837. The second category are armature systems, in which the current activates atelegraph sounderthat makes a click; communication on this type of system relies on sending clicks in coded rhythmic patterns. The archetype of this category was the Morse system and the code associated with it, both invented bySamuel Morsein 1838. In 1865, the Morse system became the standard for international communication, using a modified form of Morse's code that had been developed for German railways. Electrical telegraphs were used by the emerging railway companies to provide signals for train control systems, minimizing the chances of trains colliding with each other.[4]This was built around thesignalling block systemin which signal boxes along the line communicate with neighbouring boxes by telegraphic sounding ofsingle-stroke bellsand three-positionneedle telegraphinstruments. In the 1840s, the electrical telegraph supersededoptical telegraphsystems such as semaphores, becoming the standard way to send urgent messages. By the latter half of the century, mostdeveloped nationshad commercial telegraph networks with local telegraph offices in most cities and towns, allowing the public to send messages (calledtelegrams) addressed to any person in the country, for a fee. Beginning in 1850,submarine telegraph cablesallowed for the first rapid communication between people on different continents. The telegraph's nearly-instant transmission of messages across continents – and between continents – had widespread social and economic impacts. The electric telegraph led toGuglielmo Marconi's invention ofwireless telegraphy, the first means ofradiowavetelecommunication, which he began in 1894.[5] In the early 20th century, manual operation of telegraph machines was slowly replaced byteleprinternetworks. Increasing use of thetelephonepushed telegraphy into only a few specialist uses; its use by the general public dwindled to greetings for special occasions. The rise of theInternetandemailin the 1990s largely made dedicated telegraphy networks obsolete. Prior to the electric telegraph, visual systems were used, includingbeacons,smoke signals,flag semaphore, andoptical telegraphsfor visual signals to communicate over distances of land.[6] An auditory predecessor was West Africantalking drums. 
In the 19th century,Yorubadrummers used talking drums to mimic humantonallanguage[7][8]to communicate complex messages – usually regarding news of birth, ceremonies, and military conflict – over 4–5 mile distances.[9] Possibly the earliest design and conceptualization for a telegraph system was by the BritishpolymathRobert Hooke, who gave a vivid and comprehensive outline of visual telegraphy to theRoyal Societyin a 1684 submission in which he outlined many practical details. The system was largely motivated by military concerns, following the Battle of Vienna in 1683.[10][11] The first official optical telegraph was invented in France in the 18th century byClaude Chappeand his brothers. The Chappe system would stretch nearly 5,000 km with 556 stations and was used until the 1850s.[12] Fromearly studies of electricity, electrical phenomena were known to travel with great speed, and many experimenters worked on the application of electricity tocommunicationsat a distance. All the known effects of electricity – such assparks,electrostatic attraction,chemical changes,electric shocks, and laterelectromagnetism– were applied to the problems of detecting controlled transmissions of electricity at various distances.[13] In 1753, an anonymous writer in theScots Magazinesuggested an electrostatic telegraph. Using one wire for each letter of the alphabet, a message could be transmitted by connecting the wire terminals in turn to an electrostatic machine, and observing the deflection ofpithballs at the far end.[14]The writer has never been positively identified, but the letter was signed C.M. and posted fromRenfrewleading to a Charles Marshall of Renfrew being suggested.[15]Telegraphs employing electrostatic attraction were the basis of early experiments in electrical telegraphy in Europe, but were abandoned as being impractical and were never developed into a useful communication system.[16] In 1774,Georges-Louis Le Sagerealised an early electric telegraph. The telegraph had a separate wire for each of the 26 letters of thealphabetand its range was only between two rooms of his home.[17] In 1800,Alessandro Voltainvented thevoltaic pile, providing acontinuous currentofelectricityfor experimentation. This became a source of a low-voltage current that could be used to produce more distinct effects, and which was far less limited than the momentary discharge of anelectrostatic machine, which withLeyden jarswere the only previously known human-made sources of electricity. Another very early experiment in electrical telegraphy was an "electrochemical telegraph" created by theGermanphysician, anatomist and inventorSamuel Thomas von Sömmeringin 1809, based on an earlier 1804 design by Spanishpolymathand scientistFrancisco Salva Campillo.[18]Both their designs employed multiple wires (up to 35) to represent almost all Latin letters and numerals. Thus, messages could be conveyed electrically up to a few kilometers (in von Sömmering's design), with each of the telegraph receiver's wires immersed in a separate glass tube of acid. An electric current was sequentially applied by the sender through the various wires representing each letter of a message; at the recipient's end, the currents electrolysed the acid in the tubes in sequence, releasing streams of hydrogen bubbles next to each associated letter or numeral. The telegraph receiver's operator would watch the bubbles and could then record the transmitted message.[18]This is in contrast to later telegraphs that used a single wire (with ground return). 
Hans Christian Ørsteddiscovered in 1820 that an electric current produces a magnetic field that will deflect a compass needle. In the same yearJohann Schweiggerinvented thegalvanometer, with a coil of wire around a compass, that could be used as a sensitive indicator for an electric current.[19]Also that year,André-Marie Ampèresuggested that telegraphy could be achieved by placing small magnets under the ends of a set of wires, one pair of wires for each letter of the alphabet. He was apparently unaware of Schweigger's invention at the time, which would have made his system much more sensitive. In 1825,Peter Barlowtried Ampère's idea but only got it to work over 200 feet (61 m) and declared it impractical. In 1830William Ritchieimproved on Ampère's design by placing the magnetic needles inside a coil of wire connected to each pair of conductors. He successfully demonstrated it, showing the feasibility of the electromagnetic telegraph, but only within a lecture hall.[20] In 1825,William Sturgeoninvented theelectromagnet, with a single winding of uninsulated wire on a piece of varnishediron, which increased the magnetic force produced by electric current.Joseph Henryimproved it in 1828 by placing several windings of insulated wire around the bar, creating a much more powerful electromagnet which could operate a telegraph through the high resistance of long telegraph wires.[21]During his tenure atThe Albany Academyfrom 1826 to 1832, Henry first demonstrated the theory of the 'magnetic telegraph' by ringing a bell through one-mile (1.6 km) of wire strung around the room in 1831.[22] In 1835,Joseph HenryandEdward Davyindependently invented the mercury dippingelectrical relay, in which a magnetic needle is dipped into a pot of mercury when an electric current passes through the surrounding coil.[23][24][25]In 1837, Davy invented the much more practical metallic make-and-break relay which became the relay of choice in telegraph systems and a key component for periodically renewing weak signals.[26]Davy demonstrated his telegraph system inRegent's Parkin 1837 and was granted a patent on 4 July 1838.[27]Davy also invented a printing telegraph which used the electric current from the telegraph signal to mark a ribbon of calico infused withpotassium iodideandcalcium hypochlorite.[28] The first working telegraph was built by the English inventorFrancis Ronaldsin 1816 and used static electricity.[29]At the family home onHammersmith Mall, he set up a complete subterranean system in a 175-yard (160 m) long trench as well as an eight-mile (13 km) long overhead telegraph. The lines were connected at both ends to revolving dials marked with the letters of the alphabet and electrical impulses sent along the wire were used to transmit messages. Offering his invention to theAdmiraltyin July 1816, it was rejected as "wholly unnecessary".[30]His account of the scheme and the possibilities of rapid global communication inDescriptions of an Electrical Telegraph and of some other Electrical Apparatus[31]was the first published work on electric telegraphy and even described the risk ofsignal retardationdue to induction.[32]Elements of Ronalds' design were utilised in the subsequent commercialisation of the telegraph over 20 years later.[33] TheSchilling telegraph, invented byBaron Schillingvon Canstatt in 1832, was an earlyneedle telegraph. It had a transmitting device that consisted of a keyboard with 16 black-and-white keys.[34]These served for switching the electric current. 
The receiving instrument consisted of sixgalvanometerswith magnetic needles, suspended fromsilkthreads. The two stations of Schilling's telegraph were connected by eight wires; six were connected with the galvanometers, one served for the return current and one for a signal bell. When at the starting station the operator pressed a key, the corresponding pointer was deflected at the receiving station. Different positions of black and white flags on different disks gave combinations which corresponded to the letters or numbers. Pavel Schilling subsequently improved its apparatus by reducing the number of connecting wires from eight to two. On 21 October 1832, Schilling managed a short-distance transmission of signals between two telegraphs in different rooms of his apartment. In 1836, the British government attempted to buy the design but Schilling instead accepted overtures fromNicholas I of Russia. Schilling's telegraph was tested on a 5-kilometre-long (3.1 mi) experimental underground and underwater cable, laid around the building of the main Admiralty in Saint Petersburg and was approved for a telegraph between the imperial palace atPeterhofand the naval base atKronstadt. However, the project was cancelled following Schilling's death in 1837.[35]Schilling was also one of the first to put into practice the idea of thebinarysystem of signal transmission.[34]His work was taken over and developed byMoritz von Jacobiwho invented telegraph equipment that was used by TsarAlexander IIIto connect the Imperial palace atTsarskoye SeloandKronstadt Naval Base. In 1833,Carl Friedrich Gauss, together with the physics professorWilhelm WeberinGöttingen, installed a 1,200-metre-long (3,900 ft) wire above the town's roofs. Gauss combined thePoggendorff-Schweigger multiplicatorwith his magnetometer to build a more sensitive device, thegalvanometer. To change the direction of the electric current, he constructed acommutatorof his own. As a result, he was able to make the distant needle move in the direction set by the commutator on the other end of the line. At first, Gauss and Weber used the telegraph to coordinate time, but soon they developed other signals and finally, their own alphabet. The alphabet was encoded in a binary code that was transmitted by positive or negative voltage pulses which were generated by means of moving an induction coil up and down over a permanent magnet and connecting the coil with the transmission wires by means of the commutator. The page of Gauss's laboratory notebook containing both his code and the first message transmitted, as well as a replica of the telegraph made in the 1850s under the instructions of Weber are kept in the faculty of physics at theUniversity of Göttingen, in Germany. Gauss was convinced that this communication would be of help to his kingdom's towns. Later in the same year, instead of avoltaic pile, Gauss used aninductionpulse, enabling him to transmit seven letters a minute instead of two. The inventors and university did not have the funds to develop the telegraph on their own, but they received funding fromAlexander von Humboldt.Carl August SteinheilinMunichwas able to build a telegraph network within the city in 1835–1836. In 1838, Steinheil installed a telegraph along theNuremberg–Fürth railway line, built in 1835 as the first German railroad, which was the firstearth-return telegraphput into service. 
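The Gauss–Weber alphabet described above was a binary code sent as a sequence of positive and negative pulses over a single circuit. The following Python sketch illustrates that general idea only; the five-pulse code table here is invented for illustration and is not Gauss's historical alphabet.

# Illustrative sketch of a binary pulse telegraph in the spirit of the
# Gauss-Weber scheme: each letter maps to a fixed-length sequence of
# positive (+) and negative (-) pulses sent over a single circuit.
# The code table below is invented for illustration; it is NOT the
# historical Gauss alphabet.
from itertools import product

ALPHABET = "abcdefghijklmnopqrstuvwxyz"

# Five two-valued pulses give 2**5 = 32 combinations, enough for 26 letters.
CODE = {letter: "".join(bits)
        for letter, bits in zip(ALPHABET, product("+-", repeat=5))}
DECODE = {v: k for k, v in CODE.items()}

def transmit(message):
    """Turn a message into the pulse groups sent down the wire."""
    return [CODE[ch] for ch in message.lower() if ch in CODE]

def receive(pulse_groups):
    """Turn received pulse groups back into text."""
    return "".join(DECODE[group] for group in pulse_groups)

if __name__ == "__main__":
    wire = transmit("gauss")
    print(wire)           # five pulses per letter
    print(receive(wire))  # 'gauss'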
By 1837,William Fothergill CookeandCharles Wheatstonehad co-developed atelegraph systemwhich used a number of needles on a board that could be moved to point to letters of the alphabet. Any number of needles could be used, depending on the number of characters it was required to code. In May 1837 they patented their system. The patent recommended five needles, which coded twenty of the alphabet's 26 letters. Samuel Morseindependently developed and patented a recording electric telegraph in 1837. Morse's assistantAlfred Vaildeveloped an instrument that was called the register for recording the received messages. It embossed dots and dashes on a moving paper tape by a stylus which was operated by an electromagnet.[36]Morse and Vail developed theMorse codesignallingalphabet. On 24 May 1844, Morse sent to Vail the historic first message “WHAT HATH GOD WROUGHT" from theCapitolin Washington to theold Mt. Clare DepotinBaltimore.[37][38] The first commercial electrical telegraph was theCooke and Wheatstone system. A demonstration four-needle system was installed on theEustontoCamden Townsection ofRobert Stephenson'sLondon and Birmingham Railwayin 1837 for signalling rope-hauling of locomotives.[39]It was rejected in favour of pneumatic whistles.[40]Cooke and Wheatstone had their first commercial success with a system installed on theGreat Western Railwayover the 13 miles (21 km) fromPaddington stationtoWest Draytonin 1838.[41]This was a five-needle, six-wire[40]system, and had the major advantage of displaying the letter being sent so operators did not need to learn a code. The insulation failed on the underground cables between Paddington and West Drayton,[42][43]and when the line was extended toSloughin 1843, the system was converted to a one-needle, two-wire configuration with uninsulated wires on poles.[44]The cost of installing wires was ultimately more economically significant than the cost of training operators. The one-needle telegraph proved highly successful on British railways, and 15,000 sets were in use at the end of the nineteenth century; some remained in service in the 1930s.[45]TheElectric Telegraph Company, the world's first public telegraphy company, was formed in 1845 by financierJohn Lewis Ricardoand Cooke.[46][47] Wheatstone developed a practical alphabetical system in 1840 called the A.B.C. System, used mostly on private wires. This consisted of a "communicator" at the sending end and an "indicator" at the receiving end. The communicator consisted of a circular dial with a pointer and the 26 letters of the alphabet (and four punctuation marks) around its circumference. Against each letter was a key that could be pressed. A transmission would begin with the pointers on the dials at both ends set to the start position. The transmitting operator would then press down the key corresponding to the letter to be transmitted. In the base of the communicator was amagnetoactuated by a handle on the front. This would be turned to apply an alternating voltage to the line. Each half cycle of the current would advance the pointers at both ends by one position. When the pointer reached the position of the depressed key, it would stop and the magneto would be disconnected from the line. The communicator's pointer was geared to the magneto mechanism. The indicator's pointer was moved by a polarised electromagnet whosearmaturewas coupled to it through anescapement. Thus the alternating line voltage moved the indicator's pointer on to the position of the depressed key on the communicator. 
Pressing another key would then release the pointer and the previous key, and re-connect the magneto to the line.[48]These machines were very robust and simple to operate, and they stayed in use in Britain until well into the 20th century.[49][50] The Morse system uses a single wire between offices. At the sending station, an operator taps on a switch called atelegraph key, spelling out text messages inMorse code. Originally, the armature was intended to make marks on paper tape, but operators learned to interpret the clicks and it was more efficient to write down the message directly. In 1851, a conference in Vienna of countries in the German-Austrian Telegraph Union (which included many central European countries) adopted the Morse telegraph as the system for international communications.[51]Theinternational Morse codeadopted was considerably modified from the originalAmerican Morse code, and was based on a code used on Hamburg railways (Gerke, 1848).[52]A common code was a necessary step to allow direct telegraph connection between countries. With different codes, additional operators were required to translate and retransmit the message. In 1865, a conference in Paris adopted Gerke's code as the International Morse code and was henceforth the international standard. The US, however, continued to use American Morse code internally for some time, hence international messages required retransmission in both directions.[53] In the United States, the Morse/Vail telegraph wasquickly deployed in the two decades following the first demonstrationin 1844. Theoverland telegraphconnected the west coast of the continent to the east coast by 24 October 1861, bringing an end to thePony Express.[54] France was slow to adopt the electrical telegraph, because of the extensiveoptical telegraphsystem built during theNapoleonic era. There was also serious concern that an electrical telegraph could be quickly put out of action by enemy saboteurs, something that was much more difficult to do with optical telegraphs which had no exposed hardware between stations. TheFoy-Breguet telegraphwas eventually adopted. This was a two-needle system using two signal wires but displayed in a uniquely different way to other needle telegraphs. The needles made symbols similar to theChappeoptical system symbols, making it more familiar to the telegraph operators. The optical system was decommissioned starting in 1846, but not completely until 1855. In that year the Foy-Breguet system was replaced with the Morse system.[55] As well as the rapid expansion of the use of the telegraphs along the railways, they soon spread into the field of mass communication with the instruments being installed inpost offices. The era of mass personal communication had begun. Telegraph networks were expensive to build, but financing was readily available, especially from London bankers. By 1852, National systems were in operation in major countries:[56][57] The New York and Mississippi Valley Printing Telegraph Company, for example, was created in 1852 in Rochester, New York and eventually became theWestern Union Telegraph Company.[60]Although many countries had telegraph networks, there was noworldwideinterconnection. Message by post was still the primary means of communication to countries outside Europe. Telegraphy was introduced inCentral Asiaduring the 1870s.[62] A continuing goal in telegraphy was to reduce the cost per message by reducing hand-work, or increasing the sending rate. 
There were many experiments with moving pointers, and various electrical encodings. However, most systems were too complicated and unreliable. A successful expedient to reduce the cost per message was the development oftelegraphese. The first system that did not require skilled technicians to operate was Charles Wheatstone's ABC system in 1840 in which the letters of the alphabet were arranged around a clock-face, and the signal caused a needle to indicate the letter. This early system required the receiver to be present in real time to record the message and it reached speeds of up to 15 words a minute. In 1846,Alexander Bainpatented a chemical telegraph in Edinburgh. The signal current moved an iron pen across a moving paper tape soaked in a mixture of ammonium nitrate and potassium ferrocyanide, decomposing the chemical and producing readable blue marks in Morse code. The speed of the printing telegraph was 16 and a half words per minute, but messages still required translation into English by live copyists. Chemical telegraphy came to an end in the US in 1851, when the Morse group defeated the Bain patent in the US District Court.[63] For a brief period, starting with the New York–Boston line in 1848, some telegraph networks began to employ sound operators, who were trained to understand Morse code aurally. Gradually, the use of sound operators eliminated the need for telegraph receivers to include register and tape. Instead, the receiving instrument was developed into a "sounder", an electromagnet that was energized by a current and attracted a small iron lever. When the sounding key was opened or closed, the sounder lever struck an anvil. The Morse operator distinguished a dot and a dash by the short or long interval between the two clicks. The message was then written out in long-hand.[64] Royal Earl Housedeveloped and patented a letter-printing telegraph system in 1846 which employed an alphabetic keyboard for the transmitter and automatically printed the letters on paper at the receiver,[65]and followed this up with a steam-powered version in 1852.[66]Advocates of printing telegraphy said it would eliminate Morse operators' errors. The House machine was used on four main American telegraph lines by 1852. The speed of the House machine was announced as 2600 words an hour.[67] David Edward Hughesinvented the printing telegraph in 1855; it used a keyboard of 26 keys for the alphabet and a spinning type wheel that determined the letter being transmitted by the length of time that had elapsed since the previous transmission. The system allowed for automatic recording on the receiving end. The system was very stable and accurate and became accepted around the world.[68] The next improvement was theBaudot codeof 1874. French engineerÉmile Baudotpatented a printing telegraph in which the signals were translated automatically into typographic characters. Each character was assigned a five-bit code, mechanically interpreted from the state of five on/off switches. Operators had to maintain a steady rhythm, and the usual speed of operation was 30 words per minute.[69] By this point, reception had been automated, but the speed and accuracy of the transmission were still limited to the skill of the human operator. The first practical automated system was patented by Charles Wheatstone. The message (inMorse code) was typed onto a piece of perforated tape using a keyboard-like device called the 'Stick Punch'. 
The transmitter automatically ran the tape through and transmitted the message at the then exceptionally high speed of 70 words per minute. An early successfulteleprinterwas invented byFrederick G. Creed. InGlasgowhe created his first keyboard perforator, which used compressed air to punch the holes. He also created a reperforator (receiving perforator) and a printer. The reperforator punched incoming Morse signals onto paper tape and the printer decoded this tape to produce alphanumeric characters on plain paper. This was the origin of the Creed High Speed Automatic Printing System, which could run at an unprecedented 200 words per minute. His system was adopted by theDaily Mailfor daily transmission of the newspaper contents. With the invention of theteletypewriter, telegraphic encoding became fully automated. Early teletypewriters used the ITA-1Baudot code, a five-bit code. This yielded only thirty-two codes, so it was over-defined into two "shifts", "letters" and "figures". An explicit, unshared shift code prefaced each set of letters and figures. In 1901, Baudot's code was modified byDonald Murray. In the 1930s, teleprinters were produced byTeletypein the US,Creedin Britain andSiemensin Germany. By 1935, message routing was the last great barrier to full automation. Large telegraphy providers began to develop systems that usedtelephone-like rotary diallingto connect teletypewriters. These resulting systems were called "Telex" (TELegraph EXchange). Telex machines first performed rotary-telephone-stylepulse diallingforcircuit switching, and then sent data byITA2. This "type A" Telex routing functionally automated message routing. The first wide-coverage Telex network was implemented in Germany during the 1930s[70]as a network used to communicate within the government. At the rate of 45.45 (±0.5%)baud– considered speedy at the time – up to 25 telex channels could share a single long-distance telephone channel by usingvoice frequency telegraphymultiplexing, making telex the least expensive method of reliable long-distance communication. Automatic teleprinter exchange service was introduced into Canada byCPR TelegraphsandCN Telegraphin July 1957 and in 1958,Western Unionstarted to build a Telex network in the United States.[71] The most expensive aspect of a telegraph system was the installation – the laying of the wire, which was often very long. The costs would be better covered by finding a way to send more than one message at a time through the single wire, thus increasing revenue per wire. Early devices included theduplexand thequadruplexwhich allowed, respectively, one or two telegraph transmissions in each direction. However, an even greater number of channels was desired on the busiest lines. In the latter half of the 1800s, several inventors worked towards creating a method for doing just that, includingCharles Bourseul,Thomas Edison,Elisha Gray, andAlexander Graham Bell. One approach was to have resonators of several different frequencies act as carriers of a modulated on-off signal. This was the harmonic telegraph, a form offrequency-division multiplexing. These various frequencies, referred to as harmonics, could then be combined into one complex signal and sent down the single wire. On the receiving end, the frequencies would be separated with a matching set of resonators. With a set of frequencies being carried down a single wire, it was realized that the human voice itself could be transmitted electrically through the wire. This effort led to theinvention of the telephone. 
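The harmonic-telegraph idea described above, several channels sharing one wire by using different carrier frequencies, is the essence of frequency-division multiplexing. The following Python sketch (using NumPy) illustrates it numerically; the sample rate, symbol duration and carrier frequencies are arbitrary illustration values, and real harmonic telegraphs used tuned electromechanical resonators rather than digital processing.

# Minimal numerical sketch of frequency-division multiplexing as used
# conceptually in the harmonic telegraph: several on-off keyed carriers at
# different frequencies share one wire, and each channel is recovered by
# measuring the energy near its own frequency. All parameters below are
# arbitrary illustration values, not historical ones.
import numpy as np

FS = 8000                            # samples per second
SYMBOL = 0.05                        # seconds per on/off symbol
CARRIERS = [500.0, 900.0, 1300.0]    # one carrier frequency per channel (Hz)

def modulate(bits_per_channel):
    """Sum of on-off keyed carriers: the combined signal on the shared wire.
    Assumes every channel carries the same number of symbols."""
    n = int(FS * SYMBOL)
    t = np.arange(n) / FS
    line = np.zeros(len(bits_per_channel[0]) * n)
    for freq, bits in zip(CARRIERS, bits_per_channel):
        for i, bit in enumerate(bits):
            if bit:
                line[i * n:(i + 1) * n] += np.sin(2 * np.pi * freq * t)
    return line

def demodulate(line, freq):
    """Recover one channel by correlating each symbol with its own carrier."""
    n = int(FS * SYMBOL)
    t = np.arange(n) / FS
    ref = np.sin(2 * np.pi * freq * t)
    bits = []
    for i in range(len(line) // n):
        energy = abs(np.dot(line[i * n:(i + 1) * n], ref))
        bits.append(1 if energy > n / 4 else 0)
    return bits

if __name__ == "__main__":
    channels = [[1, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 0]]
    shared_wire = modulate(channels)
    print([demodulate(shared_wire, f) for f in CARRIERS])  # recovers each channel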
(While the work toward packing multiple telegraph signals onto one wire led to telephony, later advances would pack multiple voice signals onto one wire by increasing the bandwidth by modulating frequencies much higher than human hearing. Eventually, the bandwidth was widened much further by using laser light signals sent through fiber optic cables. Fiber optic transmission can carry 25,000 telephone signals simultaneously down a single fiber.[72]) Soon after the first successful telegraph systems were operational, the possibility of transmitting messages across the sea by way ofsubmarine communications cableswas first proposed. One of the primary technical challenges was to sufficiently insulate the submarine cable to prevent the electric current from leaking out into the water. In 1842, a Scottish surgeonWilliam Montgomerie[73]introducedgutta-percha, the adhesive juice of thePalaquium guttatree, to Europe.Michael Faradayand Wheatstone soon discovered the merits of gutta-percha as an insulator, and in 1845, the latter suggested that it should be employed to cover the wire which was proposed to be laid fromDovertoCalais. Gutta-percha was used as insulation on a wire laid across theRhinebetweenDeutzandCologne.[74]In 1849,C. V. Walker, electrician to theSouth Eastern Railway, submerged a 2 miles (3.2 km) wire coated with gutta-percha off the coast from Folkestone, which was tested successfully.[73] John Watkins Brett, an engineer fromBristol, sought and obtained permission fromLouis-Philippein 1847 to establishtelegraphic communicationbetween France and England. The first undersea cable was laid in 1850, connecting the two countries and was followed by connections to Ireland and the Low Countries. TheAtlantic Telegraph Companywas formed inLondonin 1856 to undertake to construct a commercial telegraph cable across the Atlantic Ocean. It was successfully completed on 18 July 1866 by the shipSSGreat Eastern, captained bySir James Anderson, after many mishaps along the way.[75]John Pender, one of the men on the Great Eastern, later founded several telecommunications companies primarily laying cables between Britain and Southeast Asia.[76]Earlier transatlanticsubmarine cablesinstallations were attempted in 1857, 1858 and 1865. The 1857 cable only operated intermittently for a few days or weeks before it failed. The study of underwater telegraph cables accelerated interest in mathematical analysis of very longtransmission lines. The telegraph lines from Britain to India were connected in 1870. (Those several companies combined to form theEastern Telegraph Companyin 1872.) The HMSChallengerexpedition in 1873–1876 mapped the ocean floor for future underwater telegraph cables.[77] Australia was first linked to the rest of the world in October 1872 by a submarine telegraph cable at Darwin.[78]This brought news reports from the rest of the world.[79]The telegraph across the Pacific was completed in 1902, finally encircling the world. From the 1850s until well into the 20th century, British submarine cable systems dominated the world system. This was set out as a formal strategic goal, which became known as theAll Red Line.[80]In 1896, there were thirty cable laying ships in the world and twenty-four of them were owned by British companies. 
In 1892, British companies owned and operated two-thirds of the world's cables and by 1923, their share was still 42.7 percent.[81] Cable & Wireless was a British telecommunications company that traced its origins back to the 1860s, with Sir John Pender as the founder,[82] although the name was only adopted in 1934. It was formed from successive mergers of earlier cable and telegraph companies. The telegraph was very important for sending time signals to determine longitude, providing greater accuracy than previously available. Longitude was measured by comparing local time (for example, local noon occurs when the sun is at its highest above the horizon) with absolute time (a time that is the same for an observer anywhere on earth). If the local times of two places differ by one hour, the difference in longitude between them is 15° (360°/24 h). Before telegraphy, absolute time could be obtained from astronomical events, such as eclipses, occultations or lunar distances, or by transporting an accurate clock (a chronometer) from one location to the other. The idea of using the telegraph to transmit a time signal for longitude determination was suggested by François Arago to Samuel Morse in 1837,[85] and the first test of this idea was made by Capt. Wilkes of the U.S. Navy in 1844, over Morse's line between Washington and Baltimore.[86] The method was soon in practical use for longitude determination, in particular by the U.S. Coast Survey, and over longer and longer distances as the telegraph network spread across North America and the world, and as technical developments improved accuracy and productivity.[87]: 318–330 [88]: 98–107 The "telegraphic longitude net"[89] soon became worldwide. Transatlantic links between Europe and North America were established in 1866 and 1870. The US Navy extended observations into the West Indies and Central and South America with an additional transatlantic link from South America to Lisbon between 1874 and 1890.[90][91][92][93] British, Russian and US observations created a chain from Europe through Suez, Aden, Madras, Singapore, China and Japan, to Vladivostok, thence to Saint Petersburg and back to Western Europe.[94] Australia's telegraph network was linked to Singapore's via Java in 1871,[95] and the net circled the globe in 1902 with the connection of the Australia and New Zealand networks to Canada's via the All Red Line. The two determinations of longitudes, one transmitted from east to west and the other from west to east, agreed within one second of arc (1⁄15 second of time – less than 30 metres).[96] The ability to send telegrams brought obvious advantages to those conducting war. Secret messages were encoded, so interception alone would not be sufficient for the opposing side to gain an advantage. There were also geographical constraints on intercepting the telegraph cables that improved security; however, once radio telegraphy was developed, interception became far more widespread. The Crimean War was one of the first conflicts to use telegraphs and was one of the first to be documented extensively. In 1854, the government in London created a military Telegraph Detachment for the Army commanded by an officer of the Royal Engineers.
It was to comprise twenty-five men from the Royal Corps of Sappers & Miners trained by the Electric Telegraph Company to construct and work the first field electric telegraph.[97] Journalistic recording of the war was provided byWilliam Howard Russell(writing forThe Timesnewspaper) with photographs byRoger Fenton.[98]News from war correspondents kept the public of the nations involved in the war informed of the day-to-day events in a way that had not been possible in any previous war. After the French extended their telegraph lines to the coast of the Black Sea in late 1854, war news began reachingLondonin two days. When the British laid an underwater cable to the Crimean peninsula in April 1855, news reached London in a few hours. These prompt daily news reports energised British public opinion on the war, which brought down the government and led to Lord Palmerston becoming prime minister.[99] During theAmerican Civil Warthe telegraph proved its value as a tactical, operational, and strategic communication medium and an important contributor to Union victory.[100]By contrast the Confederacy failed to make effective use of the South's much smaller telegraph network. Prior to the War the telegraph systems were primarily used in the commercial sector. Government buildings were not inter-connected with telegraph lines, but relied on runners to carry messages back and forth.[101]Before the war the Government saw no need to connect lines within city limits, however, they did see the use in connections between cities. Washington D.C. being the hub of government, it had the most connections, but there were only a few lines running north and south out of the city.[101]It was not until the Civil War that the government saw the true potential of the telegraph system. Soon after the shelling ofFort Sumter, the South cut telegraph lines running into D.C., which put the city in a state of panic because they feared an immediate Southern invasion.[102][101] Within 6 months of the start of the war, theU.S. Military Telegraph Corps(USMT) had laid approximately 300 miles (480 km) of line. By war's end they had laid approximately 15,000 miles (24,000 km) of line, 8,000 for military and 5,000 for commercial use, and had handled approximately 6.5 million messages. The telegraph was not only important for communication within the armed forces, but also in the civilian sector, helping political leaders to maintain control over their districts.[102] Even before the war, theAmerican Telegraph Companycensored suspect messages informally to block aid to the secession movement. During the war,Secretary of WarSimon Cameron, and laterEdwin Stanton, wanted control over the telegraph lines to maintain the flow of information. Early in the war, one of Stanton's first acts as Secretary of War was to move telegraph lines from ending atMcClellan'sheadquarters to terminating at the War Department. Stanton himself said "[telegraphy] is my right arm". Telegraphy assisted Northern victories, including theBattle of Antietam(1862), theBattle of Chickamauga(1863), andSherman's March to the Sea(1864).[102] The telegraph system still had its flaws. The USMT, while the main source of telegraphers and cable, was still a civilian agency. Most operators were first hired by the telegraph companies and then contracted out to the War Department. This created tension between generals and their operators. One source of irritation was that USMT operators did not have to follow military authority. 
Usually they performed without hesitation, but they were not required to, soAlbert Myercreated aU.S. Army Signal Corpsin February 1863. As the new head of the Signal Corps, Myer tried to get all telegraph and flag signaling under his command, and therefore subject to military discipline. After creating the Signal Corps, Myer pushed to further develop new telegraph systems. While the USMT relied primarily on civilian lines and operators, the Signal Corp's new field telegraph could be deployed and dismantled faster than USMT's system.[102] DuringWorld War I, Britain's telegraph communications were almost completely uninterrupted, while it was able to quickly cut Germany's cables worldwide.[103]The British government censored telegraph cable companies in an effort to root out espionage and restrict financial transactions with Central Powers nations.[104]British access to transatlantic cables and its codebreaking expertise led to theZimmermann Telegramincident that contributed to theUS joining the war.[105]Despite British acquisition of German colonies and expansion into the Middle East, debt from the war led to Britain's control over telegraph cables to weaken while US control grew.[106] World War IIrevived the 'cable war' of 1914–1918. In 1939, German-owned cables across the Atlantic were cut once again, and, in 1940, Italian cables to South America and Spain were cut in retaliation for Italian action against two of the five British cables linking Gibraltar and Malta.Electra House, Cable & Wireless's head office and central cable station, was damaged by German bombing in 1941. Resistance movementsin occupied Europe sabotaged communications facilities such as telegraph lines,[107]forcing the Germans to usewireless telegraphy, which could then beinterceptedby Britain. The Germans developed a highly complex teleprinter attachment (German:Schlüssel-Zusatz, "cipher attachment") that was used for enciphering telegrams, using theLorenz cipher, between German High Command (OKW) and the army groups in the field. These contained situation reports, battle plans, and discussions of strategy and tactics. Britain intercepted these signals, diagnosed how the encrypting machine worked, anddecrypteda large amount of teleprinter traffic.[108] In America, the end of the telegraph era can be associated with the fall of theWestern Union Telegraph Company. Western Union was the leading telegraph provider for America and was seen as the best competition for theNational Bell Telephone Company. Western Union and Bell were both invested in telegraphy and telephone technology. Western Union's decision to allow Bell to gain the advantage in telephone technology was the result of Western Union's upper management's failure to foresee the surpassing of the telephone over the, at the time, dominant telegraph system. Western Union soon lost the legal battle for the rights to their telephone copyrights. This led to Western Union agreeing to a lesser position in the telephone competition, which in turn led to the lessening of the telegraph.[102] While the telegraph was not the focus of the legal battles that occurred around 1878, the companies that were affected by the effects of the battle were the main powers of telegraphy at the time. Western Union thought that the agreement of 1878 would solidify telegraphy as the long-range communication of choice. 
However, having underestimated the telegraph's future and entered into poor contracts, Western Union found itself declining.[102] AT&T acquired working control of Western Union in 1909 but relinquished it in 1914 under threat of antitrust action. AT&T bought Western Union's electronic mail and Telex businesses in 1990. Although commercial "telegraph" services are still available in many countries, transmission is usually done via a computer network rather than a dedicated wired connection.
https://en.wikipedia.org/wiki/Electrical_telegraph
Network functions virtualization(NFV)[1]is anetwork architectureconcept that leverages ITvirtualizationtechnologies to virtualize entire classes ofnetwork nodefunctions into building blocks that may connect, or chain together, to create and deliver communication services. NFV relies upon traditional server-virtualizationtechniques such as those used in enterprise IT. Avirtualized network function, orVNF, is implemented within one or morevirtual machinesorcontainersrunning different software and processes, on top of commercial off the shelf (COTS) high-volume servers, switches and storage devices, or evencloud computinginfrastructure, instead of having custom hardware appliances for each network function thereby avoiding vendor lock-in. For example, a virtualsession border controllercould be deployed to protect a network without the typical cost and complexity of obtaining and installing physical network protection units. Other examples of NFV include virtualizedload balancers,firewalls,intrusion detection devicesandWAN acceleratorsto name a few.[2] The decoupling of the network function software from the customized hardware platform realizes a flexible network architecture that enables agile network management, fast new service roll outs with significant reduction in CAPEX and OPEX. Product development within the telecommunication industry has traditionally followed rigorous standards for stability, protocol adherence and quality, reflected by the use of the termcarrier gradeto designate equipment demonstrating this high reliability and performance factor.[3]While this model worked well in the past, it inevitably led to long product cycles, a slow pace of development and reliance on proprietary or specific hardware, e.g., bespokeapplication-specific integrated circuits(ASICs). This development model resulted in significant delays when rolling out new services, posed complex interoperability challenges and significant increase in CAPEX/OPEX when scaling network systems & infrastructure and enhancing network service capabilities to meet increasing network load and performance demands. Moreover, the rise of significant competition in communication service offerings from agile organizations operating at large scale on the public Internet (such asGoogle Talk,Skype,Netflix) has spurred service providers to look for innovative ways to disrupt the status quo and increase revenue streams. In October 2012, a group of telecom operators published awhite paper[4]at a conference inDarmstadt, Germany, onsoftware-defined networking(SDN) andOpenFlow. The Call for Action concluding the White Paper led to the creation of the Network Functions Virtualization (NFV) Industry Specification Group (ISG)[5]within theEuropean Telecommunications Standards Institute(ETSI). The ISG was made up of representatives from the telecommunication industry from Europe and beyond.[6][7]ETSI ISG NFV addresses many aspects, including functional architecture, information model, data model, protocols, APIs, testing, reliability, security, future evolutions, etc. The ETSI ISG NFV has announced the Release 5 of its specifications since May 2021 aiming to produce new specifications and extend the already published specifications based on new features and enhancements. Since the publication of the white paper, the group has produced over 100 publications,[8]which have gained wider acceptance in the industry and are being implemented in prominent open source projects like OpenStack, ONAP, Open Source MANO (OSM) to name a few. 
Due to active cross-liaison activities, the ETSI NFV specifications are also being referenced in other SDOs such as 3GPP, IETF and ETSI MEC. The NFV framework consists of three main components:[9] virtualized network functions (VNFs), the NFV infrastructure (NFVI), and NFV management and orchestration (NFV-MANO). The building block for both the NFVI and the NFV-MANO is the NFV platform. In the NFVI role, it consists of both virtual and physical processing and storage resources, and virtualization software. In its NFV-MANO role it consists of VNF and NFVI managers and virtualization software operating on a hardware controller. The NFV platform implements carrier-grade features used to manage and monitor the platform components, recover from failures and provide effective security – all required for the public carrier network. A service provider that follows the NFV design implements one or more virtualized network functions, or VNFs. A VNF by itself does not automatically provide a usable product or service to the provider's customers. To build more complex services, the notion of service chaining is used, where multiple VNFs are used in sequence to deliver a service (see the sketch at the end of this article). Another aspect of implementing NFV is the orchestration process. To build highly reliable and scalable services, NFV requires that the network be able to instantiate VNF instances, monitor them, repair them, and (most important for a service provider business) bill for the services rendered. These attributes, referred to as carrier-grade[11] features, are allocated to an orchestration layer in order to provide high availability and security, and low operation and maintenance costs. Importantly, the orchestration layer must be able to manage VNFs irrespective of the underlying technology within the VNF. For example, an orchestration layer must be able to manage an SBC VNF from vendor X running on VMware vSphere just as well as an IMS VNF from vendor Y running on KVM. The initial perception of NFV was that virtualized capability should be implemented in data centers. This approach works in many – but not all – cases. NFV presumes and emphasizes the widest possible flexibility as to the physical location of the virtualized functions. Ideally, therefore, virtualized functions should be located where they are the most effective and least expensive. That means a service provider should be free to locate NFV in all possible locations, from the data center to the network node to the customer premises. This approach, known as distributed NFV, has been emphasized from the beginning as NFV was being developed and standardized, and is prominent in the recently released NFV ISG documents.[12] For some cases there are clear advantages for a service provider to locate this virtualized functionality at the customer premises. These advantages range from economics to performance to the feasibility of the functions being virtualized.[13] The first ETSI NFV ISG-approved public multi-vendor proof of concept (PoC) of D-NFV was conducted by Cyan, Inc., RAD, Fortinet and Certes Networks in Chicago in June 2014, and was sponsored by CenturyLink.
It was based on RAD's dedicated customer-edge D-NFV equipment running Fortinet's Next Generation Firewall (NGFW) and Certes Networks’ virtual encryption/decryption engine as Virtual Network Functions (VNFs) with Cyan's Blue Planet system orchestrating the entire ecosystem.[14]RAD's D-NFV solution, aLayer 2/Layer 3network termination unit (NTU)equipped with a D-NFVX86server module that functions as a virtualization engine at the customer edge, became commercially available by the end of that month.[15]During 2014 RAD also had organized a D-NFV Alliance, an ecosystem of vendors and internationalsystems integratorsspecializing in new NFV applications.[16] When designing and developing the software that provides the VNFs, vendors may structure that software into software components (implementation view of a software architecture) and package those components into one or more images (deployment view of a software architecture). These vendor-defined software components are called VNF Components (VNFCs). VNFs are implemented with one or more VNFCs and it is assumed, without loss of generality, that VNFC instances map 1:1 to VM Images. VNFCs should in general be able toscale up and/or scale out. By being able to allocate flexible (virtual) CPUs to each of the VNFC instances, the network management layer can scale up (i.e., scalevertically) the VNFC to provide the throughput/performance and scalability expectations over a single system or a single platform. Similarly, the network management layer can scale out (i.e.,scale horizontally) a VNFC by activating multiple instances of such VNFC over multiple platforms and therefore reach out to the performance and architecture specifications whilst not compromising the other VNFC function stabilities. Early adopters of such architecture blueprints have already implemented the NFV modularity principles.[17] Network Functions Virtualisation is highly complementary to SDN.[4]In essence, SDN is an approach to building data networking equipment and software that separates and abstracts elements of these systems. It does this by decoupling the control plane and data plane from each other, such that the control plane resides centrally and the forwarding components remain distributed. The control plane interacts with bothnorthboundandsouthbound. In the northbound direction the control plane provides a common abstracted view of the network to higher-level applications and programs using high-level APIs and novel management paradigms, such as Intent-based networking. In the southbound direction the control plane programs the forwarding behavior of the data plane, using device level APIs of the physical network equipment distributed around the network. Thus, NFV is not dependent on SDN or SDN concepts, but NFV and SDN can cooperate to enhance the management of a NFV infrastructure and to create a more dynamic network environment. It is entirely possible to implement a virtualized network function (VNF) as a standalone entity using existing networking and orchestration paradigms. 
However, there are inherent benefits in leveraging SDN concepts to implement and manage an NFV infrastructure, particularly when looking at the management and orchestration of Network Services (NS), composed of different type of Network Functions (NF), such as Physical Network Functions (PNF) and VNFs, and placed between different geo-located NFV infrastructures, and that's why multivendor platforms are being defined that incorporate SDN and NFV in concerted ecosystems.[18] An NFV system needs a central orchestration and management system that takes operator requests associated with an NS or a VNF, translates them into the appropriate processing, storage and network configuration needed to bring the NS or VNF into operation. Once in operation, the VNF and the networks it is connected to potentially must be monitored for capacity and utilization, and adapted if necessary.[19] All network control functions in an NFV infrastructure can be accomplished using SDN concepts and NFV could be considered one of the primary SDN use cases in service provider environments.[20]For example, within each NFV infrastructure site, a VIM could rely upon an SDN controller to set up and configure the overlay networks interconnecting (e.g. VXLAN) the VNFs and PNFs composing an NS. The SDN controller would then configure the NFV infrastructure switches and routers, as well as the network gateways, as needed. Similarly, a Wide Area Infrastructure Manager (WIM) could rely upon an SDN controller to set up overlay networks to interconnect NSs that are deployed to different geo-located NFV infrastructures. It is also apparent that many SDN use-cases could incorporate concepts introduced in the NFV initiative. Examples include where the centralized controller is controlling a distributed forwarding function that could in fact be also virtualized on existing processing or routing equipment. NFV has proven a popular standard even in its infancy. Its immediate applications are numerous, such as virtualization ofmobile base stations,platform as a service(PaaS),content delivery networks(CDN), fixed access and home environments.[21]The potential benefits of NFV is anticipated to be significant. Virtualization of network functions deployed on general purpose standardized hardware is expected to reduce capital and operational expenditures, and service and product introduction times.[22][23]Many major network equipment vendors have announced support for NFV.[24]This has coincided with NFV announcements from major software suppliers who provide the NFV platforms used by equipment suppliers to build their NFV products.[25][26] However, to realize the anticipated benefits of virtualization, network equipment vendors are improving IT virtualization technology to incorporate carrier-grade attributes required to achievehigh availability, scalability, performance, and effective network management capabilities.[27]To minimize the total cost of ownership (TCO), carrier-grade features must be implemented as efficiently as possible. This requires that NFV solutions make efficient use of redundant resources to achieve five-nines availability (99.999%),[28]and of computing resource without compromising performance predictability. The NFV platform is the foundation for achieving efficient carrier-grade NFV solutions.[29]It is a software platform running on standard multi-core hardware and built using open source software that incorporates carrier-grade features. 
The NFV platform software is responsible for dynamically reassigning VNFs due to failures and changes in traffic load, and therefore plays an important role in achieving high availability. There are numerous initiatives underway to specify, align and promote NFV carrier-grade capabilities, such as the ETSI NFV Proof of Concept,[30] the ATIS[31] Open Platform for NFV Project,[32] the Carrier Network Virtualization Awards[33] and various supplier ecosystems.[34] The vSwitch, a key component of NFV platforms, is responsible for providing connectivity both VM-to-VM (between VMs) and between VMs and the outside network. Its performance determines both the bandwidth of the VNFs and the cost-efficiency of NFV solutions. The performance of the standard Open vSwitch (OVS) has shortcomings that must be resolved to meet the needs of NFVI solutions.[35] Significant performance improvements are being reported by NFV suppliers for both OVS and Accelerated Open vSwitch (AVS) versions.[36][37] Virtualization is also changing the way availability is specified, measured and achieved in NFV solutions. As VNFs replace traditional function-dedicated equipment, there is a shift from equipment-based availability to a service-based, end-to-end, layered approach.[38][39] Virtualizing network functions breaks the explicit coupling with specific equipment; therefore, availability is defined by the availability of VNF services. Because NFV technology can virtualize a wide range of network function types, each with their own service availability expectations, NFV platforms should support a wide range of fault tolerance options. This flexibility enables CSPs to optimize their NFV solutions to meet any VNF availability requirement. ETSI has already indicated that an important part of controlling the NFV environment should be done through automated orchestration. NFV Management and Orchestration (NFV-MANO) refers to a set of functions within an NFV system to manage and orchestrate the allocation of virtual infrastructure resources to virtualized network functions (VNFs) and network services (NSs). They are the brains of the NFV system and a key automation enabler. The main functional blocks within the NFV-MANO architectural framework (ETSI GS NFV-006) are the NFV Orchestrator (NFVO), the VNF Manager (VNFM) and the Virtualised Infrastructure Manager (VIM). The entry point in NFV-MANO for external operations support systems (OSS) and business support systems (BSS) is the NFVO, which is in charge of managing the lifecycle of NS instances. The management of the lifecycle of VNF instances constituting an NS instance is delegated by the NFVO to one or more VNFMs. Both the NFVO and the VNFMs use the services exposed by one or more VIMs for allocating virtual infrastructure resources to the objects they manage. Additional functions are used for managing containerized VNFs: the Container Infrastructure Service Management (CISM) and the Container Image Registry (CIR) functions. The CISM is responsible for maintaining the containerized workloads, while the CIR is responsible for storing and maintaining information about OS container software images. The behavior of the NFVO and VNFM is driven by the contents of deployment templates (a.k.a. NFV descriptors) such as a Network Service Descriptor (NSD) and a VNF Descriptor (VNFD). ETSI delivers a full set of standards enabling an open ecosystem where Virtualized Network Functions (VNFs) can be interoperable with independently developed management and orchestration systems, and where the components of a management and orchestration system are themselves interoperable.
This includes a set of RESTful API specifications[40] as well as the specifications of a packaging format for delivering VNFs to service providers and of the deployment templates to be packaged with the software images to enable managing the lifecycle of VNFs. Deployment templates can be based on TOSCA or YANG.[41][42] An OpenAPI (a.k.a. Swagger) representation of the API specifications is available and maintained on the ETSI forge server, along with TOSCA and YANG definition files to be used when creating deployment templates. The published specifications cover, among other areas, OS container management and orchestration. An overview of the different versions of the OpenAPI representations of NFV-MANO APIs is available on the ETSI NFV wiki. The OpenAPI files as well as the TOSCA YAML definition files and YANG modules applicable to NFV descriptors are available on the ETSI Forge. Additional studies are ongoing within ETSI on possible enhancements to the NFV-MANO framework to improve its automation capabilities and introduce autonomous management mechanisms (see ETSI GR NFV-IFA 041). Recent performance studies on NFV have focused on the throughput, latency and jitter of virtualized network functions (VNFs), as well as NFV scalability in terms of the number of VNFs a single physical server can support.[43] Open-source NFV platforms are available; one representative example is openNetVM.[44] openNetVM is a high-performance NFV platform based on DPDK and Docker containers. openNetVM provides a flexible framework for deploying network functions and interconnecting them to build service chains. openNetVM is an open-source version of the NetVM platform described in the NSDI 2014 and HotMiddlebox 2016 papers, released under the BSD license. The source code can be found at GitHub: openNetVM.[45] From 2018, many VNF providers began to migrate many of their VNFs to a container-based architecture. Such VNFs, also known as Cloud-Native Network Functions (CNFs), utilize many innovations deployed commonly on internet infrastructure. These include auto-scaling, support for a continuous delivery / DevOps deployment model, and efficiency gains from sharing common services across platforms. Through service discovery and orchestration, a network based on CNFs will be more resilient to infrastructure resource failures. Utilizing containers, and thus dispensing with the overhead inherent in traditional virtualization through the elimination of the guest OS, can greatly increase infrastructure resource efficiency.[46]
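As a rough illustration of the descriptor and service-chaining concepts discussed in this article, the following Python sketch models a toy orchestrator. All class names, fields and behaviour are invented for illustration only; they do not follow the ETSI NFV-MANO information model (NSD/VNFD) or any real NFVO, VNFM or VIM interface.

# Purely illustrative sketch: a simplified "VNF descriptor" and "network
# service descriptor", plus a toy orchestrator that instantiates VNFs and
# connects them into a service chain. All names and fields are invented;
# they do not follow the ETSI NFV-MANO data model or any vendor's API.
from dataclasses import dataclass

@dataclass
class VnfDescriptor:            # loosely analogous to a VNFD
    name: str
    image: str                  # software image the VIM would boot
    vcpus: int
    memory_gb: int

@dataclass
class ServiceDescriptor:        # loosely analogous to an NSD
    name: str
    chain: list                 # ordered VNF descriptors forming the chain

@dataclass
class VnfInstance:
    descriptor: VnfDescriptor
    instance_id: str
    state: str = "INSTANTIATED"

class ToyOrchestrator:
    """Stands in for the NFVO/VNFM/VIM roles in a single object."""
    def __init__(self):
        self.instances = []

    def instantiate(self, vnfd):
        # A real NFVO would delegate to a VNFM, which asks a VIM for
        # compute/storage/network resources; here we only record the request.
        inst = VnfInstance(vnfd, instance_id=f"{vnfd.name}-{len(self.instances)}")
        self.instances.append(inst)
        return inst

    def deploy_service(self, nsd):
        chain = [self.instantiate(vnfd) for vnfd in nsd.chain]
        # "Service chaining": traffic is steered through the VNFs in order.
        for upstream, downstream in zip(chain, chain[1:]):
            print(f"steer {upstream.instance_id} -> {downstream.instance_id}")
        return chain

if __name__ == "__main__":
    nsd = ServiceDescriptor("edge-security", chain=[
        VnfDescriptor("firewall", "fw-image", vcpus=2, memory_gb=4),
        VnfDescriptor("ids", "ids-image", vcpus=4, memory_gb=8),
        VnfDescriptor("load-balancer", "lb-image", vcpus=2, memory_gb=4),
    ])
    ToyOrchestrator().deploy_service(nsd)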
https://en.wikipedia.org/wiki/Network_functions_virtualization
ISO/IEC JTC 1, entitled "Information technology", is a joint technical committee (JTC) of theInternational Organization for Standardization(ISO) and theInternational Electrotechnical Commission(IEC). Its purpose is to develop, maintain and promote standards in the fields ofinformation and communications technology(ICT). JTC 1 has been responsible for many critical IT standards, ranging from theJoint Photographic Experts Group(JPEG) image formats andMoving Picture Experts Group(MPEG) audio and video formats[a]to theCandC++ programming languages.[b] ISO/IEC JTC 1 was formed in 1987 as a merger between ISO/TC 97 (Information Technology) and IEC/TC 83, with IEC/SC 47B joining later. The intent was to bring together, in a single committee, the IT standardization activities of the two parent organizations in order to avoid duplicative or possibly incompatible standards. At the time of its formation, the mandate of JTC 1 was to develop base standards in information technology upon which other technical committees could build. This would allow for the development of domain and application-specific standards that could be applicable to specific business domains while also ensuring the interoperation and function of the standards on a consistent base.[2] In its first 15 years, JTC 1 brought about many standards in the information technology sector, including standards in the fields of multimedia (such asMPEG), IC cards (or "smart cards"),ICT security,programming languages, and character sets (such as theUniversal Character Set).[2][3]In the early 2000s, the organization expanded its standards development into fields such as security and authentication, bandwidth/connection management, storage and data management, software and systems engineering, service protocols, portable computing devices, and certain societal aspects such as data protection and cultural and linguistic adaptability. For more than 25 years, JTC 1 has provided a standards development environment where experts come together to develop worldwide Information and Communication Technology (ICT) standards for business and consumer applications. JTC 1 also addresses such critical areas asteleconferencingand e-meetings,cloud data management interface,biometricsin identity management, sensor networks forsmart gridsystems, and corporate governance of ICT implementation. As technologies converge, JTC 1 acts as a system integrator, especially in areas of standardization in which many consortia and forums are active. JTC 1 provides the standards approval environment for integrating diverse and complex ICT technologies. These standards rely upon the core infrastructure technologies developed by JTC 1 centers of expertise complemented by specifications developed in other organizations.[4][5]There are over 2,800 published JTC 1 standards developed by about 2,100 technical experts from around the world, some of which are freely available for download while others are available for a fee.[6][7] In 2008, Ms. Karen Higginbottom ofHPwas elected as chair.[8]In a 2013 interview, she described priorities, including cloud computing standards and adaptations of existing standards.[9]After Higginbottom's nine-year term expired in 2017, Mr. Phil Wennblom ofIntelwas elected as chair at the JTC 1 Plenary meeting inVladivostok, Russia. JTC 1 has implemented a process to transpose "publicly available specifications" (PAS) into international ISO/IEC standards. 
The PAS transposition process allows a PAS to be approved as an ISO/IEC standard in less than a year, as opposed to a full length process that can take up to 4 years. Consortia, such asOASIS,Trusted Computing Group(TCG),The Open Group,Object Management Group(OMG),W3C,Distributed Management Task Force(DMTF),Storage Networking Industry Association(SNIA),Open Geospatial Consortium(OGC),GS1, Spice User Group,Open Connectivity Foundation (OCF), NESMA,Society of Motion Picture and Television Engineers(SMPTE),Khronos Group, orJoint Development Foundationuse this process to transpose their specifications in an efficient manner into ISO/IEC standards.[10] The scope of ISO/IEC JTC 1 is "International standardization in the field of information technology". Its official mandate is to develop, maintain, promote and facilitate IT standards required by global markets meeting business and user requirements concerning: JTC 1 has a number of principles that guide standards development within the organization, which include:[11] Like its ISO and IEC parent organizations, members of JTC 1 are national standards bodies. One national standards body represents each member country, and the members are referred to within JTC 1 as "national bodies" (NBs). A member can either have participating (P-member) or observing (O-member) status, with the main differences being the ability to participate at theworking grouplevel in the drafting of standards and to vote on proposed standards (although O-members may submit comments). As of May 2021, JTC 1 has 35 P-members and 65 O-members, and thus 100 member NBs.[12]The secretariat of JTC 1 is theAmerican National Standards Institute(ANSI), which is the national standards body for the United States member NB. Other organizations can participate as Liaison Members, some of which are internal to ISO/IEC and some of which are external. Liaison relationships can be established at different levels within JTC 1 – i.e., at the JTC 1 level, the subcommittee level, or at the level of a specific working group within a subcommittee. Altogether, as of May 2021, there are about 120 external organizations that are in liaison with JTC 1 at one level or another.[13]The liaison relationships established directly at the JTC 1 level are:[citation needed] Most work on the development of standards is done by subcommittees (SCs), each of which deals with a particular field. Most of these subcommittees have severalworking groups(WGs). Subcommittees, working groups, special working groups (SWGs), and study groups (SGs) within JTC 1 are:[14] Each subcommittee can have subgroups created for specific purposes: Subcommittees can be created to deal with new situations (SC 37 was established in 2002; SC 38 in 2009; SC 39 in 2012; and SC 40 in 2013) or disbanded if the area of work is no longer relevant. There is no requirement for any member body to maintain status on any or all of the subcommittees.
https://en.wikipedia.org/wiki/ISO/IEC_JTC_1
fdiskis acommand-line utilityfordisk partitioning. It has been part ofDOS,DRFlexOS,IBMOS/2, and early versions ofMicrosoft Windows, as well as certain ports ofFreeBSD,[2]NetBSD,[3]OpenBSD,[4]DragonFly BSD[5]andmacOS[6]for compatibility reasons.Windows 2000and its successors have replaced fdisk with a more advanced tool calleddiskpart. IBMintroduced the first version of fdisk (officially dubbed "Fixed Disk Setup Program") in March 1983, with the release of theIBM PC/XTcomputer (the first PC to store data on ahard disk) and theIBM PC DOS2.0 operating system. fdisk version 1.0 can create oneFAT12partition, delete it, change theactive partition, or display partition data. fdisk writes themaster boot record, which supports up to four partitions. The other three were intended for other operating systems such asCP/M-86andXenix, which were expected to have their own partitioning utilities. Microsoft first added fdisk toMS-DOSin version 3.2.[7]MS-DOS versions 2.0 through 3.10 included OEM-specific partitioning tools, which may have been named fdisk. PC DOS 3.0, released in August 1984, added support forFAT16partitions to handle larger hard disks more efficiently. PC DOS 3.30, released in April 1987, added support forextended partitions. (These partitions do not store data directly but can contain up to 23logical drives.) In both cases, fdisk was modified to work with FAT16 and extended partitions. Support forFAT16Bwas first added to Compaq's fdisk in MS-DOS 3.31. FAT16B later became available with MS-DOS and PC DOS 4.0. The undocumented/mbrswitch in fdisk, which could repair themaster boot record, soon became popular. IBM PC DOS 7.10 shipped with the new fdisk32 utility. ROM-DOS,[8]DR DOS 6.0[9]FlexOS,[10]PTS-DOS2000 Pro,[11]andFreeDOS,[12]include an implementation of the fdisk command. Windows 95,Windows 98, andWindows MEshipped with a derivative of the MS-DOS fdisk.Windows 2000and its successors, however, came with the more advanced[according to whom?]diskpartand the graphicalDisk Managementutilities. Starting with Windows 95 OSR2, fdisk supports theFAT32file system.[13] The version of fdisk that ships with Windows 95 does not report the correct size of a hard disk that is larger than 64 GB. An updated fdisk is available from Microsoft to correct this issue.[14]In addition, fdisk cannot create partitions larger than 512 GB, even though FAT32 supports partitions as big as 2 TB. This limitation applies to all versions of fdisk supplied with Windows 95 OSR 2.1, Windows 98 and Windows ME. Before version 4.0,OS/2shipped with two partition table managers. These were thetext modefdisk[15]and thegraphicalfdiskpm.[16]The two have identical functionality, and can manipulate both FAT partitions and the more advancedHPFSpartitions. OS/2 4.5 and higher (includingeComStationandArcaOS) can use theJFSfile system, as well as FAT and HPFS. They replaced fdisk with theLogical Volume Manager(LVM). fdisk forMach Operating Systemwas written by Robert Baron. 
It was ported to386BSDby Julian Elischer,[17]and the implementation is being used byFreeBSD,[2]NetBSD[3]andDragonFly BSD,[5]all as of 2019, as well as the early versions ofOpenBSDbetween 1995 and 1997 before OpenBSD 2.2.[1] Tobias Weingartner re-wrote fdisk in 1997 before OpenBSD 2.2,[4]which has subsequently been forked byApple Computer, Incin 2002, and is still used as the basis for fdisk on macOS as of 2019.[6] For native partitions, BSD systems traditionally useBSD disklabel, and fdisk partitioning is supported only on certain architectures (for compatibility reasons) and only in addition to the BSD disklabel (which is mandatory). In Linux, fdisk is a part of a standard package distributed by the Linux Kernel organization,util-linux. The original program was written by Andries E. Brouwer and A. V. Le Blanc and was later rewritten by Karel Zak and Davidlohr Bueso when they forked the util-linux package in 2006. An alternative,ncurses-based program,cfdisk, allows users to create partition layouts via atext-based user interface(TUI).[18]
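As context for the partitioning scheme fdisk manipulates, the classic master boot record keeps its four primary partition entries in a fixed on-disk layout: sixteen bytes per entry starting at byte offset 446 of the first sector, with the 0x55AA boot signature at offset 510. The sketch below is not taken from any fdisk implementation; it simply illustrates that traditional layout by decoding the first sector of a raw disk image.

```python
# Illustrative MBR parser: decodes the four primary partition entries of a
# classic master boot record from a raw disk image. Offsets follow the
# traditional MBR layout; this is not derived from any fdisk source code.
import struct

SECTOR = 512
PART_TABLE_OFFSET = 446   # partition table starts here
ENTRY_SIZE = 16
BOOT_SIG_OFFSET = 510     # 0x55AA signature lives here


def read_mbr(image_path: str):
    with open(image_path, "rb") as f:
        sector = f.read(SECTOR)
    if sector[BOOT_SIG_OFFSET:BOOT_SIG_OFFSET + 2] != b"\x55\xaa":
        raise ValueError("missing 0x55AA boot signature")
    entries = []
    for i in range(4):
        raw = sector[PART_TABLE_OFFSET + i * ENTRY_SIZE:
                     PART_TABLE_OFFSET + (i + 1) * ENTRY_SIZE]
        boot_flag = raw[0]                              # 0x80 = active
        ptype = raw[4]                                  # partition type byte
        lba_start = struct.unpack_from("<I", raw, 8)[0]
        num_sectors = struct.unpack_from("<I", raw, 12)[0]
        entries.append({"active": boot_flag == 0x80, "type": hex(ptype),
                        "lba_start": lba_start, "sectors": num_sectors})
    return entries


# Example usage on a hypothetical raw image: read_mbr("disk.img")
```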
https://en.wikipedia.org/wiki/FDISK
Pathological scienceis an area of research where "people are tricked into false results ... by subjective effects,wishful thinkingor threshold interactions."[1][2]The term was first used byIrving Langmuir,Nobel Prize-winningchemist, during a 1953colloquiumat theKnolls Research Laboratory.[3]Langmuir said a pathological science is an area of research that simply will not "go away"—long after it was given up on as "false" by the majority of scientists in the field. He called pathological science "the science of things that aren't so."[4][5] In his 2002 book,Undead Science, sociology and anthropology Professor Bart Simon lists it among practices that are falsely perceived or presented to be science, "categories ... such as ...pseudoscience,amateur science, deviant or fraudulent science, bad science,junk science, pathological science,cargo cult science, andvoodoo science."[6]Examples of pathological science include theMartian canals,N-rays,polywater, andcold fusion. The theories and conclusions behind all of these examples are currently rejected or disregarded by the majority of scientists. Pathological science, as defined by Langmuir, is a psychological process in which a scientist, originally conforming to thescientific method, unconsciously veers from that method, and begins a pathological process of wishful data interpretation(see theobserver-expectancy effectandcognitive bias). Some characteristics of pathological science are: Langmuir never intended the term to be rigorously defined; it was simply the title of his talk on some examples of "weird science". As with any attempt to define the scientific endeavor, examples and counterexamples can always be found. Langmuir's discussion ofN-rayshas led to their traditional characterization as an instance of pathological science.[7] In 1903,Prosper-René Blondlotwas working onX-rays(as were other physicists of the era) and noticed a new visible radiation that could penetratealuminium. He devised experiments in which a barely visible object was illuminated by these N-rays, and thus became "more visible". Blondlot claimed that N-rays were causing a small visual reaction, too small to be seen under normal illumination, but just visible when most normal light sources were removed and the target was just barely visible to begin with. N-rays became the topic of some debate within the science community. After a time, American physicistRobert W. Wooddecided to visit Blondlot's lab, which had moved on to the physical characterization of N-rays. An experiment passed the rays from a 2 mm slit through an aluminumprism, from which he was measuring theindex of refractionto a precision that required measurements accurate to within 0.01 mm. Wood asked how it was possible that he could measure something to 0.01 mm from a 2 mm source, a physical impossibility in the propagation of any kind of wave. Blondlot replied, "That's one of the fascinating things about the N-rays. They don't follow the ordinary laws of science that you ordinarily think of." Wood then asked to see the experiments being run as usual, which took place in a room required to be very dark so the target was barely visible. 
Blondlot repeated his most recent experiments and got the same results—despite the fact that Wood had reached over and covertly sabotaged the N-ray apparatus by removing the prism.[1][8] Langmuir offered additional examples of what he regarded as pathological science in his original speech:[9] A 1985 version[citation needed]of Langmuir's speech offered more examples, although at least one of these (polywater) occurred entirely after Langmuir's death in 1957: Since Langmuir's original talk, a number of newer examples of what appear to be pathological science have appeared.Denis Rousseau, one of the main debunkers of polywater, gave an update of Langmuir in 1992, and he specifically cited as examples the cases of polywater,Martin Fleischmann'scold fusion andJacques Benveniste's"infinite dilution".[20] Polywaterwas a form of water which appeared to have a much higherboiling pointand much lowerfreezing pointthan normal water. During the 1960s, a number of articles were published on the subject, and research on polywater was done around the world with mixed results. Eventually it was determined that some of the properties of polywater could be explained by biological contamination. When more rigorous cleaning ofglasswareandexperimental controlswere introduced, polywater could no longer be produced. It took several years for the concept of polywater to die in spite of the later negative results. In 1989,Martin FleischmannandStanley Ponsannounced the discovery of a simple and cheap procedure to obtain room-temperaturenuclear fusion. Although there were multiple instances where successful results were reported, they lacked consistency and hence cold fusion came to be considered to be an example of pathological science.[21]Two panels convened by theUS Department of Energy, one in 1989 and a second in 2004, did not recommend a dedicated federal program for cold fusion research. A small number of researchers continue working in the field. Jacques Benveniste was a Frenchimmunologistwho in 1988 published a paper in the prestigious scientific journalNaturedescribing the action of high dilutions ofanti-IgE antibodyon thedegranulationof humanbasophils, findings which seemed to support the concept ofhomeopathy. Biologists were puzzled by Benveniste's results, as only molecules of water, and no molecules of the original antibody, remained in these high dilutions. Benveniste concluded that the configuration of molecules in water was biologically active. Subsequent investigations have not supported Benveniste's findings.
https://en.wikipedia.org/wiki/Pathological_science
Data integrityis the maintenance of, and the assurance of, data accuracy and consistency over its entirelife-cycle.[1]It is a critical aspect to the design, implementation, and usage of any system that stores, processes, or retrieves data. The term is broad in scope and may have widely different meanings depending on the specific context even under the same general umbrella ofcomputing. It is at times used as a proxy term fordata quality,[2]whiledata validationis a prerequisite for data integrity.[3] Data integrity is the opposite ofdata corruption.[4]The overall intent of any data integrity technique is the same: ensure data is recorded exactly as intended (such as a database correctly rejecting mutually exclusive possibilities). Moreover, upon laterretrieval, ensure the data is the same as when it was originally recorded. In short, data integrity aims to prevent unintentional changes to information. Data integrity is not to be confused withdata security, the discipline of protecting data from unauthorized parties. Any unintended changes to data as the result of a storage, retrieval or processing operation, including malicious intent, unexpected hardware failure, andhuman error, is failure of data integrity. If the changes are the result of unauthorized access, it may also be a failure of data security. Depending on the data involved this could manifest itself as benign as a single pixel in an image appearing a different color than was originally recorded, to the loss of vacation pictures or a business-critical database, to even catastrophic loss of human life in alife-critical system. Physical integrity deals with challenges which are associated with correctly storing and fetching the data itself. Challenges with physical integrity may includeelectromechanicalfaults, design flaws, materialfatigue,corrosion,power outages, natural disasters, and other special environmental hazards such asionizing radiation, extreme temperatures, pressures andg-forces. Ensuring physical integrity includes methods such asredundanthardware, anuninterruptible power supply, certain types ofRAIDarrays,radiation hardenedchips,error-correcting memory, use of aclustered file system, using file systems that employ block levelchecksumssuch asZFS, storage arrays that compute parity calculations such asexclusive oror use acryptographic hash functionand even having awatchdog timeron critical subsystems. Physical integrity often makes extensive use of error detecting algorithms known aserror-correcting codes. Human-induced data integrity errors are often detected through the use of simpler checks and algorithms, such as theDamm algorithmorLuhn algorithm. These are used to maintain data integrity after manual transcription from one computer system to another by a human intermediary (e.g. credit card or bank routing numbers). Computer-induced transcription errors can be detected throughhash functions. In production systems, these techniques are used together to ensure various degrees of data integrity. For example, a computerfile systemmay be configured on a fault-tolerant RAID array, but might not provide block-level checksums to detect and preventsilent data corruption. As another example, a database management system might be compliant with theACIDproperties, but the RAID controller or hard disk drive's internal write cache might not be. This type of integrity is concerned with thecorrectnessorrationalityof a piece of data, given a particular context. 
This includes topics such asreferential integrityandentity integrityin arelational databaseor correctly ignoring impossible sensor data in robotic systems. These concerns involve ensuring that the data "makes sense" given its environment. Challenges includesoftware bugs, design flaws, and human errors. Common methods of ensuring logical integrity include things such ascheck constraints,foreign key constraints, programassertions, and other run-time sanity checks. Physical and logical integrity often share many challenges such as human errors and design flaws, and both must appropriately deal with concurrent requests to record and retrieve data, the latter of which is entirely a subject on its own. If a data sector only has a logical error, it can be reused by overwriting it with new data. In case of a physical error, the affected data sector is permanently unusable. Data integrity contains guidelines fordata retention, specifying or guaranteeing the length of time data can be retained in a particular database (typically arelational database). To achieve data integrity, these rules are consistently and routinely applied to all data entering the system, and any relaxation of enforcement could cause errors in the data. Implementing checks on the data as close as possible to the source of input (such as human data entry), causes less erroneous data to enter the system. Strict enforcement of data integrity rules results in lower error rates, and time saved troubleshooting and tracing erroneous data and the errors it causes to algorithms. Data integrity also includes rules defining the relations a piece of data can have to other pieces of data, such as aCustomerrecord being allowed to link to purchasedProducts, but not to unrelated data such asCorporate Assets. Data integrity often includes checks and correction for invalid data, based on a fixedschemaor a predefined set of rules. An example being textual data entered where a date-time value is required. Rules for data derivation are also applicable, specifying how a data value is derived based on algorithm, contributors and conditions. It also specifies the conditions on how the data value could be re-derived. Data integrity is normally enforced in adatabase systemby a series of integrity constraints or rules. Three types of integrity constraints are an inherent part of therelational data model: entity integrity, referential integrity and domain integrity. If a database supports these features, it is the responsibility of the database to ensure data integrity as well as theconsistency modelfor the data storage and retrieval. If a database does not support these features, it is the responsibility of the applications to ensure data integrity while the database supports theconsistency modelfor the data storage and retrieval. Having a single, well-controlled, and well-defined data-integrity system increases: Moderndatabasessupport these features (seeComparison of relational database management systems), and it has become the de facto responsibility of the database to ensure data integrity. Companies, and indeed many database systems, offer products and services to migrate legacy systems to modern databases. An example of a data-integrity mechanism is the parent-and-child relationship of related records. 
If a parent record owns one or more related child records all of the referential integrity processes are handled by the database itself, which automatically ensures the accuracy and integrity of the data so that no child record can exist without a parent (also called being orphaned) and that no parent loses their child records. It also ensures that no parent record can be deleted while the parent record owns any child records. All of this is handled at the database level and does not require coding integrity checks into each application. Various research results show that neither widespreadfilesystems(includingUFS,Ext,XFS,JFSandNTFS) norhardware RAIDsolutions provide sufficient protection against data integrity problems.[5][6][7][8][9] Some filesystems (includingBtrfsandZFS) provide internal data andmetadatachecksumming that is used for detectingsilent data corruptionand improving data integrity. If a corruption is detected that way and internal RAID mechanisms provided by those filesystems are also used, such filesystems can additionally reconstruct corrupted data in a transparent way.[10]This approach allows improved data integrity protection covering the entire data paths, which is usually known asend-to-end data protection.[11]
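To make the human-transcription checks mentioned earlier concrete, here is a minimal sketch of the Luhn algorithm, the kind of lightweight check digit test used to catch mistyped credit card or account numbers before they enter a system.

```python
# Minimal Luhn check: detects all single-digit transcription errors and most
# adjacent-digit transpositions in manually entered identification numbers.
def luhn_valid(number: str) -> bool:
    digits = [int(c) for c in number if c.isdigit()]
    total = 0
    # Double every second digit from the right; subtract 9 if the result > 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0


print(luhn_valid("79927398713"))  # True  — standard Luhn test number
print(luhn_valid("79927398714"))  # False — single-digit error detected
```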
https://en.wikipedia.org/wiki/Data_integrity
Privacy laws of the United Statesdeal with several differentlegalconcepts. One is theinvasion of privacy, atortbased in common law allowing an aggrieved party to bring a lawsuit against an individual who unlawfully intrudes into their private affairs, discloses their private information, publicizes them in a false light, or appropriates their name for personal gain.[1] The essence of the law derives from aright to privacy, defined broadly as "the right to be let alone". It usually excludes personal matters or activities which may reasonably be of public interest, like those of celebrities or participants in newsworthy events. Invasion of the right to privacy can be the basis for a lawsuit for damages against the person or entity violating the right. These include theFourth Amendmentright to be free of unwarranted search or seizure, theFirst Amendmentright to free assembly, and theFourteenth Amendmentdue process right, recognized by theSupreme Court of the United Statesas protecting a general right to privacy within family, marriage, motherhood, procreation, and child rearing.[2][3] Attempts to improve consumer privacy protections in the U.S. in the wake of the2017 Equifax data breach, which affected 145.5 million U.S. consumers, failed to pass in Congress.[4] The early years in the development of privacy rights began withEnglish common law, protecting "only the physical interference of life and property".[5]The Castle doctrine analogizes a person's home to their castle – a site that is private and should not be accessible without permission of the owner. The development of tort remedies by the common law is "one of the most significant chapters in the history of privacy law".[6]Those rights expanded to include a "recognition of man's spiritual nature, of his feelings and his intellect." Eventually, the scope of those rights broadened even further to include a basic "right to be let alone," and the former definition of "property" would then comprise "every form of possession – intangible, as well as tangible." By the late 19th century, interest in privacy grew as a result of the growth of print media, especially newspapers.[6] Between 1850 and 1890, U.S. newspaper circulation grew by 1,000 percent – from 100 papers with 800,000 readers to 900 papers with more than 8 million readers.[6]In addition, newspaper journalism became more sensationalized, and was termedyellow journalism. The growth of industrialism led to rapid advances in technology, including the handheld camera, as opposed to earlierstudio cameras, which were much heavier and larger. In 1884,Eastman Kodakcompany introduced theirKodak Brownie, and it became amass marketcamera by 1901, cheap enough for the general public. This allowed people and journalists to take candid snapshots in public places for the first time. Privacy was dealt with at the state level. For example,Pavesich v. New England Life Insurance Company(in 1905) was one of the first specific endorsements of the right to privacy as derived fromnatural lawin US law. Judith Wagner DeCew stated, "Pavesichwas the first case to recognize privacy as a right in tort law by invoking natural law, common law, and constitutional values."[7] Samuel D. WarrenandLouis D. Brandeis, partners in a new law firm, feared that this new small camera technology would be used by the "sensationalistic press." 
Seeing this becoming a likely challenge to individual privacy rights, they wrote the "pathbreaking"[6]Harvard Law Reviewarticle in 1890, "The Right to Privacy".[8]According to legal scholarRoscoe Pound, the article did "nothing less than add a chapter to our law",[9]and in 1966 legal textbook author,Harry Kalven, hailed it as the "most influential law review article of all".[6]In the Supreme Court case ofKyllo v. United States, 533 U.S. 27 (2001), the article was cited by a majority of justices, both those concurring and those dissenting.[6] The development of the doctrine regarding the tort of "invasion of privacy" was largely spurred by the Warren and Brandeis article, "The Right to Privacy". In it, they explain why they wrote the article in its introduction: "Political, social, and economic changes entail the recognition of new rights, and the common law, in its eternal youth, grows to meet the demands of society".[8]More specifically, they also shift their focus on newspapers: The press is overstepping in every direction the obvious bounds of propriety and of decency. Gossip is no longer the resource of the idle and of the vicious, but has become a trade, which is pursued with industry as well as effrontery. To satisfy a prurient taste the details of sexual relations are spread broadcast in the columns of the daily papers. ... The intensity and complexity of life, attendant upon advancing civilization, have rendered necessary some retreat from the world, and man, under the refining influence of culture, has become more sensitive to publicity, so that solitude and privacy have become more essential to the individual; but modern enterprise and invention have, through invasions upon his privacy, subjected him to mental pain and distress, far greater than could be inflicted by mere bodily injury.[8] They then clarify their goals: "It is our purpose to consider whether the existing law affords a principle which can properly be invoked to protect the privacy of the individual; and, if it does, what the nature and extent of such protection is".[8] Warren and Brandeis write that privacy rights should protect both businesses and private individuals. They describe rights intrade secretsand unpublished literary materials, regardless whether those rights are invaded intentionally or unintentionally, and without regard to any value they may have. For private individuals, they try to define how to protect "thoughts, sentiments, and emotions, expressed through the medium of writing or of the arts". They describe such things as personal diaries and letters needing protection, and how that should be done: "Thus, the courts, in searching for some principle upon which the publication of private letters could be enjoined, naturally came upon the ideas of abreach of confidence, and of animplied contract". They also define this as a breach of trust, where a person has trusted that another will not publish their personal writings, photographs, or artwork, without their permission, including any "facts relating to his private life, which he has seen fit to keep private". 
And recognizing that technological advances will become more relevant, they write: "Now that modern devices afford abundant opportunities for the perpetration of such wrongs without any participation by the injured party, the protection granted by the law must be placed upon a broader foundation".[8] There have been many laws related to privacy and data protection in recent years that have been enforced as a result of the rapid technological advancements. However, critics and scholars have argued that these guidelines are usually focused on legal factors, rather than technical details, which make it difficult for engineers and developers to ensure that new designs meet the guidelines stated in privacy laws.[10] In the United States,"invasion of privacy" is a commonly usedcause of actionin legalpleadings. Moderntort law, as first categorized byWilliam Prosser, includes four categories of invasion of privacy:[11] Intrusion of solitude occurs where one person intrudes upon the private affairs of another. In a famous case from 1944, authorMarjorie Kinnan Rawlingswas sued by Zelma Cason, who was portrayed as a character in Rawlings' acclaimed memoir,Cross Creek.[12]TheFlorida Supreme Courtheld that a cause of action for invasion of privacy was supported by the facts of the case, but in a later proceeding found that there were no actual damages. Intrusion upon seclusionoccurs when a perpetrator intentionally intrudes, physically, electronically, or otherwise, upon the private space, solitude, or seclusion of a person, or the private affairs or concerns of a person, by use of the perpetrator's physical senses or by electronic device or devices to oversee or overhear the person's private affairs, or by some other form of investigation, examination, or observation intrude upon a person's private matters if the intrusion would be highly offensive to a reasonable person. Hacking into someone else's computer is a type of intrusion upon privacy,[13]as is secretly viewing or recording private information by still or video camera.[14]In determining whether intrusion has occurred, one of three main considerations may be involved:expectation of privacy; whether there was an intrusion, invitation, or exceedance of invitation; or deception, misrepresentation, or fraud to gain admission. Intrusion is "an information-gathering, not a publication, tort ... legal wrong occurs at the time of the intrusion. No publication is necessary".[15] Restrictions against the invasion of privacy encompasses journalists as well: The First Amendment has never been construed to accord newsmen immunity from torts or crimes committed during the course of newsgathering. The First Amendment is not a license to trespass, to steal, or to intrude by electronic means into the precincts of another's home or office.[15][16] Public disclosure of private facts arises where one person reveals information which is not of public concern, and the release of which would offend a reasonable person.[17]"Unlike libel or slander, truth is not a defense for invasion of privacy."[13]Disclosure of private facts includes publishing or widespread dissemination of little-known, private facts that are non-newsworthy, not part of public records, public proceedings, not of public interest, and would be offensive to a reasonable person if made public.[15] False light is alegalterm that refers to atortconcerningprivacythat is similar to the tort ofdefamation. 
For example, the privacy laws in the United States include anon-public person'sright to privacy frompublicitywhich creates an untrue or misleading impression about them. A non-public person's right to privacy from publicity is balanced against theFirst Amendmentright offree speech. False lightlawsare "intended primarily to protect theplaintiff'smentaloremotionalwell-being".[18]If apublicationofinformationisfalse, then a tort ofdefamationmight have occurred. If thatcommunicationis nottechnicallyfalse but is stillmisleading, then a tort of false light might have occurred.[18] The specific elements of the Tort of false light vary considerably even among thosejurisdictionswhich do recognize this tort. Generally, these elements consist of the following: Thus in general, the doctrine of false light holds: One who gives publicity to a matter concerning another before the public in a false light is subject to liability to the other for invasion of privacy, if (a) the false light in which the other was placed would be highly offensive to a reasonable person, and (b) the actor had knowledge of or acted in a reckless disregard as to the falsity of the publicized matter and the false light in which the other would be placed.[19] For this wrong, money damages may be recovered from the first person by the other. At first glance, this may appear to be similar todefamation(libel and slander), but the basis for the harm is different, and the remedy is different in two respects. First, unlike libel and slander, no showing of actual harm or damage to the plaintiff is usually required in false light cases, and the court will determine the amount of damages. Second, being a violation of a Constitutional right of privacy, there may be no applicable statute of limitations in some jurisdictions specifying a time limit within which period a claim must be filed. Consequently, although it is infrequently invoked, in some cases false light may be a more attractive cause of action for plaintiffs than libel or slander, because the burden of proof may be less onerous. What does "publicity" mean? A newspaper of general circulation (or comparable breadth) or as few as 3–5 people who know the person harmed? Neither defamation nor false light has ever required everyone in society be informed by a harmful act, but the scope of "publicity" is variable. In some jurisdictions, publicity "means that the matter is made public, by communicating it to the public at large, or to so many persons that the matter must be regarded as substantially certain to become one of public knowledge."[20] Moreover, the standards of behavior governing employees of government institutions subject to a state or national Administrative Procedure Act (as in the United States) are often more demanding than those governing employees of private or business institutions like newspapers. A person acting in an official capacity for a government agency may find that their statements are not indemnified by the principle of agency, leaving them personally liable for any damages. Example: If someone's reputation was portrayed in a false light during a personnel performance evaluation in a government agency or public university, one might be wronged if only a small number initially learned of it, or if adverse recommendations were made to only a few superiors (by a peer committee to department chair, dean, dean's advisory committee, provost, president, etc.). 
Settled cases suggest false light may not be effective in private school personnel cases,[21]but they may be distinguishable from cases arising in public institutions. Although privacy is often a common-law tort, most states have enacted statutes that prohibit the use of a person's name or image if used without consent for the commercial benefit of another person.[22] Appropriation of name or likeness occurs when a person uses the name or likeness of another person for personal gain or commercial advantage. Action for misappropriation of right of publicity protects a person against loss caused by appropriation of personal likeness for commercial exploitation. A person's exclusive rights to control their name and likeness to prevent others from exploiting without permission is protected in similar manner to a trademark action with the person's likeness, rather than the trademark, being the subject of the protection.[13] Appropriation is the oldest recognized form of invasion of privacy involving the use of an individual's name, likeness, or identity without consent for purposes such as ads, fictional works, or products.[15] "The same action – appropriation – can violate either an individual's right of privacy or right of publicity. Conceptually, however, the two rights differ."[15] TheFair Credit Reporting Actbecame effective on April 25, 1971, and implemented limitations on the information that could be collected, stored, and utilized by agencies such as credit bureaus, tenant screenings, and health agencies. The law also defined the rights granted to individuals in regards to their financial information including the right to obtain a credit score; the right to know what information is in your financial file; the right to know when your information is being accessed and used; and the right to dispute any inaccurate or incorrect information.[23] TheVideo Privacy Protection Actof 1988 (VPPA) was signed into law by PresidentRonald Reaganto preserve the privacy of people's information collected when they rented, purchased, or delivered audio visual materials, and specifically videotapes.[24]The law arose out of theBork tapescontroversy surrounding theWashington City Paper's publication of a list of films rented byRobert Bork, aU.S. District of Columbia Circuit Court of AppealsJudge who had beennominated to fill a seat on the United States Supreme Courtat the time.[25]The law prohibits the disclosure of personal information collected by video tape service providers unless it falls under certain exceptions.[26]The VPPA became a focus of attention in the legal industry once again around 2022. Its revival came as part of a larger trend in consumerclass actionsfiled based on privacy law violations, both through new laws like theCalifornia Consumer Privacy Actand older laws like the VPPA and wiretapping statutes. Signed in law on August 21, 1996,Health Insurance Portability and Accountability Act(HIPAA) is a piece of legislation passed in the United States that limits the amount and types of information that can be collected and stored by healthcare providers. This includes limits on how that information can be obtained, stored, and released.[27]HIPAA also developed data confidentiality requirements that are a part of "The Privacy Rule."[28] TheGramm-Leach-Bliley Act(GLA) is a federal law that was signed into effect on November 12, 1999. 
This act placed increased limits and requirements for data collection by financial institutions, as well as limited how that information could be collected and stored. It focused on requiring financial institutions to take specific measure to increase the safety and confidentiality of the information being collected. In addition to this, the law also put limitations on what type of data could be collected by financial institutions and how they could use that information.[27]The act strives to protect NPI, or nonpublic personal information, which is any information that is collected regarding an individual's finances that is not otherwise publicly available.[28] TheChildren's Online Privacy Protection Act(COPPA), passed on April 21, 2000, is a federal law in the United States that puts severe restrictions on what data companies can collect, share, or sell about children who are under the age of 13.[29]A core provision under COPPA is that a website operator must "obtain verifiable parental consent before any collection, use, or disclosure of personal information from children."[30] Although the word "privacy" is actually never used in the text of theUnited States Constitution,[31]there are Constitutional limits to the government's intrusion into individuals' right to privacy. This is true even when pursuing a public purpose such as exercising police powers or passing legislation. The Constitution, however, only protects againststate actors. Invasions of privacy by individuals can only be remedied under previous court decisions. TheFirst Amendmentprotects the right to free assembly, broadening privacy rights. TheFourth Amendment to the Constitution of the United Statesensures that "the right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no warrants shall issue, but upon probable cause, supported by oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized." The Fourth Amendment was theFramers' attempt to protect each citizen's spiritual and intellectual integrity.[citation needed]A government that violates the Fourth Amendment in order to use evidence against a citizen is also violating theFifth Amendment.[32]TheNinth Amendmentdeclares that "The enumeration in the Constitution, of certain rights, shall not be construed to deny or disparage others retained by the people." TheSupreme Courthas interpreted theFourteenth Amendmentas providing a substantive due process right to privacy. This was first affirmed by several Supreme Court Justices inGriswold v. Connecticut, a 1965 decision protecting a married couple's rights to contraception. InRoe v. Wade(1973), the Supreme Court invoked a "right to privacy" as creating a right to an abortion, sparking a lasting nationwide debate on the meaning of the term "right to privacy". InLawrence v. Texas(2003), the Supreme Court invoked the right to privacy regarding the sexual practices of same-sex couples. However, due toDobbs v. 
Jackson Women's Health Organization(2022) breaking many precedents set byGriswoldandRoe, the privacy interpretations brought about specifically by these cases are currently of ambiguous legal force.[citation needed] On August 22, 1972, the Alaska Right of Privacy Amendment, Amendment 3, was approved with 86% of the vote in support of the legislatively referred constitutional amendment.[33]Article I, Section 22 of Alaska's constitution states, "The right of the people to privacy is recognized and shall not be infringed. The legislature shall implement this section."[34] TheCalifornia Constitutionarticulates privacy as aninalienable right.[35] CA SB 1386 expands on privacy law and guarantees that if a company exposes a Californian's sensitive information this exposure must be reported to the citizen. This law has inspired many states to come up with similar measures.[36] California's"Shine the Light" law(SB 27, CA Civil Code § 1798.83), operative on January 1, 2005, outlines specific rules regarding how and when a business must disclose use of a customer'spersonal informationand imposes civil damages for violation of the law. California's Reader Privacy Act was passed into law in 2011.[37]The law prohibits a commercial provider of a book service, as defined, from disclosing, or being compelled to disclose, any personal information relating to a user of the book service, subject to certain exceptions. The bill would require a provider to disclose personal information of a user only if a court order has been issued, as specified, and certain other conditions have been satisfied. The bill would impose civil penalties on a provider of a book service for knowingly disclosing a user's personal information to a government entity in violation of these provisions. This law is applicable to electronic books in addition to print books.[38] The California Privacy Rights Act created theCalifornia Privacy Protection Agency, the first data protection agency in the United States.[39][40] Article I, §23 of theFlorida Constitutionstates that "Every natural person has the right to be let alone and free from governmental intrusion into the person's private life except as otherwise provided herein. This section shall not be construed to limit the public's right of access to public records and meetings as provided by law."[41] Article 2, §10 of theMontana Constitutionstates that "The right of individual privacy is essential to the well-being of a free society and shall not be infringed without the showing of a compelling state interest".[42] Article 1, §7 of theWashington Constitutionstates that "No person shall be disturbed in his private affairs, or his home invaded, without authority of law".[43] The right to privacy is protected also by more than 600 laws in the states and by a dozen federal laws, like those protecting health and student information, also limiting electronic surveillance.[46] As of 2022 however, only five states had data privacy laws.[47] Several of the US federal privacy laws have substantial "opt-out" requirements, requiring that the individual specifically opt-out of commercial dissemination ofpersonally identifiable information (PII). In some cases, an entity wishing to "share" (disseminate) information is required to provide a notice, such as aGLBAnotice or aHIPAAnotice, requiring individuals to specifically opt-out.[48]These "opt-out" requests may be executed either by use of forms provided by the entity collecting the data, with or without separate written requests. 
The Health Information Technology for Economic and Clinical Health Act (HITECH Act) is an important piece of legislation in the United States that relates to the privacy of health-related information. Enacted as part of the American Recovery and Reinvestment Act of 2009, the HITECH Act addresses the privacy and security concerns associated with the electronic transmission of health information.
https://en.wikipedia.org/wiki/Privacy_laws_of_the_United_States
In statistics and signal processing, the method of empirical orthogonal function (EOF) analysis is a decomposition of a signal or data set in terms of orthogonal basis functions which are determined from the data. The term is also interchangeable with the geographically weighted principal components analysis in geophysics.[1] The ith basis function is chosen to be orthogonal to the basis functions from the first through i − 1, and to minimize the residual variance. That is, the basis functions are chosen to be different from each other, and to account for as much variance as possible. The method of EOF analysis is similar in spirit to harmonic analysis, but harmonic analysis typically uses predetermined orthogonal functions, for example, sine and cosine functions at fixed frequencies. In some cases the two methods may yield essentially the same results. The basis functions are typically found by computing the eigenvectors of the covariance matrix of the data set. A more advanced technique is to form a kernel matrix from the data, using a fixed kernel function. The basis functions obtained from the eigenvectors of the kernel matrix are then non-linear in the location of the data (see Mercer's theorem and the kernel trick for more information).
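A minimal illustration of the eigenvector route just described: with a data matrix whose rows are time samples and whose columns are spatial locations (an assumed layout, chosen for the example), the EOFs are the eigenvectors of the spatial covariance matrix, and projecting the data onto them yields the corresponding principal-component time series.

```python
# Sketch of EOF analysis via eigenvectors of the covariance matrix.
# Rows of `data` are time samples, columns are spatial points (assumed layout).
import numpy as np

rng = np.random.default_rng(0)
data = rng.standard_normal((200, 10))        # synthetic example data
anomalies = data - data.mean(axis=0)         # remove the time mean
cov = np.cov(anomalies, rowvar=False)        # spatial covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)       # symmetric matrix => eigh
order = np.argsort(eigvals)[::-1]            # sort by explained variance
eofs = eigvecs[:, order]                     # each column is one EOF
explained = eigvals[order] / eigvals.sum()   # fraction of variance per EOF
pcs = anomalies @ eofs                       # principal-component time series
print(explained[:3])
```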
https://en.wikipedia.org/wiki/Empirical_orthogonal_functions
A confidence band is used in statistical analysis to represent the uncertainty in an estimate of a curve or function based on limited or noisy data. Similarly, a prediction band is used to represent the uncertainty about the value of a new data-point on the curve, but subject to noise. Confidence and prediction bands are often used as part of the graphical presentation of results of a regression analysis. Confidence bands are closely related to confidence intervals, which represent the uncertainty in an estimate of a single numerical value. "As confidence intervals, by construction, only refer to a single point, they are narrower (at this point) than a confidence band which is supposed to hold simultaneously at many points."[1] Suppose our aim is to estimate a function f(x). For example, f(x) might be the proportion of people of a particular age x who support a given candidate in an election. If x is measured at the precision of a single year, we can construct a separate 95% confidence interval for each age. Each of these confidence intervals covers the corresponding true value f(x) with confidence 0.95. Taken together, these confidence intervals constitute a 95% pointwise confidence band for f(x). In mathematical terms, a pointwise confidence band $\hat{f}(x) \pm w(x)$ with coverage probability 1 − α satisfies the following condition separately for each value of x:

$\Pr\left(\hat{f}(x) - w(x) \le f(x) \le \hat{f}(x) + w(x)\right) \ge 1 - \alpha,$

where $\hat{f}(x)$ is the point estimate of f(x). The simultaneous coverage probability of a collection of confidence intervals is the probability that all of them cover their corresponding true values simultaneously. In the example above, the simultaneous coverage probability is the probability that the intervals for x = 18, 19, ... all cover their true values (assuming that 18 is the youngest age at which a person can vote). If each interval individually has coverage probability 0.95, the simultaneous coverage probability is generally less than 0.95. A 95% simultaneous confidence band is a collection of confidence intervals for all values x in the domain of f(x) that is constructed to have simultaneous coverage probability 0.95. In mathematical terms, a simultaneous confidence band $\hat{f}(x) \pm w(x)$ with coverage probability 1 − α satisfies the following condition:

$\Pr\left(\hat{f}(x) - w(x) \le f(x) \le \hat{f}(x) + w(x) \text{ for all } x\right) \ge 1 - \alpha.$

In nearly all cases, a simultaneous confidence band will be wider than a pointwise confidence band with the same coverage probability. In the definition of a pointwise confidence band, that universal quantifier moves outside the probability function. Confidence bands commonly arise in regression analysis.[2] In the case of a simple regression involving a single independent variable, results can be presented in the form of a plot showing the estimated regression line along with either point-wise or simultaneous confidence bands. Commonly used methods for constructing simultaneous confidence bands in regression are the Bonferroni and Scheffé methods; see Family-wise error rate controlling procedures for more. Confidence bands can be constructed around estimates of the empirical distribution function. Simple theory allows the construction of point-wise confidence intervals, but it is also possible to construct a simultaneous confidence band for the cumulative distribution function as a whole by inverting the Kolmogorov–Smirnov test, or by using non-parametric likelihood methods.[3] Confidence bands arise whenever a statistical analysis focuses on estimating a function.
Confidence bands have also been devised for estimates of density functions, spectral density functions,[4] quantile functions, scatterplot smooths, survival functions, and characteristic functions.[citation needed] Prediction bands are related to prediction intervals in the same way that confidence bands are related to confidence intervals. Prediction bands commonly arise in regression analysis. The goal of a prediction band is to cover with a prescribed probability the values of one or more future observations from the same population from which a given data set was sampled. Just as prediction intervals are wider than confidence intervals, prediction bands will be wider than confidence bands. In mathematical terms, a prediction band $\hat{f}(x) \pm w(x)$ with coverage probability 1 − α satisfies the following condition for each value of x:

$\Pr\left(\hat{f}(x) - w(x) \le y^{*} \le \hat{f}(x) + w(x)\right) \ge 1 - \alpha,$

where y* is an observation taken from the data-generating process at the given point x that is independent of the data used to construct the point estimate $\hat{f}(x)$ and the half-width w(x). This is a pointwise prediction band. It would be possible to construct a simultaneous band for a finite number of independent observations using, for example, the Bonferroni method to widen the band by an appropriate amount.
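As a rough illustration of the pointwise-versus-simultaneous distinction, the sketch below fits a simple linear regression and compares a pointwise 95% band for the mean response with a crude simultaneous band obtained by the Bonferroni correction (splitting α across the evaluation points). This is only one of the constructions mentioned above, applied at a finite grid of x values rather than the whole domain; the data are synthetic.

```python
# Pointwise vs. Bonferroni "simultaneous" 95% confidence bands for the mean
# response in simple linear regression, evaluated on a finite grid of x values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 50)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=x.size)

n = x.size
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
s2 = resid @ resid / (n - 2)                 # residual variance estimate
XtX_inv = np.linalg.inv(X.T @ X)

grid = np.linspace(0, 10, 25)                # evaluation points for the band
G = np.column_stack([np.ones(grid.size), grid])
se_mean = np.sqrt(s2 * np.einsum("ij,jk,ik->i", G, XtX_inv, G))
fit = G @ beta

alpha = 0.05
t_point = stats.t.ppf(1 - alpha / 2, df=n - 2)                # pointwise
t_simul = stats.t.ppf(1 - alpha / (2 * grid.size), df=n - 2)  # Bonferroni
pointwise_band = (fit - t_point * se_mean, fit + t_point * se_mean)
simultaneous_band = (fit - t_simul * se_mean, fit + t_simul * se_mean)
print(t_point, t_simul)  # the simultaneous band uses the larger multiplier
```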
https://en.wikipedia.org/wiki/Confidence_and_prediction_bands
In parallel computing, an embarrassingly parallel workload or problem (also called embarrassingly parallelizable, perfectly parallel, delightfully parallel or pleasingly parallel) is one where little or no effort is needed to split the problem into a number of parallel tasks.[1] This is because the parallel tasks have minimal or no need to communicate with one another or to share intermediate results.[2] These differ from distributed computing problems, which need communication between tasks, especially communication of intermediate results. They are easier to perform on server farms which lack the special infrastructure used in a true supercomputer cluster. They are well suited to large, Internet-based volunteer computing platforms such as BOINC, and suffer less from parallel slowdown. The opposite of embarrassingly parallel problems are inherently serial problems, which cannot be parallelized at all. A common example of an embarrassingly parallel problem is 3D video rendering handled by a graphics processing unit, where each frame (forward method) or pixel (ray tracing method) can be handled with no interdependency.[3] Some forms of password cracking are another embarrassingly parallel task that is easily distributed on central processing units, CPU cores, or clusters. "Embarrassingly" is used here to refer to parallelization problems which are "embarrassingly easy".[4] The term may imply embarrassment on the part of developers or compilers: "Because so many important problems remain unsolved mainly due to their intrinsic computational complexity, it would be embarrassing not to develop parallel implementations of polynomial homotopy continuation methods."[5] The term is first found in the literature in a 1986 book on multiprocessors by MATLAB's creator Cleve Moler,[6] who claims to have invented the term.[7] An alternative term, pleasingly parallel, has gained some use, perhaps to avoid the negative connotations of embarrassment in favor of a positive reflection on the parallelizability of the problems: "Of course, there is nothing embarrassing about these programs at all."[8] A trivial example involves serving static data. It would take very little effort to have many processing units produce the same set of bits. Indeed, the famous Hello World problem could easily be parallelized with few programming considerations or computational costs. Other examples of embarrassingly parallel problems include Monte Carlo analysis, brute-force searches in cryptography, and simulations in which many independent scenarios are run separately, as in the sketch below.
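A minimal sketch of the idea: because the tasks share no state and exchange no intermediate results, they can be handed to a pool of workers in a single call. The Monte Carlo estimate of π used here is just a stand-in for any independent per-task computation.

```python
# Embarrassingly parallel example: independent Monte Carlo batches estimating
# pi, distributed across worker processes with no inter-task communication.
import random
from multiprocessing import Pool


def hits_in_batch(n_samples: int) -> int:
    rng = random.Random()          # each worker uses its own generator
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return inside


if __name__ == "__main__":
    batches = [100_000] * 8        # eight fully independent tasks
    with Pool() as pool:
        hits = pool.map(hits_in_batch, batches)   # no shared state, no sync
    print(4 * sum(hits) / sum(batches))
```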
https://en.wikipedia.org/wiki/Embarrassingly_parallel
George Spencer-Brown(2 April 1923 – 25 August 2016) was an Englishpolymathbest known as the author ofLaws of Form. He described himself as a "mathematician, consulting engineer,psychologist, educational consultant and practitioner, consultingpsychotherapist, author, and poet".[1] Born inGrimsby, Lincolnshire, England, Spencer-Brown attendedMill Hill Schooland then passed the First M.B. in 1940 atLondon Hospital Medical College[2](now part ofBarts and The London School of Medicine and Dentistry). After serving in the Royal Navy (1943–47), he studied atTrinity College, Cambridge, earning Honours in Philosophy (1950) and Psychology (1951), and where he metBertrand Russell. From 1952 to 1958, he taught philosophy atChrist Church, Oxford, took M.A. degrees in 1954 from both Oxford and Cambridge, and wrote his doctorate thesisProbability and Scientific Inferenceunder the supervision ofWilliam Knealewhich was published as a book in 1957.[3][4] During the 1960s, he became a disciple of the innovative Scottish psychiatristR. D. Laing, frequently cited inLaws of Form. In 1964, onBertrand Russell's recommendation, he became a lecturer in formal mathematics at theUniversity of London. From 1969 onward, he was affiliated with the Department of Pure Mathematics and Mathematical Statistics at theUniversity of Cambridge. In the 1970s and 1980s, he was visiting professor at theUniversity of Western Australia,Stanford University, and at theUniversity of Maryland, College Park.[citation needed] Laws of Form, at once a work of mathematics and of philosophy, emerged from work in electronic engineering Spencer-Brown did around 1960, and from lectures onmathematical logiche later gave under the auspices of the University of London's extension program. First published in 1969, it has never been out of print. Spencer-Brown referred to the mathematical system ofLaws of Formas the "primary algebra" and the "calculus of indications"; others have termed it "boundary algebra". The primary algebra is essentially an elegant minimalist notation for thetwo-element Boolean algebra. One core aspect of the text is the 'observer dilemma' that arises from the very situation of the observer to have decided on the object of observation - while inevitably leaving aside other objects. Such an un-observed object is attributed the 'unmarked state', the realm of all 'unmarked space'.[5] Laws of Formhas influenced, among others,Heinz von Foerster,Louis Kauffman,Niklas Luhmann,Humberto Maturana,Francisco Varela,Leon Conrad,[6]and William Bricken. Some of these authors have modified and extended the primary algebra, with interesting consequences. In a 1976 letter to the Editor ofNature, Spencer-Brown claimed a proof of thefour-color theorem, which is not computer-assisted.[7]The preface of the 1979 edition ofLaws of Formrepeats that claim, and further states that the generally accepted computational proof by Appel, Haken, and Koch has 'failed' (page xii). Spencer-Brown's claimed proof of the four-color theorem has yet to find any defenders; Kauffman provides a detailed review of parts of that work.[8][9] The 6th edition ofLaws of Formadvertises that it includes "the first-ever proof ofRiemann's hypothesis".[10] During his time at Cambridge,[clarification needed]Spencer-Brown was a chesshalf-blue. He held two world records as aglider pilot, and was a sportscorrespondentto theDaily Express.[11]He also wrote some novels and poems, sometimes employing the pen nameJames Keys. 
Spencer-Brown died on 25 August 2016.[citation needed] He was buried at the London Necropolis, Brookwood, Surrey.[citation needed] While not denying some of his talent, not all critics of Spencer-Brown's claims and writings have been willing to assess them at his own valuation; the poetry is at the most charitable reading an idiosyncratic taste, and some prominent voices have been decidedly dismissive of the value of his formal material. For example, Martin Gardner wrote in his essay "M-Pire Maps": In December of 1976 G. Spencer-Brown, the maverick British mathematician, startled his colleagues by announcing he had a proof of the four-color theorem that did not require computer checking. Spencer-Brown's supreme confidence and his reputation as a mathematician brought him an invitation to give a seminar on his proof at Stanford University. At the end of three months all the experts who attended the seminar agreed that the proof's logic was laced with holes, but Spencer-Brown returned to England still sure of its validity. The "proof" has not yet been published. Spencer-Brown is the author of a curious little book called Laws of Form,[12] which is essentially a reconstruction of the propositional calculus by means of an eccentric notation. The book, which the British mathematician John Horton Conway once described as beautifully written but "content-free," has a large circle of counterculture devotees.[13]
https://en.wikipedia.org/wiki/G._Spencer-Brown
The NX bit (no-execute bit) is a processor feature that separates areas of a virtual address space (the memory layout a program uses) into sections for storing data or program instructions. An operating system supporting the NX bit can mark certain areas of the virtual address space as non-executable, preventing the processor from running any code stored there. This technique, known as executable space protection or Write XOR Execute, protects computers from malicious software that attempts to insert harmful code into another program's data storage area and execute it, such as in a buffer overflow attack. The term "NX bit" was introduced by Advanced Micro Devices (AMD) as a marketing term. Intel markets this feature as the XD bit (execute disable), while the MIPS architecture refers to it as the XI bit (execute inhibit). In the ARM architecture it is known as XN (execute never), introduced in ARMv6.[1] The term NX bit is often used broadly to describe similar executable space protection technologies in other processors. x86 processors, since the 80286, included a similar capability implemented at the segment level. However, almost all operating systems for the 80386 and later x86 processors implement the flat memory model, so they cannot use this capability. Those processors had no "Executable" flag in the page table entry (page descriptor). To make this capability available to operating systems using the flat memory model, AMD added a "no-execute" (NX) bit to the page table entry in its AMD64 architecture, providing a mechanism that can control execution per page rather than per whole segment. Intel had implemented a similar feature in its Itanium (Merced) processor, which has the IA-64 architecture, in 2001, but did not bring it to the more popular x86 processor families (Pentium, Celeron, Xeon, etc.). In the x86 architecture the feature was first implemented by AMD, as the NX bit, for use by its AMD64 line of processors, such as the Athlon 64 and Opteron.[2] After AMD's decision to include this functionality in its AMD64 instruction set, Intel implemented the similar XD bit feature in x86 processors beginning with the Pentium 4 processors based on later iterations of the Prescott core.[3] The NX bit specifically refers to bit number 63 (i.e. the most significant bit) of a 64-bit entry in the page table. If this bit is set to 0, then code can be executed from that page; if it is set to 1, code cannot be executed from that page, and anything residing there is assumed to be data. The bit is only available with the long mode (64-bit mode) or legacy Physical Address Extension (PAE) page-table formats, not x86's original 32-bit page-table format, because page-table entries in that format are only 32 bits wide and so lack the bit used to enable and disable execution. Windows XP SP2 and later support Data Execution Prevention (DEP).
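As a rough illustration of the bit layout just described (a toy model only, not operating-system or hardware code; the example entry value is invented, and only the position of bit 63 comes from the text):

```python
# Toy model of the x86-64 NX flag: bit 63 of a 64-bit page-table entry.
# This only manipulates an integer; real page tables are managed by the OS and MMU.
NX_BIT = 1 << 63

def set_no_execute(pte: int) -> int:
    """Return the page-table entry with the NX bit set (page holds data only)."""
    return pte | NX_BIT

def is_executable(pte: int) -> bool:
    """A page is executable when the NX bit is clear (0)."""
    return (pte & NX_BIT) == 0

pte = 0x0000_0000_DEAD_B007                 # hypothetical entry with NX clear
print(is_executable(pte))                   # True  -> code may run from this page
print(is_executable(set_no_execute(pte)))   # False -> instruction fetches from this page would fault
```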
In ARMv6, a new page table entry format was introduced; it includes an "execute never" bit.[1] For ARMv8-A, the VMSAv8-64 block and page descriptors and the VMSAv8-32 long-descriptor block and page descriptors for stage 1 translations have separate "execute never" bits for privileged and unprivileged modes, while block and page descriptors for stage 2 translations have a single "execute never" bit (two bits with the ARMv8.2-TTS2UXN feature); VMSAv8-32 short-descriptor translation table descriptors have "execute never" bits for both privileged and unprivileged modes at level 1, and a single "execute never" bit at level 2.[4] As of the Fourth Edition of the Alpha Architecture manual, DEC (now HP) Alpha has a Fault on Execute bit in page table entries with the OpenVMS, Tru64 UNIX, and Alpha Linux PALcode.[5] The SPARC Reference MMU for Sun SPARC version 8 has permission values of Read Only, Read/Write, Read/Execute, and Read/Write/Execute in page table entries,[6] although not all SPARC processors have a SPARC Reference MMU. A SPARC version 9 MMU may provide, but is not required to provide, any combination of read/write/execute permissions.[7] A Translation Table Entry in a Translation Storage Buffer in Oracle SPARC Architecture 2011, Draft D1.0.0, has separate Executable and Writable bits.[8] Page table entries for IBM PowerPC's hashed page tables have a no-execute page bit.[9] Page table entries for radix-tree page tables in the Power ISA have separate permission bits granting read/write and execute access.[10] Translation lookaside buffer (TLB) entries and page table entries in PA-RISC 1.1 and PA-RISC 2.0 support read-only, read/write, read/execute, and read/write/execute pages.[11][12] TLB entries in Itanium support read-only, read/write, read/execute, and read/write/execute pages.[13] As of the twelfth edition of the z/Architecture Principles of Operation, z/Architecture processors may support the Instruction-Execution Protection facility, which adds a bit in page table entries that controls whether instructions from a given region, segment, or page can be executed.[14]
https://en.wikipedia.org/wiki/NX_bit
Cyber threat intelligence (CTI) is a subfield of cybersecurity that focuses on the structured collection, analysis, and dissemination of data regarding potential or existing cyber threats.[1][2] It provides organizations with the insights necessary to anticipate, prevent, and respond to cyberattacks by understanding the behavior of threat actors, their tactics, and the vulnerabilities they exploit.[3][4][5] Cyber threat intelligence sources include open source intelligence, social media intelligence, human intelligence, technical intelligence, device log files, forensically acquired data, intelligence from internet traffic, and data derived from the deep and dark web. In recent years, threat intelligence has become a crucial part of companies' cyber security strategy, since it allows companies to be more proactive in their approach and to determine which threats represent the greatest risks to the business. This puts companies on a more proactive footing, actively trying to find their vulnerabilities and preventing hacks before they happen.[6] The method has gained importance in recent years since, as IBM estimates, the most common way companies are hacked is via threat exploitation (47% of all attacks).[7] Threat vulnerabilities have also risen in recent years due to the COVID-19 pandemic and more people working from home, which makes companies' data more vulnerable. Due to the growing threats on one hand, and the growing sophistication needed for threat intelligence on the other, many companies have opted in recent years to outsource their threat intelligence activities to a managed security service provider (MSSP).[8] The process of developing cyber threat intelligence is a circular and continuous process, known as the intelligence cycle, which is composed of five phases[9][10][11][12] carried out by intelligence teams to provide leadership with relevant and convenient intelligence that reduces danger and uncertainty.[11] The five phases are: 1) planning and direction; 2) collection; 3) processing; 4) analysis; 5) dissemination.[9][10][11][12] In planning and direction, the customer of the intelligence product requests intelligence on a specific topic or objective. Then, once directed by the client, the second phase, collection, begins; it involves accessing the raw information that will be required to produce the finished intelligence product. Since information is not intelligence, it must be transformed, and therefore must go through the processing and analysis phases: in the processing (or pre-analytical) phase the raw information is filtered and prepared for analysis through a series of techniques (decryption, language translation, data reduction, etc.); in the analysis phase, the organized information is transformed into intelligence. Finally, in the dissemination phase, the newly produced threat intelligence is sent to the various users for their use.[10][12] There are three overarching, but not categorical, classes of cyber threat intelligence:[4] 1) tactical; 2) operational; 3) strategic.[4][9][12][13][14] These classes are fundamental to building a comprehensive threat assessment.[9] Cyber threat intelligence provides a number of benefits, which include: There are three key elements that must be present for information or data to be considered threat intelligence:[12] Cyber threats involve the use of computers, storage devices, software, networks and cloud-based repositories.
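A minimal sketch of the five-phase intelligence cycle described above as a data pipeline (the phase names come from the text; everything else, including the example observables and record fields, is invented for illustration):

```python
# Illustrative pipeline for the intelligence cycle: planning and direction,
# collection, processing, analysis, dissemination. Data and fields are made up.

def plan(requirement):
    """Planning and direction: capture what the intelligence consumer asked for."""
    return {"requirement": requirement}

def collect(tasking):
    """Collection: gather raw information (logs, feeds, reports) -- stubbed here."""
    return ["203.0.113.7 attempted SSH logins ", "Phishing domain example-login.test"]

def process(raw):
    """Processing: filter and normalise the raw information for analysis."""
    return [item.strip().lower() for item in raw]

def analyse(items):
    """Analysis: turn the organised information into an assessment."""
    return {"assessment": "credential-harvesting activity", "observables": items}

def disseminate(product):
    """Dissemination: deliver the finished intelligence to its consumers."""
    print(product)

disseminate(analyse(process(collect(plan("Who is targeting our remote workforce?")))))
```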
Prior to, during, or after a cyber attack, technical information about the information and operational technology, devices, networks, and computers between the attacker(s) and the victim(s) can be collected, stored and analyzed. However, identifying the person(s) behind an attack, their motivations, or the ultimate sponsor of the attack (termed attribution) is sometimes difficult,[20] as attackers can use deceptive tactics to evade detection or mislead analysts into drawing incorrect conclusions.[21] Multiple efforts[22][23][24] in threat intelligence emphasize understanding adversary TTPs to tackle these issues.[25] A number of recent[when?] cyber threat intelligence analytical reports have been released by public and private sector organizations which attribute cyber attacks. These include Mandiant's APT1 and APT28 reports,[26][27] US CERT's APT29 report,[28] and Symantec's Dragonfly, Waterbug Group and Seedworm reports.[29][30][31] In 2015, U.S. government legislation in the form of the Cybersecurity Information Sharing Act encouraged the sharing of CTI indicators between government and private organizations. This act required the U.S. federal government to facilitate and promote four CTI objectives:[32] In 2016, the U.S. government agency National Institute of Standards and Technology (NIST) issued a publication (NIST SP 800-150) which further outlined the necessity for cyber threat information sharing as well as a framework for implementation.[33]
https://en.wikipedia.org/wiki/Cyber_threat_intelligence
In theoretical linguistics and computational linguistics, probabilistic context-free grammars (PCFGs) extend context-free grammars, similar to how hidden Markov models extend regular grammars. Each production is assigned a probability. The probability of a derivation (parse) is the product of the probabilities of the productions used in that derivation. These probabilities can be viewed as parameters of the model, and for large problems it is convenient to learn these parameters via machine learning. A probabilistic grammar's validity is constrained by the context of its training dataset. PCFGs originated from grammar theory, and have applications in areas as diverse as natural language processing, the study of the structure of RNA molecules, and the design of programming languages. Designing efficient PCFGs has to weigh factors of scalability and generality. Issues such as grammar ambiguity must be resolved. The grammar design affects the accuracy of results. Grammar parsing algorithms have various time and memory requirements. Derivation: The process of recursive generation of strings from a grammar. Parsing: Finding a valid derivation using an automaton. Parse Tree: The alignment of the grammar to a sequence. An example of a parser for PCFG grammars is the pushdown automaton. The algorithm parses grammar nonterminals from left to right in a stack-like manner. This brute-force approach is not very efficient. In RNA secondary structure prediction, variants of the Cocke–Younger–Kasami (CYK) algorithm provide more efficient alternatives to grammar parsing than pushdown automata.[1] Another example of a PCFG parser is the Stanford Statistical Parser, which has been trained using Treebank.[2] Similar to a CFG, a probabilistic context-free grammar G can be defined by a quintuple: where PCFG models extend context-free grammars the same way as hidden Markov models extend regular grammars. The Inside-Outside algorithm is an analogue of the Forward-Backward algorithm. It computes the total probability of all derivations that are consistent with a given sequence, based on some PCFG. This is equivalent to the probability of the PCFG generating the sequence, and is intuitively a measure of how consistent the sequence is with the given grammar. The Inside-Outside algorithm is used in model parametrization to estimate prior frequencies observed from training sequences in the case of RNAs. Dynamic programming variants of the CYK algorithm find the Viterbi parse of an RNA sequence for a PCFG model. This parse is the most likely derivation of the sequence by the given PCFG. Context-free grammars are represented as a set of rules inspired by attempts to model natural languages.[3][4][5] The rules are absolute and have a typical syntax representation known as Backus–Naur form. The production rules consist of the terminal symbols {a, b} and the non-terminal symbol S; a blank ε may also be used as an end point. In the production rules of CFG and PCFG, the left side has only one nonterminal whereas the right side can be any string of terminals or nonterminals. In PCFG nulls are excluded.[1] An example of a grammar: This grammar can be shortened using the '|' ('or') character into: Terminals in a grammar are words, and through the grammar rules a non-terminal symbol is transformed into a string of terminals and/or non-terminals. The above grammar is read as "beginning from a non-terminal S the emission can generate either a or b or ε".
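As a small illustration of the statement above that a derivation's probability is the product of the probabilities of the rules it uses, here is a sketch with a made-up toy grammar in the spirit of the one just described (the rule probabilities are invented, not taken from the original article):

```python
# Toy PCFG in the spirit of the grammar above: S -> aS | bS | epsilon,
# with invented probabilities that sum to 1 over the rules for S.
rules = {
    ("S", ("a", "S")): 0.3,
    ("S", ("b", "S")): 0.5,
    ("S", ()):         0.2,   # S -> epsilon ends the derivation
}

def derivation_probability(derivation):
    """Probability of a derivation = product of the probabilities of its rules."""
    p = 1.0
    for rule in derivation:
        p *= rules[rule]
    return p

# S => aS => abS => ab   uses  S -> aS,  S -> bS,  S -> epsilon
print(derivation_probability([("S", ("a", "S")), ("S", ("b", "S")), ("S", ())]))
# 0.3 * 0.5 * 0.2 = 0.03
```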
Its derivation is: Ambiguous grammarmay result in ambiguous parsing if applied onhomographssince the same word sequence can have more than one interpretation.Pun sentencessuch as the newspaper headline "Iraqi Head Seeks Arms" are an example of ambiguous parses. One strategy of dealing with ambiguous parses (originating with grammarians as early asPāṇini) is to add yet more rules, or prioritize them so that one rule takes precedence over others. This, however, has the drawback of proliferating the rules, often to the point where they become difficult to manage. Another difficulty is overgeneration, where unlicensed structures are also generated. Probabilistic grammars circumvent these problems by ranking various productions on frequency weights, resulting in a "most likely" (winner-take-all) interpretation. As usage patterns are altered indiachronicshifts, these probabilistic rules can be re-learned, thus updating the grammar. Assigning probability to production rules makes a PCFG. These probabilities are informed by observing distributions on a training set of similar composition to the language to be modeled. On most samples of broad language, probabilistic grammars where probabilities are estimated from data typically outperform hand-crafted grammars. CFGs when contrasted with PCFGs are not applicable to RNA structure prediction because while they incorporate sequence-structure relationship they lack the scoring metrics that reveal a sequence structural potential[6] Aweighted context-free grammar(WCFG) is a more general category ofcontext-free grammar, where each production has a numeric weight associated with it. The weight of a specificparse treein a WCFG is the product[7](or sum[8]) of all rule weights in the tree. Each rule weight is included as often as the rule is used in the tree. A special case of WCFGs are PCFGs, where the weights are (logarithmsof[9][10])probabilities. An extended version of theCYK algorithmcan be used to find the "lightest" (least-weight) derivation of a string given some WCFG. When the tree weight is the product of the rule weights, WCFGs and PCFGs can express the same set ofprobability distributions.[7] Since the 1990s, PCFG has been applied to modelRNA structures.[11][12][13][14][15] Energy minimization[16][17]and PCFG provide ways of predicting RNA secondary structure with comparable performance.[11][12][1]However structure prediction by PCFGs is scored probabilistically rather than by minimum free energy calculation. PCFG model parameters are directly derived from frequencies of different features observed in databases of RNA structures[6]rather than by experimental determination as is the case with energy minimization methods.[18][19] The types of various structure that can be modeled by a PCFG include long range interactions, pairwise structure and other nested structures. However, pseudoknots can not be modeled.[11][12][1]PCFGs extend CFG by assigning probabilities to each production rule. A maximum probability parse tree from the grammar implies a maximum probability structure. Since RNAs preserve their structures over their primary sequence, RNA structure prediction can be guided by combining evolutionary information from comparative sequence analysis with biophysical knowledge about a structure plausibility based on such probabilities. Also search results for structural homologs using PCFG rules are scored according to PCFG derivations probabilities. 
Therefore, building grammar to model the behavior of base-pairs and single-stranded regions starts with exploring features of structuralmultiple sequence alignmentof related RNAs.[1] The above grammar generates a string in an outside-in fashion, that is the basepair on the furthest extremes of the terminal is derived first. So a string such asaabaabaa{\displaystyle aabaabaa}is derived by first generating the distala's on both sides before moving inwards: A PCFG model extendibility allows constraining structure prediction by incorporating expectations about different features of an RNA . Such expectation may reflect for example the propensity for assuming a certain structure by an RNA.[6]However incorporation of too much information may increase PCFG space and memory complexity and it is desirable that a PCFG-based model be as simple as possible.[6][20] Every possible stringxa grammar generates is assigned a probability weightP(x|θ){\displaystyle P(x|\theta )}given the PCFG modelθ{\displaystyle \theta }. It follows that the sum of all probabilities to all possible grammar productions is∑xP(x|θ)=1{\displaystyle \sum _{\text{x}}P(x|\theta )=1}. The scores for each paired and unpaired residue explain likelihood for secondary structure formations. Production rules also allow scoring loop lengths as well as the order of base pair stacking hence it is possible to explore the range of all possible generations including suboptimal structures from the grammar and accept or reject structures based on score thresholds.[1][6] RNA secondary structure implementations based on PCFG approaches can be utilized in : Different implementation of these approaches exist. For example, Pfold is used in secondary structure prediction from a group of related RNA sequences,[20]covariance models are used in searching databases for homologous sequences and RNA annotation and classification,[11][24]RNApromo, CMFinder and TEISER are used in finding stable structural motifs in RNAs.[25][26][27] PCFG design impacts the secondary structure prediction accuracy. Any useful structure prediction probabilistic model based on PCFG has to maintain simplicity without much compromise to prediction accuracy. Too complex a model of excellent performance on a single sequence may not scale.[1]A grammar based model should be able to: The resulting of multipleparse treesper grammar denotes grammar ambiguity. This may be useful in revealing all possible base-pair structures for a grammar. However an optimal structure is the one where there is one and only one correspondence between the parse tree and the secondary structure. Two types of ambiguities can be distinguished. Parse tree ambiguity and structural ambiguity. Structural ambiguity does not affect thermodynamic approaches as the optimal structure selection is always on the basis of lowest free energy scores.[6]Parse tree ambiguity concerns the existence of multiple parse trees per sequence. Such an ambiguity can reveal all possible base-paired structures for the sequence by generating all possible parse trees then finding the optimal one.[28][29][30]In the case of structural ambiguity multiple parse trees describe the same secondary structure. This obscures the CYK algorithm decision on finding an optimal structure as the correspondence between the parse tree and the structure is not unique.[31]Grammar ambiguity can be checked for by the conditional-inside algorithm.[1][6] A probabilistic context free grammar consists of terminal and nonterminal variables. 
Each feature to be modeled has a production rule that is assigned a probability estimated from a training set of RNA structures. Production rules are recursively applied until only terminal residues are left. A starting non-terminal S produces loops. The rest of the grammar proceeds with a parameter L that decides whether a loop is the start of a stem or a single-stranded region, and a parameter F that produces paired bases. The formalism of this simple PCFG looks like: The application of PCFGs in predicting structures is a multi-step process. In addition, the PCFG itself can be incorporated into probabilistic models that consider RNA evolutionary history or search for homologous sequences in databases. In an evolutionary-history context, inclusion of prior distributions of RNA structures of a structural alignment in the production rules of the PCFG facilitates good prediction accuracy.[21] A summary of general steps for utilizing PCFGs in various scenarios: Several algorithms dealing with aspects of PCFG-based probabilistic models in RNA structure prediction exist, for instance the inside-outside algorithm and the CYK algorithm. The inside-outside algorithm is a recursive dynamic programming scoring algorithm that can follow expectation-maximization paradigms. It computes the total probability of all derivations that are consistent with a given sequence, based on some PCFG. The inside part scores the subtrees from a parse tree and therefore subsequence probabilities given a PCFG. The outside part scores the probability of the complete parse tree for a full sequence.[32][33] CYK modifies the inside-outside scoring. Note that the term 'CYK algorithm' describes the CYK variant of the inside algorithm that finds an optimal parse tree for a sequence using a PCFG. It extends the actual CYK algorithm used in non-probabilistic CFGs.[1] The inside algorithm calculates α(i, j, v), the probability, for all i, j, v, of a parse subtree rooted at W_v for the subsequence x_i, ..., x_j. The outside algorithm calculates β(i, j, v), the probability of a complete parse tree for sequence x from the root, excluding the calculation of x_i, ..., x_j. The variables α and β refine the estimation of the probability parameters of a PCFG. It is possible to re-estimate the PCFG parameters by finding the expected number of times a state is used in a derivation, obtained by summing all the products of α and β divided by the probability of the sequence x given the model, P(x|θ). It is also possible to find the expected number of times a production rule is used by an expectation-maximization that utilizes the values of α and β.[32][33] The CYK algorithm calculates γ(i, j, v) to find the most probable parse tree π̂ and yields log P(x, π̂ | θ).[1] Memory and time complexity for general PCFG algorithms in RNA structure prediction are O(L²M) and O(L³M³), respectively. Restricting a PCFG may alter this requirement, as is the case with database search methods. Covariance models (CMs) are a special type of PCFG with applications in database searches for homologs, annotation and RNA classification.
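For concreteness, here is a compact sketch of the inside computation mentioned above, written for a tiny PCFG in Chomsky normal form rather than the RNA grammar discussed in the text (the grammar and its probabilities are invented for illustration; α(i, j, V) is built bottom-up over spans, and the total probability of the string is α over the full span for the start symbol):

```python
from collections import defaultdict

# Toy PCFG in Chomsky normal form; probabilities are invented and sum to 1 per left-hand side.
# Binary rules: A -> B C;  lexical rules: A -> terminal.
binary = {("S", "A", "B"): 0.9, ("S", "B", "A"): 0.1, ("A", "A", "A"): 0.2}
lexical = {("A", "a"): 0.8, ("B", "b"): 1.0}

def inside(words, binary, lexical, start="S"):
    """alpha[(i, j, V)] = probability that V derives words[i:j] (CYK-style inside algorithm)."""
    n = len(words)
    alpha = defaultdict(float)
    for i, w in enumerate(words):                      # spans of length 1 (lexical rules)
        for (v, term), p in lexical.items():
            if term == w:
                alpha[(i, i + 1, v)] += p
    for span in range(2, n + 1):                       # longer spans, bottom-up
        for i in range(0, n - span + 1):
            j = i + span
            for k in range(i + 1, j):                  # split point
                for (v, b, c), p in binary.items():
                    alpha[(i, j, v)] += p * alpha[(i, k, b)] * alpha[(k, j, c)]
    return alpha[(0, n, start)]

print(inside(list("ab"), binary, lexical))             # P(S => "ab") = 0.9 * 0.8 * 1.0 = 0.72
```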
Through CMs it is possible to build PCFG-based RNA profiles where related RNAs can be represented by a consensus secondary structure.[11][12]The RNA analysis package Infernal uses such profiles in inference of RNA alignments.[34]The Rfam database also uses CMs in classifying RNAs into families based on their structure and sequence information.[24] CMs are designed from a consensus RNA structure. A CM allowsindelsof unlimited length in the alignment. Terminals constitute states in the CM and the transition probabilities between the states is 1 if no indels are considered.[1]Grammars in a CM are as follows: The model has 6 possible states and each state grammar includes different types of secondary structure probabilities of the non-terminals. The states are connected by transitions. Ideally current node states connect to all insert states and subsequent node states connect to non-insert states. In order to allow insertion of more than one base insert states connect to themselves.[1] In order to score a CM model the inside-outside algorithms are used. CMs use a slightly different implementation of CYK. Log-odds emission scores for the optimum parse tree -log⁡e^{\displaystyle \log {\hat {e}}}- are calculated out of the emitting statesP,L,R{\displaystyle P,~L,~R}. Since these scores are a function of sequence length a more discriminative measure to recover an optimum parse tree probability score-log⁡P(x,π^|θ){\displaystyle \log {\text{P}}(x,{\hat {\pi }}|\theta )}- is reached by limiting the maximum length of the sequence to be aligned and calculating the log-odds relative to a null. The computation time of this step is linear to the database size and the algorithm has a memory complexity ofO(MaD+MbD2){\displaystyle O(M_{a}D+M_{b}D^{2})}.[1] The KH-99 algorithm by Knudsen and Hein lays the basis of the Pfold approach to predicting RNA secondary structure.[20]In this approach the parameterization requires evolutionary history information derived from an alignment tree in addition to probabilities of columns and mutations. The grammar probabilities are observed from a training dataset. In a structural alignment the probabilities of the unpaired bases columns and the paired bases columns are independent of other columns. By counting bases in single base positions and paired positions one obtains the frequencies of bases in loops and stems. For basepairXandYan occurrence ofXY{\displaystyle XY}is also counted as an occurrence ofYX{\displaystyle YX}. Identical basepairs such asXX{\displaystyle XX}are counted twice. By pairing sequences in all possible ways overall mutation rates are estimated. In order to recover plausible mutations a sequence identity threshold should be used so that the comparison is between similar sequences. This approach uses 85% identity threshold between pairing sequences. First single base positions differences -except for gapped columns- between sequence pairs are counted such that if the same position in two sequences had different basesX, Ythe count of the difference is incremented for each sequence. 
For unpaired bases a 4 X 4 mutation rate matrix is used that satisfies that the mutation flow from X to Y is reversible:[35] For basepairs a 16 X 16 rate distribution matrix is similarly generated.[36][37]The PCFG is used to predict the prior probability distribution of the structure whereas posterior probabilities are estimated by the inside-outside algorithm and the most likely structure is found by the CYK algorithm.[20] After calculating the column prior probabilities the alignment probability is estimated by summing over all possible secondary structures. Any columnCin a secondary structureσ{\displaystyle \sigma }for a sequenceDof lengthlsuch thatD=(C1,C2,...Cl){\displaystyle D=(C_{1},~C_{2},...C_{l})}can be scored with respect to the alignment treeTand the mutational modelM. The prior distribution given by the PCFG isP(σ|M){\displaystyle P(\sigma |M)}. The phylogenetic tree,Tcan be calculated from the model by maximum likelihood estimation. Note that gaps are treated as unknown bases and the summation can be done throughdynamic programming.[38] Each structure in the grammar is assigned production probabilities devised from the structures of the training dataset. These prior probabilities give weight to predictions accuracy.[21][32][33]The number of times each rule is used depends on the observations from the training dataset for that particular grammar feature. These probabilities are written in parentheses in the grammar formalism and each rule will have a total of 100%.[20]For instance: Given the prior alignment frequencies of the data the most likely structure from the ensemble predicted by the grammar can then be computed by maximizingP(σ|D,T,M){\displaystyle P(\sigma |D,T,M)}through the CYK algorithm. The structure with the highest predicted number of correct predictions is reported as the consensus structure.[20] PCFG based approaches are desired to be scalable and general enough. Compromising speed for accuracy needs to as minimal as possible. Pfold addresses the limitations of the KH-99 algorithm with respect to scalability, gaps, speed and accuracy.[20] Whereas PCFGs have proved powerful tools for predicting RNA secondary structure, usage in the field of protein sequence analysis has been limited. Indeed, the size of theamino acidalphabet and the variety of interactions seen in proteins make grammar inference much more challenging.[39]As a consequence, most applications offormal language theoryto protein analysis have been mainly restricted to the production of grammars of lower expressive power to model simple functional patterns based on local interactions.[40][41]Since protein structures commonly display higher-order dependencies including nested and crossing relationships, they clearly exceed the capabilities of any CFG.[39]Still, development of PCFGs allows expressing some of those dependencies and providing the ability to model a wider range of protein patterns.
https://en.wikipedia.org/wiki/Stochastic_context-free_grammar
Theusageof alanguageis the ways in which itswrittenandspokenvariations are routinely employed by its speakers; that is, it refers to "the collective habits of a language's native speakers",[1]as opposed to idealized models of how a language works (or should work) in the abstract. For instance,Fowlercharacterized usage as "the way in which a word or phrase is normally and correctly used" and as the "points ofgrammar,syntax,style, and the choice of words."[2]In everyday usage, language is used differently, depending on the situation and individual.[3]Individual language users can shape language structures and language usage based on their community.[4] In thedescriptivetradition of language analysis, by way of contrast, "correct" tends to mean functionally adequate for the purposes of the speaker or writer using it, and adequatelyidiomaticto be accepted by the listener or reader; usage is also, however, a concern for theprescriptivetradition, for which "correctness" is a matter of arbitrating style.[5][6] Common usage may be used as one of the criteria of laying outprescriptive normsforcodifiedstandard languageusage.[7] Everyday language users, including editors and writers, look at dictionaries, style guides, usage guides, and other published authoritative works to help inform their language decisions. This takes place because of the perception that Standard English is determined by language authorities.[8]For many language users, the dictionary is the source of correct language use, as far as accurate vocabulary and spelling go.[9]Moderndictionariesare not generally prescriptive, but they often include "usage notes" which may describe words as "formal", "informal", "slang", and so on.[10]"Despite occasional usage notes,lexicographersgenerally disclaim any intent to guide writers and editors on the thorny points of English usage."[1] According to Jeremy Butterfield, "The first person we know of who madeusagerefer to language wasDaniel Defoe, at the end of the seventeenth century". Defoe proposed the creation of alanguage societyof 36 individuals who would setprescriptivelanguage rules for the approximately six million English speakers.[5] The Latin equivalentususwas a crucial term in the research of Danish linguistsOtto JespersenandLouis Hjelmslev.[11]They used the term to designate usage that has widespread or significant acceptance among speakers of a language, regardless of its conformity to the sanctioned standard language norms.[12]
https://en.wikipedia.org/wiki/Usage
Incomputer science, aprogramming languageis said to havefirst-class functionsif it treatsfunctionsasfirst-class citizens. This means the language supports passing functions as arguments to other functions, returning them as the values from other functions, and assigning them to variables or storing them in data structures.[1]Some programming language theorists require support foranonymous functions(function literals) as well.[2]In languages with first-class functions, thenamesof functions do not have any special status; they are treated like ordinaryvariableswith afunction type.[3]The term was coined byChristopher Stracheyin the context of "functions as first-class citizens" in the mid-1960s.[4] First-class functions are a necessity for thefunctional programmingstyle, in which the use ofhigher-order functionsis a standard practice. A simple example of a higher-ordered function is themapfunction, which takes, as its arguments, a function and a list, and returns the list formed by applying the function to each member of the list. For a language to supportmap, it must support passing a function as an argument. There are certain implementation difficulties in passing functions as arguments or returning them as results, especially in the presence ofnon-local variablesintroduced innestedandanonymous functions. Historically, these were termed thefunarg problems, the name coming fromfunction argument.[5]In early imperative languages these problems were avoided by either not supporting functions as result types (e.g.ALGOL 60,Pascal) or omitting nested functions and thus non-local variables (e.g.C). The early functional languageLisptook the approach ofdynamic scoping, where non-local variables refer to the closest definition of that variable at the point where the function is executed, instead of where it was defined. Proper support forlexically scopedfirst-class functions was introduced inSchemeand requires handling references to functions asclosuresinstead of barefunction pointers,[4]which in turn makesgarbage collectiona necessity.[citation needed] In this section, we compare how particular programming idioms are handled in a functional language with first-class functions (Haskell) compared to an imperative language where functions are second-class citizens (C). In languages where functions are first-class citizens, functions can be passed as arguments to other functions in the same way as other values (a function taking another function as argument is called a higher-order function). In the languageHaskell: Languages where functions are not first-class often still allow one to write higher-order functions through the use of features such asfunction pointersordelegates. In the languageC: There are a number of differences between the two approaches that arenotdirectly related to the support of first-class functions. The Haskell sample operates onlists, while the C sample operates onarrays. Both are the most natural compound data structures in the respective languages and making the C sample operate on linked lists would have made it unnecessarily complex. This also accounts for the fact that the C function needs an additional parameter (giving the size of the array.) The C function updates the arrayin-place, returning no value, whereas in Haskell data structures arepersistent(a new list is returned while the old is left intact.) The Haskell sample usesrecursionto traverse the list, while the C sample usesiteration. 
Again, this is the most natural way to express this function in both languages, but the Haskell sample could easily have been expressed in terms of afoldand the C sample in terms of recursion. Finally, the Haskell function has apolymorphictype, as this is not supported by C we have fixed all type variables to the type constantint. In languages supporting anonymous functions, we can pass such a function as an argument to a higher-order function: In a language which does not support anonymous functions, we have to bind it to a name instead: Once we have anonymous or nested functions, it becomes natural for them to refer to variables outside of their body (callednon-local variables): If functions are represented with bare function pointers, we can not know anymore how the value that is outside of the function's body should be passed to it, and because of that a closure needs to be built manually. Therefore we can not speak of "first-class" functions here. Also note that themapis now specialized to functions referring to twoints outside of their environment. This can be set up more generally, but requires moreboilerplate code. Iffwould have been anested functionwe would still have run into the same problem and this is the reason they are not supported in C.[6] When returning a function, we are in fact returning its closure. In the C example any local variables captured by the closure will go out of scope once we return from the function that builds the closure. Forcing the closure at a later point will result in undefined behaviour, possibly corrupting the stack. This is known as theupwards funarg problem. Assigningfunctions tovariablesand storing them inside (global) datastructures potentially suffers from the same difficulties as returning functions. As one can test most literals and values for equality, it is natural to ask whether a programming language can support testing functions for equality. On further inspection, this question appears more difficult and one has to distinguish between several types of function equality:[7] Intype theory, the type of functions accepting values of typeAand returning values of typeBmay be written asA→BorBA. In theCurry–Howard correspondence,function typesare related tological implication; lambda abstraction corresponds to discharging hypothetical assumptions and function application corresponds to themodus ponensinference rule. Besides the usual case of programming functions, type theory also uses first-class functions to modelassociative arraysand similardata structures. Incategory-theoreticalaccounts of programming, the availability of first-class functions corresponds to theclosed categoryassumption. For instance, thesimply typed lambda calculuscorresponds to the internal language ofCartesian closed categories. Functional programming languages, such asErlang,Scheme,ML,Haskell,F#, andScala, all have first-class functions. WhenLisp, one of the earliest functional languages, was designed, not all aspects of first-class functions were then properly understood, resulting in functions being dynamically scoped. The laterSchemeandCommon Lispdialects do have lexically scoped first-class functions. Many scripting languages, includingPerl,Python,PHP,Lua,Tcl/Tk,JavaScriptandIo, have first-class functions. For imperative languages, a distinction has to be made between Algol and its descendants such as Pascal, the traditional C family, and the modern garbage-collected variants. 
The Algol family has allowed nested functions and higher-order functions taking functions as arguments, but not higher-order functions that return functions as results (except Algol 68, which allows this). The reason for this was that it was not known how to deal with non-local variables if a nested function was returned as a result (and Algol 68 produces runtime errors in such cases). The C family allowed both passing functions as arguments and returning them as results, but avoided any problems by not supporting nested functions. (The gcc compiler allows them as an extension.) As the usefulness of returning functions primarily lies in the ability to return nested functions that have captured non-local variables, rather than top-level functions, these languages are generally not considered to have first-class functions. Modern imperative languages often support garbage collection, making the implementation of first-class functions feasible. First-class functions have often been supported only in later revisions of the language, including C# 2.0 and Apple's Blocks extension to C, C++, and Objective-C. C++11 has added support for anonymous functions and closures to the language, but because of the non-garbage-collected nature of the language, special care has to be taken for non-local variables in functions to be returned as results (see below). Explicit partial application is possible with std::bind.
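As a concrete illustration of the idioms discussed above, here is a short sketch in Python, one of the scripting languages listed as having first-class functions (the function names are invented; this is not the article's original Haskell and C listings):

```python
# Passing a function as an argument (a higher-order, map-style function).
def apply_to_each(f, xs):
    return [f(x) for x in xs]

def double(x):
    return 2 * x

print(apply_to_each(double, [1, 2, 3]))          # [2, 4, 6]
print(apply_to_each(lambda x: x + 3, [1, 2]))    # anonymous function as argument -> [4, 5]

# Returning a function: the nested function closes over the non-local variable n,
# which stays alive after make_adder returns because the runtime is garbage collected,
# so the "upwards funarg" problem described for C does not arise here.
def make_adder(n):
    def add(x):
        return x + n
    return add

add10 = make_adder(10)
print(add10(5))                                  # 15

# Functions are ordinary values and can be stored in data structures.
ops = {"double": double, "add10": add10}
print(ops["add10"](32))                          # 42
```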
https://en.wikipedia.org/wiki/First-class_function
Inprobability theoryandstatistics, thegeneralized extreme value(GEV)distribution[2]is a family of continuousprobability distributionsdeveloped withinextreme value theoryto combine theGumbel,FréchetandWeibullfamilies also known as type I, II and III extreme value distributions. By theextreme value theoremthe GEV distribution is the only possible limit distribution of properly normalized maxima of a sequence of independent and identically distributed random variables.[3]Note that a limit distribution needs to exist, which requires regularity conditions on the tail of the distribution. Despite this, the GEV distribution is often used as an approximation to model the maxima of long (finite) sequences of random variables. In some fields of application the generalized extreme value distribution is known as theFisher–Tippett distribution, named afterR.A. FisherandL.H.C. Tippettwho recognised three different forms outlined below. However usage of this name is sometimes restricted to mean the special case of theGumbel distribution. The origin of the common functional form for all three distributions dates back to at leastJenkinson (1955),[4]though allegedly[3]it could also have been given byvon Mises (1936).[5] Using the standardized variables=x−μσ{\displaystyle s={\tfrac {x-\mu }{\sigma }}}, whereμ{\displaystyle \mu }, the location parameter, can be any real number, andσ>0{\displaystyle \sigma >0}is the scale parameter; the cumulative distribution function of the GEV distribution is then whereξ{\displaystyle \xi }, the shape parameter, can be any real number. Thus, forξ>0{\displaystyle \xi >0}, the expression is valid fors>−1ξ{\displaystyle s>-{\tfrac {1}{\xi }}}, while forξ<0{\displaystyle \xi <0}it is valid fors<−1ξ{\displaystyle s<-{\tfrac {1}{\xi }}}. In the first case,−1ξ{\displaystyle -{\tfrac {1}{\xi }}}is the negative, lower end-point, whereF{\displaystyle F}is0; in the second case,−1ξ{\displaystyle -{\tfrac {1}{\xi }}}is the positive, upper end-point, whereF{\displaystyle F}is 1. Forξ=0{\displaystyle \xi =0}, the second expression is formally undefined and is replaced with the first expression, which is the result of taking the limit of the second, asξ→0{\displaystyle \xi \to 0}in which cases{\displaystyle s}can be any real number. In the special case ofx=μ{\displaystyle x=\mu }, we haves=0{\displaystyle s=0}, soF(0;ξ)=e−1≈0.368{\displaystyle F(0;\xi )=\mathrm {e} ^{-1}\approx 0.368}regardless of the values ofξ{\displaystyle \xi }andσ{\displaystyle \sigma }. The probability density function of the standardized distribution is again valid fors>−1ξ{\displaystyle s>-{\tfrac {1}{\xi }}}in the caseξ>0{\displaystyle \xi >0}, and fors<−1ξ{\displaystyle s<-{\tfrac {1}{\xi }}}in the caseξ<0{\displaystyle \xi <0}. The density is zero outside of the relevant range. In the caseξ=0{\displaystyle \xi =0}, the density is positive on the whole real line. Since the cumulative distribution function is invertible, the quantile function for the GEV distribution has an explicit expression, namely and therefore the quantile density functionq=dQdp{\displaystyle q={\tfrac {\mathrm {d} Q}{\mathrm {d} p}}}is valid forσ>0{\displaystyle \sigma >0}and for any realξ{\displaystyle \xi }. 
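As a numerical illustration of the distribution function and quantile function just described, here is a minimal sketch written directly from the standard GEV formulas, with the ξ = 0 case handled via the Gumbel limit (an illustration, not a statistics-library-grade implementation):

```python
import math

def gev_cdf(x, mu=0.0, sigma=1.0, xi=0.0):
    """F(x; mu, sigma, xi) for the generalized extreme value distribution."""
    s = (x - mu) / sigma
    if xi == 0.0:                                  # Gumbel limit
        return math.exp(-math.exp(-s))
    t = 1.0 + xi * s
    if t <= 0.0:                                   # outside the support
        return 0.0 if xi > 0 else 1.0
    return math.exp(-t ** (-1.0 / xi))

def gev_quantile(p, mu=0.0, sigma=1.0, xi=0.0):
    """Q(p) = F^{-1}(p) for 0 < p < 1."""
    if xi == 0.0:
        return mu - sigma * math.log(-math.log(p))
    return mu + sigma * ((-math.log(p)) ** (-xi) - 1.0) / xi

print(round(gev_cdf(0.0, xi=0.5), 3))                          # ~0.368 = e^-1 at x = mu, as noted above
print(round(gev_cdf(gev_quantile(0.9, xi=0.2), xi=0.2), 3))    # 0.9 (quantile/CDF round trip)
```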
[6] Using g_k ≡ Γ(1 − kξ) for k ∈ {1, 2, 3, 4}, where Γ(·) is the gamma function, some simple statistics of the distribution are given by:[citation needed] The skewness is The excess kurtosis is: The shape parameter ξ governs the tail behavior of the distribution. The sub-families defined by the three cases ξ = 0, ξ > 0, and ξ < 0 correspond, respectively, to the Gumbel, Fréchet, and Weibull families, whose cumulative distribution functions are displayed below. The subsections below remark on properties of these distributions. The theory here relates to data maxima, and the distribution being discussed is an extreme value distribution for maxima. A generalised extreme value distribution for data minima can be obtained, for example, by substituting −x for x in the distribution function and subtracting the cumulative distribution from one: that is, replace F(x) with 1 − F(−x). Doing so yields yet another family of distributions. The ordinary Weibull distribution arises in reliability applications and is obtained from the distribution here by using the variable t = μ − x, which gives a strictly positive support, in contrast to the use in the formulation of extreme value theory here. This arises because the ordinary Weibull distribution is used for cases that deal with data minima rather than data maxima. The distribution here has an additional parameter compared to the usual form of the Weibull distribution and, in addition, is reversed so that the distribution has an upper bound rather than a lower bound. Importantly, in applications of the GEV, the upper bound is unknown and so must be estimated, whereas when applying the ordinary Weibull distribution in reliability applications the lower bound is usually known to be zero. Note the differences in the ranges of interest for the three extreme value distributions: the Gumbel is unlimited, the Fréchet has a lower limit, while the reversed Weibull has an upper limit. More precisely, univariate extreme value theory describes which of the three is the limiting law according to the initial law X and in particular depending on the original distribution's tail. One can link type I to types II and III in the following way: if the cumulative distribution function of some random variable X is of type II, with the positive numbers as support, i.e. F(x; 0, σ, α), then the cumulative distribution function of ln X is of type I, namely F(x; ln σ, 1/α, 0). Similarly, if the cumulative distribution function of X is of type III, with the negative numbers as support, i.e. F(x; 0, σ, −α), then the cumulative distribution function of ln(−X) is of type I, namely F(x; −ln σ, 1/α, 0). Multinomial logit models, and certain other types of logistic regression, can be phrased as latent variable models with error variables distributed as Gumbel distributions (type I generalized extreme value distributions).
This phrasing is common in the theory of discrete choice models, which include logit models, probit models, and various extensions of them, and derives from the fact that the difference of two type-I GEV-distributed variables follows a logistic distribution, of which the logit function is the quantile function. The type-I GEV distribution thus plays the same role in these logit models as the normal distribution does in the corresponding probit models. The cumulative distribution function of the generalized extreme value distribution solves the stability postulate equation.[citation needed] The generalized extreme value distribution is a special case of a max-stable distribution, and is a transformation of a min-stable distribution. Let {X_i | 1 ≤ i ≤ n} be i.i.d. normally distributed random variables with mean 0 and variance 1. The Fisher–Tippett–Gnedenko theorem[12] tells us that max{X_i | 1 ≤ i ≤ n} ∼ GEV(μ_n, σ_n, 0), where
\[ \mu_n = \Phi^{-1}\left(1 - \frac{1}{n}\right), \qquad \sigma_n = \Phi^{-1}\left(1 - \frac{1}{n\,\mathrm{e}}\right) - \Phi^{-1}\left(1 - \frac{1}{n}\right). \]
This allows us to estimate, for example, the mean of max{X_i | 1 ≤ i ≤ n} from the mean of the GEV distribution:
\[ \operatorname{E}\left[\max\{X_i \mid 1 \leq i \leq n\}\right] \approx \mu_n + \gamma_{\mathsf{E}}\,\sigma_n = (1 - \gamma_{\mathsf{E}})\,\Phi^{-1}\left(1 - \frac{1}{n}\right) + \gamma_{\mathsf{E}}\,\Phi^{-1}\left(1 - \frac{1}{e\,n}\right) = \sqrt{\log\left(\frac{n^{2}}{2\pi \log\left(\frac{n^{2}}{2\pi}\right)}\right)} \cdot \left(1 + \frac{\gamma}{\log n} + o\!\left(\frac{1}{\log n}\right)\right), \]
where γ_E is the Euler–Mascheroni constant. 4. Let X ∼ Weibull(σ, μ); then the cumulative distribution of g(X) = μ(1 − σ log(X/σ)) is: 5. Let X ∼ Exponential(1); then the cumulative distribution of g(X) = μ − σ log X is:
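Returning to the normal-maxima example above, a quick Monte Carlo check of the approximation E[max] ≈ μ_n + γ_E σ_n, using only the Python standard library (the values of n, the number of trials, and the rounding are arbitrary choices made for this illustration):

```python
import math
import random
import statistics

n, trials = 1000, 2000
gamma_e = 0.5772156649            # Euler–Mascheroni constant

# Simulated mean of the maximum of n independent standard normal variables.
sim = sum(max(random.gauss(0.0, 1.0) for _ in range(n)) for _ in range(trials)) / trials

# Gumbel-based approximation E[max] ~ mu_n + gamma_E * sigma_n from the formulas above.
ppf = statistics.NormalDist().inv_cdf
mu_n = ppf(1 - 1 / n)
sigma_n = ppf(1 - 1 / (n * math.e)) - ppf(1 - 1 / n)

print(round(sim, 2), round(mu_n + gamma_e * sigma_n, 2))   # both come out near 3.2 for n = 1000
```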
https://en.wikipedia.org/wiki/Generalized_extreme_value_distribution
On adifferentiable manifold, theexterior derivativeextends the concept of thedifferentialof a function todifferential formsof higher degree. The exterior derivative was first described in its current form byÉlie Cartanin 1899. The resulting calculus, known asexterior calculus, allows for a natural, metric-independent generalization ofStokes' theorem,Gauss's theorem, andGreen's theoremfrom vector calculus. If a differentialk-form is thought of as measuring thefluxthrough an infinitesimalk-parallelotopeat each point of the manifold, then its exterior derivative can be thought of as measuring the net flux through the boundary of a(k+ 1)-parallelotope at each point. The exterior derivative of adifferential formof degreek(also differentialk-form, or justk-form for brevity here) is a differential form of degreek+ 1. Iffis asmooth function(a0-form), then the exterior derivative offis thedifferentialoff. That is,dfis the unique1-formsuch that for every smoothvector fieldX,df(X) =dXf, wheredXfis thedirectional derivativeoffin the direction ofX. The exterior product of differential forms (denoted with the same symbol∧) is defined as theirpointwiseexterior product. There are a variety of equivalent definitions of the exterior derivative of a generalk-form. The exterior derivative is defined to be the uniqueℝ-linear mapping fromk-forms to(k+ 1)-forms that has the following properties: Iff{\displaystyle f}andg{\displaystyle g}are two0{\displaystyle 0}-forms (functions), then from the third property for the quantityd(f∧g){\displaystyle d(f\wedge g)}, which is simplyd(fg){\displaystyle d(fg)}, the familiar product ruled(fg)=gdf+fdg{\displaystyle d(fg)=g\,df+f\,dg}is recovered. The third property can be generalised, for instance, ifα{\displaystyle \alpha }is ak{\displaystyle k}-form,β{\displaystyle \beta }is anl{\displaystyle l}-form andγ{\displaystyle \gamma }is anm{\displaystyle m}-form, then Alternatively, one can work entirely in alocal coordinate system(x1, ...,xn). The coordinate differentialsdx1, ...,dxnform a basis of the space of one-forms, each associated with a coordinate. Given amulti-indexI= (i1, ...,ik)with1 ≤ip≤nfor1 ≤p≤k(and denotingdxi1∧ ... ∧dxikwithdxI), the exterior derivative of a (simple)k-form overℝnis defined as (using theEinstein summation convention). The definition of the exterior derivative is extendedlinearlyto a generalk-form (which is expressible as a linear combination of basic simplek{\displaystyle k}-forms) where each of the components of the multi-indexIrun over all the values in{1, ...,n}. Note that wheneverjequals one of the components of the multi-indexIthendxj∧dxI= 0(seeExterior product). The definition of the exterior derivative in local coordinates follows from the precedingdefinition in terms of axioms. Indeed, with thek-formφas defined above, Here, we have interpretedgas a0-form, and then applied the properties of the exterior derivative. This result extends directly to the generalk-formωas In particular, for a1-formω, the components ofdωinlocal coordinatesare Caution: There are two conventions regarding the meaning ofdxi1∧⋯∧dxik{\displaystyle dx^{i_{1}}\wedge \cdots \wedge dx^{i_{k}}}. 
Most current authors[citation needed]have the convention that while in older texts like Kobayashi and Nomizu or Helgason Alternatively, an explicit formula can be given[1]for the exterior derivative of ak-formω, when paired withk+ 1arbitrary smoothvector fieldsV0,V1, ...,Vk: where[Vi,Vj]denotes theLie bracketand a hat denotes the omission of that element: In particular, whenωis a1-form we have thatdω(X,Y) =dX(ω(Y)) −dY(ω(X)) −ω([X,Y]). Note:With the conventions of e.g., Kobayashi–Nomizu and Helgason the formula differs by a factor of⁠1/k+ 1⁠: Example 1.Considerσ=udx1∧dx2over a1-form basisdx1, ...,dxnfor a scalar fieldu. The exterior derivative is: The last formula, where summation starts ati= 3, follows easily from the properties of theexterior product. Namely,dxi∧dxi= 0. Example 2.Letσ=udx+vdybe a1-form defined overℝ2. By applying the above formula to each term (considerx1=xandx2=y) we have the sum IfMis a compact smooth orientablen-dimensional manifold with boundary, andωis an(n− 1)-form onM, thenthe generalized form of Stokes' theoremstates that Intuitively, if one thinks ofMas being divided into infinitesimal regions, and one adds the flux through the boundaries of all the regions, the interior boundaries all cancel out, leaving the total flux through the boundary ofM. Ak-formωis calledclosedifdω= 0; closed forms are thekernelofd.ωis calledexactifω=dαfor some(k− 1)-formα; exact forms are theimageofd. Becaused2= 0, every exact form is closed. ThePoincaré lemmastates that in a contractible region, the converse is true. Because the exterior derivativedhas the property thatd2= 0, it can be used as thedifferential(coboundary) to definede Rham cohomologyon a manifold. Thek-th de Rham cohomology (group) is the vector space of closedk-forms modulo the exactk-forms; as noted in the previous section, the Poincaré lemma states that these vector spaces are trivial for a contractible region, fork> 0. Forsmooth manifolds, integration of forms gives a natural homomorphism from the de Rham cohomology to the singular cohomology overℝ. The theorem of de Rham shows that this map is actually an isomorphism, a far-reaching generalization of the Poincaré lemma. As suggested by the generalized Stokes' theorem, the exterior derivative is the "dual" of theboundary mapon singular simplices. The exterior derivative is natural in the technical sense: iff:M→Nis a smooth map andΩkis the contravariant smoothfunctorthat assigns to each manifold the space ofk-forms on the manifold, then the following diagram commutes sod(f∗ω) =f∗dω, wheref∗denotes thepullbackoff. This follows from thatf∗ω(·), by definition, isω(f∗(·)),f∗being thepushforwardoff. Thusdis anatural transformationfromΩktoΩk+1. Mostvector calculusoperators are special cases of, or have close relationships to, the notion of exterior differentiation. Asmooth functionf:M→ ℝon a real differentiable manifoldMis a0-form. The exterior derivative of this0-form is the1-formdf. When an inner product⟨·,·⟩is defined, thegradient∇fof a functionfis defined as the unique vector inVsuch that its inner product with any element ofVis the directional derivative offalong the vector, that is such that That is, where♯denotes themusical isomorphism♯:V∗→Vmentioned earlier that is induced by the inner product. The1-formdfis a section of thecotangent bundle, that gives a local linear approximation tofin the cotangent space at each point. 
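A small symbolic check of Example 2 and of the identity d² = 0, done componentwise with plain partial derivatives (an illustrative sketch; the 1-form is represented only by its coefficient functions rather than with a differential-geometry library):

```python
import sympy as sp

x, y = sp.symbols("x y")

def d_of_one_form(u, v):
    """For sigma = u dx + v dy on R^2, d(sigma) = (dv/dx - du/dy) dx ^ dy (Example 2)."""
    return sp.simplify(sp.diff(v, x) - sp.diff(u, y))

# A concrete 1-form: sigma = (x*y) dx + (x**2 + y) dy
print(d_of_one_form(x * y, x**2 + y))                # 2*x - x = x, so d(sigma) = x dx ^ dy

# d^2 = 0: for any smooth f, the exterior derivative of df = f_x dx + f_y dy vanishes,
# because mixed partial derivatives commute.
f = sp.sin(x * y) + x**3 * y
print(d_of_one_form(sp.diff(f, x), sp.diff(f, y)))   # 0
```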
A vector fieldV= (v1,v2, ...,vn)onℝnhas a corresponding(n− 1)-form wheredxi^{\displaystyle {\widehat {dx^{i}}}}denotes the omission of that element. (For instance, whenn= 3, i.e. in three-dimensional space, the2-formωVis locally thescalar triple productwithV.) The integral ofωVover a hypersurface is thefluxofVover that hypersurface. The exterior derivative of this(n− 1)-form is then-form A vector fieldVonℝnalso has a corresponding1-form Locally,ηVis thedot productwithV. The integral ofηValong a path is theworkdone against−Valong that path. Whenn= 3, in three-dimensional space, the exterior derivative of the1-formηVis the2-form The standardvector calculusoperators can be generalized for anypseudo-Riemannian manifold, and written in coordinate-free notation as follows: where⋆is theHodge star operator,♭and♯are themusical isomorphisms,fis ascalar fieldandFis avector field. Note that the expression forcurlrequires♯to act on⋆d(F♭), which is a form of degreen− 2. A natural generalization of♯tok-forms of arbitrary degree allows this expression to make sense for anyn.
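For reference, the standard coordinate-free expressions alluded to above are commonly written as follows (restated here for readability; on ℝ³ with the Euclidean metric and standard orientation they reduce to the familiar gradient, divergence and curl):

\[ \operatorname{grad} f = (df)^{\sharp}, \qquad \operatorname{div} F = {\star}\, d\, {\star} (F^{\flat}), \qquad \operatorname{curl} F = \bigl( {\star}\, d (F^{\flat}) \bigr)^{\sharp}, \]

where, as noted in the text, the expression for curl is directly meaningful as written only when ⋆d(F♭) is a 1-form, i.e. when n = 3.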
https://en.wikipedia.org/wiki/Exterior_derivative
Incomputer science,model checkingorproperty checkingis a method for checking whether afinite-state modelof a system meets a givenspecification(also known ascorrectness). This is typically associated withhardwareorsoftware systems, where the specification contains liveness requirements (such as avoidance oflivelock) as well as safety requirements (such as avoidance of states representing asystem crash). In order to solve such a problemalgorithmically, both the model of the system and its specification are formulated in some precise mathematical language. To this end, the problem is formulated as a task inlogic, namely to check whether astructuresatisfies a given logical formula. This general concept applies to many kinds of logic and many kinds of structures. A simple model-checking problem consists of verifying whether a formula in thepropositional logicis satisfied by a given structure. Property checking is used forverificationwhen two descriptions are not equivalent. Duringrefinement, the specification is complemented with details that areunnecessaryin the higher-level specification. There is no need to verify the newly introduced properties against the original specification since this is not possible. Therefore, the strict bi-directional equivalence check is relaxed to a one-way property check. The implementation or design is regarded as a model of the system, whereas the specifications are properties that the model must satisfy.[2] An important class of model-checking methods has been developed for checking models ofhardwareandsoftwaredesigns where the specification is given by atemporal logicformula. Pioneering work in temporal logic specification was done byAmir Pnueli, who received the 1996 Turing award for "seminal work introducing temporal logic into computing science".[3]Model checking began with the pioneering work ofE. M. Clarke,E. A. Emerson,[4][5][6]by J. P. Queille, andJ. Sifakis.[7]Clarke, Emerson, and Sifakis shared the 2007Turing Awardfor their seminal work founding and developing the field of model checking.[8][9] Model checking is most often applied to hardware designs. For software, because of undecidability (seecomputability theory) the approach cannot be fully algorithmic, apply to all systems, and always give an answer; in the general case, it may fail to prove or disprove a given property. Inembedded-systemshardware, it is possible to validate a specification delivered, e.g., by means ofUML activity diagrams[10]or control-interpretedPetri nets.[11] The structure is usually given as a source code description in an industrialhardware description languageor a special-purpose language. Such a program corresponds to afinite-state machine(FSM), i.e., adirected graphconsisting of nodes (orvertices) andedges. A set of atomic propositions is associated with each node, typically stating which memory elements are one. Thenodesrepresent states of a system, the edges represent possible transitions that may alter the state, while the atomic propositions represent the basic properties that hold at a point of execution.[12] Formally, the problem can be stated as follows: given a desired property, expressed as a temporal logic formulap{\displaystyle p}, and a structureM{\displaystyle M}with initial states{\displaystyle s}, decide ifM,s⊨p{\displaystyle M,s\models p}. IfM{\displaystyle M}is finite, as it is in hardware, model checking reduces to agraph search. 
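Since model checking a safety property of a finite structure reduces to a graph search, the idea can be sketched in a few lines of Python; the model here is assumed to be given as a set of initial states, a successor function and a predicate marking bad states (all names are illustrative, not any particular tool's API):

from collections import deque

def check_invariant(initial_states, successors, is_bad):
    # Explicit-state reachability: return a counterexample path to a bad
    # state, or None if the invariant holds on every reachable state.
    frontier = deque((s, (s,)) for s in initial_states)
    visited = set(initial_states)
    while frontier:
        state, path = frontier.popleft()
        if is_bad(state):
            return path                      # property violated
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + (nxt,)))
    return None                              # invariant holds on all reachable states

# Toy model: a counter modulo 8 that must never reach 5.
print(check_invariant({0}, lambda s: {(s + 1) % 8}, lambda s: s == 5))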
Instead of enumerating reachable states one at a time, the state space can sometimes be traversed more efficiently by considering large numbers of states at a single step. When such state-space traversal is based on representations of a set of states and transition relations as logical formulas,binary decision diagrams(BDD) or other related data structures, the model-checking method issymbolic. Historically, the first symbolic methods usedBDDs. After the success ofpropositional satisfiabilityin solving theplanningproblem inartificial intelligence(seesatplan) in 1996, the same approach was generalized to model checking forlinear temporal logic(LTL): the planning problem corresponds to model checking for safety properties. This method is known as bounded model checking.[13]The success ofBoolean satisfiability solversin bounded model checking led to the widespread use of satisfiability solvers in symbolic model checking.[14] One example of such a system requirement:Between the time an elevator is called at a floor and the time it opens its doors at that floor, the elevator can arrive at that floor at most twice. The authors of "Patterns in Property Specification for Finite-State Verification" translate this requirement into the following LTL formula:[15] Here,◻{\displaystyle \Box }should be read as "always",◊{\displaystyle \Diamond }as "eventually",U{\displaystyle {\mathcal {U}}}as "until" and the other symbols are standard logical symbols,∨{\displaystyle \lor }for "or",∧{\displaystyle \land }for "and" and¬{\displaystyle \lnot }for "not". Model-checking tools face a combinatorial blow up of the state-space, commonly known as thestate explosion problem, that must be addressed to solve most real-world problems. There are several approaches to combat this problem. Model-checking tools were initially developed to reason about the logical correctness ofdiscrete statesystems, but have since been extended to deal with real-time and limited forms ofhybrid systems. Model checking is also studied in the field ofcomputational complexity theory. Specifically, afirst-order logicalformula is fixed withoutfree variablesand the followingdecision problemis considered: Given a finiteinterpretation, for instance, one described as arelational database, decide whether the interpretation is a model of the formula. This problem is in thecircuit classAC0. It istractablewhen imposing some restrictions on the input structure: for instance, requiring that it hastreewidthbounded by a constant (which more generally implies the tractability of model checking formonadic second-order logic), bounding thedegreeof every domain element, and more general conditions such asbounded expansion, locally bounded expansion, and nowhere-dense structures.[21]These results have been extended to the task ofenumeratingall solutions to a first-order formula with free variables.[citation needed] Here is a list of significant model-checking tools:
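Returning to the bounded model checking idea described above, a real bounded model checker encodes the k-step unrolling of the transition relation as a propositional formula and hands it to a SAT solver; the following illustrative Python sketch simply enumerates the same unrolling (all names are hypothetical):

from itertools import product

def bounded_model_check(initial, transition, bad, k, state_vars):
    # Enumerate all valuations of the state variables once, then unroll the
    # transition relation k steps looking for a reachable bad state.
    states = [dict(zip(state_vars, bits)) for bits in product([0, 1], repeat=len(state_vars))]
    frontier = [s for s in states if initial(s)]
    for step in range(k + 1):
        if any(bad(s) for s in frontier):
            return f"bad state reachable within {step} steps"
        frontier = [t for s in frontier for t in states if transition(s, t)]
    return f"no bad state reachable within {k} steps"

# Toy 2-bit counter that wraps after 2 and must never reach the value 3.
print(bounded_model_check(
    initial=lambda s: s["b1"] == 0 and s["b0"] == 0,
    transition=lambda s, t: (2 * t["b1"] + t["b0"]) == (2 * s["b1"] + s["b0"] + 1) % 3,
    bad=lambda s: s["b1"] == 1 and s["b0"] == 1,
    k=5,
    state_vars=["b1", "b0"]))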
https://en.wikipedia.org/wiki/Temporal_logic_in_finite-state_verification
Mixed radixnumeral systemsarenon-standard positional numeral systemsin which the numericalbasevaries from position to position. Such numerical representation applies when a quantity is expressed using a sequence of units that are each a multiple of the next smaller one, but not by the same factor. Such units are common for instance in measuring time; a time of 32 weeks, 5 days, 7 hours, 45 minutes, 15 seconds, and 500 milliseconds might be expressed as a number of minutes in mixed-radix notation as: or as In the tabular format, the digits are written above their base, and asemicolonindicates theradix point. In numeral format, each digit has its associated base attached as a subscript, and the radix point is marked by afull stop or period. The base for each digit is the number of corresponding units that make up the next larger unit. As a consequence there is no base (written as ∞) for the first (most significant) digit, since here the "next larger unit" does not exist (and one could not add a larger unit of "month" or "year" to the sequence of units, as they are not integer multiples of "week"). The most familiar example of mixed-radix systems is in timekeeping and calendars. Western time radices include, both cardinally and ordinally,decimalyears, decades, and centuries,septenaryfor days in a week,duodecimalmonths in a year, bases 28–31 for days within a month, as well as base 52 for weeks in a year. Time is further divided into hours counted inbase 24hours,sexagesimalminutes within an hour and seconds within a minute, with decimal fractions of the latter. A standard form for dates is2021-04-10 16:31:15, which would be a mixed radix number by this definition, with the consideration that the quantities of days vary both per month, and with leap years. One proposed calendar instead usesbase 13months,quaternaryweeks, and septenary days. A mixed radix numeral system is often best expressed with a table. A table describing what can be understood as the 604800 seconds of a week is as follows, with the week beginning on hour 0 of day 0 (midnight on Sunday): In this numeral system, the mixed radix numeral 37172451605760seconds would be interpreted as 17:51:57 on Wednesday, and 0702402602460would be 00:02:24 on Sunday. Ad hoc notations for mixed radix numeral systems are commonplace. TheMaya calendarconsists of several overlapping cycles of different radices. A short counttzolk'inoverlapsbase 20named days withtridecimalnumbered days. Ahaab'consists of vigesimal days,octodecimalmonths, and base-52 years forming around. In addition, along countof vigesimal days, octodecimalwinal, then base 24tun,k'atun,b'ak'tun, etc., tracks historical dates. A second example of a mixed-radix numeral system in current use is in the design and use of currency, where a limited set of denominations are printed or minted with the objective of being able to represent any monetary quantity; the amount of money is then represented by the number of coins or banknotes of each denomination. When deciding which denominations to create (and hence which radices to mix), a compromise is aimed for between a minimal number of different denominations, and a minimal number of individual pieces of coinage required to represent typical quantities. So, for example, in the UK, banknotes are printed for £50, £20, £10 and £5, and coins are minted for £2, £1, 50p, 20p, 10p, 5p, 2p and 1p—these follow the1-2-5 series of preferred values. 
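A sketch of the conversion between an integer and its mixed-radix digits, using the seconds-in-a-week example from the table above (function names are illustrative):

def to_mixed_radix(value, bases):
    # bases from least to most significant, e.g. [60, 60, 24] for seconds,
    # minutes, hours; the most significant digit has no base (written ∞ above).
    digits = []
    for base in bases:
        value, digit = divmod(value, base)
        digits.append(digit)
    digits.append(value)
    return digits[::-1]

def from_mixed_radix(digits, bases):
    # Inverse conversion; digits are given most-significant first.
    value = digits[0]
    for digit, base in zip(digits[1:], bases[::-1]):
        value = value * base + digit
    return value

# 17:51:57 on Wednesday (day 3, counting Sunday as day 0 of the week)
secs = from_mixed_radix([3, 17, 51, 57], [60, 60, 24])
print(secs)                                  # 323517 seconds into the week
print(to_mixed_radix(secs, [60, 60, 24]))    # [3, 17, 51, 57]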
Prior todecimalisation, monetary amounts in the UK were described in terms of pounds, shillings, and pence, with 12 pence per shilling and 20 shillings per pound, so that "£1 7s 6d", for example, corresponded to the mixed-radix numeral 1∞720612. United States customary unitsare generally mixed radix systems, with multipliers varying from one size unit to the next in the same manner that units of time do. Mixed-radix representation is also relevant to mixed-radix versions of theCooley–Tukey FFT algorithm, in which the indices of the input values are expanded in a mixed-radix representation, the indices of the output values are expanded in a corresponding mixed-radix representation with the order of the bases and digits reversed, and each subtransform can be regarded as a Fourier transform in one digit for all values of the remaining digits. Mixed-radix numbers of the same base can be manipulated using a generalization of manual arithmetic algorithms. Conversion of values from one mixed base to another is easily accomplished by first converting the place values of the one system into the other, and then applying the digits from the one system against these. APLandJinclude operators to convert to and from mixed-radix systems. Another proposal is the so-calledfactorialnumber system: For example, the biggest number that could be represented with six digits would be 543210 which equals 719 indecimal: 5×5! + 4×4! + 3×3! + 2×2! + 1×1! It might not be clear at first sight but the factorial based numbering system is unambiguous and complete. Every number can be represented in one and only one way because the sum of respective factorials multiplied by the index is always the next factorial minus one: There is a natural mapping between the integers 0, ...,n! − 1 andpermutationsofnelements in lexicographic order, which uses the factorial representation of the integer, followed by an interpretation as aLehmer code. The above equation is a particular case of the following general rule for any radix (either standard or mixed) base representation which expresses the fact that any radix (either standard or mixed) base representation is unambiguous and complete. Every number can be represented in one and only one way because the sum of respective weights multiplied by the index is always the next weight minus one: which can be easily proved withmathematical induction. Another proposal is the number system with successive prime numbers as radix, whose place values areprimorialnumbers, considered byS. S. Pillai,[1]Richard K. Guy(sequenceA049345in theOEIS), and other authors:[2][3][4]
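A small sketch of the factorial number system described above, converting 719 to the digit string 543210 and back (names are illustrative):

import math

def to_factorial_base(n):
    # Rightmost digit (place value 0!) is always 0; the digit at place i! is at most i.
    digits = [0]
    base = 2
    while n:
        n, d = divmod(n, base)
        digits.append(d)
        base += 1
    return digits[::-1]

def from_factorial_base(digits):
    length = len(digits)
    return sum(d * math.factorial(length - 1 - i) for i, d in enumerate(digits))

print(to_factorial_base(719))                   # [5, 4, 3, 2, 1, 0]
print(from_factorial_base([5, 4, 3, 2, 1, 0]))  # 719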
https://en.wikipedia.org/wiki/Mixed_radix
Inapplied mathematics,k-SVDis adictionary learningalgorithm for creating a dictionary forsparse representations, via asingular value decompositionapproach.k-SVD is a generalization of thek-means clusteringmethod, and it works by iteratively alternating between sparse coding the input data based on the current dictionary, and updating the atoms in the dictionary to better fit the data. It is structurally related to theexpectation–maximization (EM) algorithm.[1][2]k-SVD can be found widely in use in applications such as image processing, audio processing, biology, and document analysis. k-SVD is a kind of generalization ofk-means, as follows. Thek-means clusteringcan be also regarded as a method ofsparse representation. That is, finding the best possible codebook to represent the data samples{yi}i=1M{\displaystyle \{y_{i}\}_{i=1}^{M}}bynearest neighbor, by solving which is nearly equivalent to which is k-means that allows "weights". The letter F denotes theFrobenius norm. The sparse representation termxi=ek{\displaystyle x_{i}=e_{k}}enforcesk-means algorithm to use only one atom (column) in dictionaryD{\displaystyle D}. To relax this constraint, the target of thek-SVD algorithm is to represent the signal as a linear combination of atoms inD{\displaystyle D}. Thek-SVD algorithm follows the construction flow of thek-means algorithm. However, in contrast tok-means, in order to achieve a linear combination of atoms inD{\displaystyle D}, the sparsity term of the constraint is relaxed so that the number of nonzero entries of each columnxi{\displaystyle x_{i}}can be more than 1, but less than a numberT0{\displaystyle T_{0}}. So, the objective function becomes or in another objective form In thek-SVD algorithm, theD{\displaystyle D}is first fixed and the best coefficient matrixX{\displaystyle X}is found. As finding the truly optimalX{\displaystyle X}is hard, we use an approximation pursuit method. Any algorithm such as OMP, the orthogonalmatching pursuitcan be used for the calculation of the coefficients, as long as it can supply a solution with a fixed and predetermined number of nonzero entriesT0{\displaystyle T_{0}}. After the sparse coding task, the next is to search for a better dictionaryD{\displaystyle D}. However, finding the whole dictionary all at a time is impossible, so the process is to update only one column of the dictionaryD{\displaystyle D}each time, while fixingX{\displaystyle X}. The update of thek{\displaystyle k}-th column is done by rewriting the penalty term as wherexkT{\displaystyle x_{k}^{\text{T}}}denotes thek-th row ofX. By decomposing the multiplicationDX{\displaystyle DX}into sum ofK{\displaystyle K}rank 1 matrices, we can assume the otherK−1{\displaystyle K-1}terms are assumed fixed, and thek{\displaystyle k}-th remains unknown. After this step, we can solve the minimization problem by approximate theEk{\displaystyle E_{k}}term with arank−1{\displaystyle rank-1}matrix usingsingular value decomposition, then updatedk{\displaystyle d_{k}}with it. However, the new solution for the vectorxkT{\displaystyle x_{k}^{\text{T}}}is not guaranteed to be sparse. To cure this problem, defineωk{\displaystyle \omega _{k}}as which points to examples{yi}i=1N{\displaystyle \{y_{i}\}_{i=1}^{N}}that use atomdk{\displaystyle d_{k}}(also the entries ofxi{\displaystyle x_{i}}that is nonzero). 
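The sparse-coding stage can be any pursuit algorithm that respects the bound T0 on nonzero entries; the following is a minimal orthogonal matching pursuit sketch in Python/NumPy (it assumes the columns of D are l2-normalised and T0 ≥ 1, and is only an illustration, not the implementation used in the original paper):

import numpy as np

def omp(D, y, T0):
    # Greedily pick at most T0 atoms (columns of D) and least-squares fit y on them.
    residual, support = y.astype(float).copy(), []
    x = np.zeros(D.shape[1])
    for _ in range(T0):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x[support] = coeffs
    return x

# Sparse-code every column of a data matrix Y against dictionary D:
# X = np.column_stack([omp(D, y, T0=3) for y in Y.T])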
Then, defineΩk{\displaystyle \Omega _{k}}as a matrix of sizeN×|ωk|{\displaystyle N\times |\omega _{k}|}, with ones on the(i,ωk(i))th{\displaystyle (i,\omega _{k}(i)){\text{th}}}entries and zeros otherwise. When multiplyingx~kT=xkTΩk{\displaystyle {\tilde {x}}_{k}^{\text{T}}=x_{k}^{\text{T}}\Omega _{k}}, this shrinks the row vectorxkT{\displaystyle x_{k}^{\text{T}}}by discarding the zero entries. Similarly, the multiplicationY~k=YΩk{\displaystyle {\tilde {Y}}_{k}=Y\Omega _{k}}is the subset of the examples that are current using thedk{\displaystyle d_{k}}atom. The same effect can be seen onE~k=EkΩk{\displaystyle {\tilde {E}}_{k}=E_{k}\Omega _{k}}. So the minimization problem as mentioned before becomes and can be done by directly using SVD. SVD decomposesE~k{\displaystyle {\tilde {E}}_{k}}intoUΔVT{\displaystyle U\Delta V^{\text{T}}}. The solution fordk{\displaystyle d_{k}}is the first column of U, the coefficient vectorx~kT{\displaystyle {\tilde {x}}_{k}^{\text{T}}}as the first column ofV×Δ(1,1){\displaystyle V\times \Delta (1,1)}. After updating the whole dictionary, the process then turns to iteratively solve X, then iteratively solve D. Choosing an appropriate "dictionary" for a dataset is a non-convex problem, andk-SVD operates by an iterative update which does not guarantee to find the global optimum.[2]However, this is common to other algorithms for this purpose, andk-SVD works fairly well in practice.[2][better source needed]
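A minimal NumPy sketch of the atom-update step just described: the residual is restricted to the signals that currently use atom k, and that atom and its coefficient row are refit with a rank-1 SVD (variable names are illustrative):

import numpy as np

def update_atom(Y, D, X, k):
    omega = np.nonzero(X[k, :])[0]                  # signals that use atom k
    if omega.size == 0:
        return D, X                                 # atom k is unused; leave it
    E_k = Y - D @ X + np.outer(D[:, k], X[k, :])    # error when atom k is removed
    U, s, Vt = np.linalg.svd(E_k[:, omega], full_matrices=False)
    D[:, k] = U[:, 0]                               # new atom: first left singular vector
    X[k, omega] = s[0] * Vt[0, :]                   # new coefficients on the same support
    return D, X

# One k-SVD sweep alternates sparse coding (e.g. OMP) with updating every atom:
# for k in range(D.shape[1]):
#     D, X = update_atom(Y, D, X, k)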
https://en.wikipedia.org/wiki/K-SVD
Incomputational complexity theory,bounded-error quantum polynomial time(BQP) is the class ofdecision problemssolvable by aquantum computerinpolynomial time, with an error probability of at most 1/3 for all instances.[1]It is the quantum analogue to thecomplexity classBPP. A decision problem is a member ofBQPif there exists aquantum algorithm(analgorithmthat runs on a quantum computer) that solves the decision problemwith high probabilityand is guaranteed to run in polynomial time. A run of the algorithm will correctly solve the decision problem with a probability of at least 2/3. BQPcan be viewed as thelanguagesassociated with certain bounded-error uniform families ofquantum circuits.[1]A languageLis inBQPif and only if there exists apolynomial-time uniformfamily of quantum circuits{Qn:n∈N}{\displaystyle \{Q_{n}\colon n\in \mathbb {N} \}}, such that Alternatively, one can defineBQPin terms ofquantum Turing machines. A languageLis inBQPif and only if there exists a polynomial quantum Turing machine that acceptsLwith an error probability of at most 1/3 for all instances.[2] Similarly to other "bounded error" probabilistic classes, the choice of 1/3 in the definition is arbitrary. We can run the algorithm a constant number of times and take a majority vote to achieve any desired probability of correctness less than 1, using theChernoff bound. The complexity class is unchanged by allowing error as high as 1/2 −n−con the one hand, or requiring error as small as 2−ncon the other hand, wherecis any positive constant, andnis the length of input.[3] BQP is defined for quantum computers; the corresponding complexity class for classical computers (or more formally forprobabilistic Turing machines) isBPP. Just likePandBPP,BQPislowfor itself, which meansBQPBQP= BQP.[2]Informally, this is true because polynomial time algorithms are closed under composition. If a polynomial time algorithm calls polynomial time algorithms as subroutines, the resulting algorithm is still polynomial time. BQPcontainsPandBPPand is contained inAWPP,[4]PP[5]andPSPACE.[2]In fact,BQPislowforPP, meaning that aPPmachine achieves no benefit from being able to solveBQPproblems instantly, an indication of the possible difference in power between these similar classes. The known relationships with classic complexity classes are: As the problem of⁠P=?PSPACE{\displaystyle {\mathsf {P}}\ {\stackrel {?}{=}}\ {\mathsf {PSPACE}}}⁠has not yet been solved, the proof of inequality betweenBQPand classes mentioned above is supposed to be difficult.[2]The relation betweenBQPandNPis not known. In May 2018, computer scientistsRan RazofPrinceton Universityand Avishay Tal ofStanford Universitypublished a paper[6]which showed that, relative to anoracle, BQP was not contained inPH. It can be proven that there exists an oracle A such thatBQPA⊈PHA{\displaystyle {\mathsf {BQP}}^{\mathrm {A} }\nsubseteq {\mathsf {PH}}^{\mathrm {A} }}.[7]In an extremely informal sense, this can be thought of as giving PH and BQP an identical, but additional, capability and verifying that BQP with the oracle (BQPA) can do things PHAcannot. While an oracle separation has been proven, the fact that BQP is not contained in PH has not been proven. An oracle separation does not prove whether or not complexity classes are the same. The oracle separation gives intuition that BQP may not be contained in PH. It has been suspected for many years that Fourier Sampling is a problem that exists within BQP, but not within the polynomial hierarchy. 
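The error-amplification argument above can be made concrete with a short calculation: repeating an algorithm that is correct with probability 2/3 and taking a majority vote drives the failure probability down rapidly (a direct binomial computation, standing in for the Chernoff bound):

from math import comb

def majority_success(p_single, runs):
    # Probability that a strict majority of independent runs is correct,
    # when each run is correct with probability p_single (runs odd).
    return sum(comb(runs, k) * p_single**k * (1 - p_single)**(runs - k)
               for k in range(runs // 2 + 1, runs + 1))

for runs in (1, 11, 51, 101):
    print(runs, round(majority_success(2 / 3, runs), 6))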
Recent conjectures have provided evidence that a similar problem, Fourier Checking, also exists in the class BQP without being contained in thepolynomial hierarchy. This conjecture is especially notable because it suggests that problems existing in BQP could be classified as harder thanNP-Completeproblems. Paired with the fact that many practical BQP problems are suspected to exist outside ofP(it is suspected and not verified because there is no proof thatP ≠ NP), this illustrates the potential power of quantum computing in relation to classical computing.[7] AddingpostselectiontoBQPresults in the complexity classPostBQPwhich is equal toPP.[8][9] Promise-BQP is the class ofpromise problemsthat can be solved by a uniform family of quantum circuits (i.e., within BQP).[10]Completeness proofs focus on this version of BQP. Similar to the notion ofNP-completenessand othercompleteproblems, we can define a complete problem as a problem that is in Promise-BQP and that every other problem in Promise-BQP reduces to it in polynomial time. The APPROX-QCIRCUIT-PROB problem is complete for efficient quantum computation, and the version presented below is complete for the Promise-BQP complexity class (and not for the total BQP complexity class, for which no complete problems are known). APPROX-QCIRCUIT-PROB's completeness makes it useful for proofs showing the relationships between other complexity classes and BQP. Given a description of a quantum circuitCacting onnqubits withmgates, wheremis a polynomial innand each gate acts on one or two qubits, and two numbersα,β∈[0,1],α>β{\displaystyle \alpha ,\beta \in [0,1],\alpha >\beta }, distinguish between the following two cases: Here, there is a promise on the inputs as the problem does not specify the behavior if an instance is not covered by these two cases. Claim.Any BQP problem reduces to APPROX-QCIRCUIT-PROB. Proof.Suppose we have an algorithmAthat solves APPROX-QCIRCUIT-PROB, i.e., given a quantum circuitCacting onnqubits, and two numbersα,β∈[0,1],α>β{\displaystyle \alpha ,\beta \in [0,1],\alpha >\beta },Adistinguishes between the above two cases. We can solve any problem in BQP with this oracle, by settingα=2/3,β=1/3{\displaystyle \alpha =2/3,\beta =1/3}. For anyL∈BQP{\displaystyle L\in {\mathsf {BQP}}}, there exists family of quantum circuits{Qn:n∈N}{\displaystyle \{Q_{n}\colon n\in \mathbb {N} \}}such that for alln∈N{\displaystyle n\in \mathbb {N} }, a state|x⟩{\displaystyle |x\rangle }ofn{\displaystyle n}qubits, ifx∈L,Pr(Qn(|x⟩)=1)≥2/3{\displaystyle x\in L,Pr(Q_{n}(|x\rangle )=1)\geq 2/3}; else ifx∉L,Pr(Qn(|x⟩)=0)≥2/3{\displaystyle x\notin L,Pr(Q_{n}(|x\rangle )=0)\geq 2/3}. Fix an input|x⟩{\displaystyle |x\rangle }ofnqubits, and the corresponding quantum circuitQn{\displaystyle Q_{n}}. We can first construct a circuitCx{\displaystyle C_{x}}such thatCx|0⟩⊗n=|x⟩{\displaystyle C_{x}|0\rangle ^{\otimes n}=|x\rangle }. This can be done easily by hardwiring|x⟩{\displaystyle |x\rangle }and apply a sequence ofCNOTgates to flip the qubits. Then we can combine two circuits to getC′=QnCx{\displaystyle C'=Q_{n}C_{x}}, and nowC′|0⟩⊗n=Qn|x⟩{\displaystyle C'|0\rangle ^{\otimes n}=Q_{n}|x\rangle }. And finally, necessarily the results ofQn{\displaystyle Q_{n}}is obtained by measuring several qubits and apply some (classical) logic gates to them. We can alwaysdefer the measurement[11][12]and reroute the circuits so that by measuring the first qubit ofC′|0⟩⊗n=Qn|x⟩{\displaystyle C'|0\rangle ^{\otimes n}=Q_{n}|x\rangle }, we get the output. 
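The two cases of the APPROX-QCIRCUIT-PROB promise, as the problem is commonly stated (a sketch consistent with the reduction just given, where the relevant probability is that of measuring 1 on the first output qubit of C applied to the all-zeros input):

\text{Case 1 (accept): } \Pr\big[\text{the first qubit of } C|0\rangle^{\otimes n} \text{ measures to } 1\big] \ge \alpha,
\qquad
\text{Case 2 (reject): } \Pr\big[\text{the first qubit of } C|0\rangle^{\otimes n} \text{ measures to } 1\big] \le \beta.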
This will be our circuitC, and we decide the membership ofx∈L{\displaystyle x\in L}by runningA(C){\displaystyle A(C)}withα=2/3,β=1/3{\displaystyle \alpha =2/3,\beta =1/3}. By definition of BQP, we will either fall into the first case (acceptance), or the second case (rejection), soL∈BQP{\displaystyle L\in {\mathsf {BQP}}}reduces to APPROX-QCIRCUIT-PROB. We begin with an easier containment. To show thatBQP⊆EXP{\displaystyle {\mathsf {BQP}}\subseteq {\mathsf {EXP}}}, it suffices to show that APPROX-QCIRCUIT-PROB is in EXP since APPROX-QCIRCUIT-PROB is BQP-complete. Claim—APPROX-QCIRCUIT-PROB∈EXP{\displaystyle {\text{APPROX-QCIRCUIT-PROB}}\in {\mathsf {EXP}}} The idea is simple. Since we have exponential power, given a quantum circuitC, we can use classical computer to stimulate each gate inCto get the final state. More formally, letCbe a polynomial sized quantum circuit onnqubits andmgates, where m is polynomial in n. Let|ψ0⟩=|0⟩⊗n{\displaystyle |\psi _{0}\rangle =|0\rangle ^{\otimes n}}and|ψi⟩{\displaystyle |\psi _{i}\rangle }be the state after thei-th gate in the circuit is applied to|ψi−1⟩{\displaystyle |\psi _{i-1}\rangle }. Each state|ψi⟩{\displaystyle |\psi _{i}\rangle }can be represented in a classical computer as a unit vector inC2n{\displaystyle \mathbb {C} ^{2^{n}}}. Furthermore, each gate can be represented by a matrix inC2n×2n{\displaystyle \mathbb {C} ^{2^{n}\times 2^{n}}}. Hence, the final state|ψm⟩{\displaystyle |\psi _{m}\rangle }can be computed inO(m⋅22n){\displaystyle O(m\cdot 2^{2n})}time, and therefore all together, we have an2O(n){\displaystyle 2^{O(n)}}time algorithm for calculating the final state, and thus the probability that the first qubit is measured to be one. This implies thatAPPROX-QCIRCUIT-PROB∈EXP{\displaystyle {\text{APPROX-QCIRCUIT-PROB}}\in {\mathsf {EXP}}}. Note that this algorithm also requires2O(n){\displaystyle 2^{O(n)}}space to store the vectors and the matrices. We will show in the following section that we can improve upon the space complexity. Sum of histories is a technique introduced by physicistRichard Feynmanforpath integral formulation. APPROX-QCIRCUIT-PROB can be formulated in the sum of histories technique to show thatBQP⊆PSPACE{\displaystyle {\mathsf {BQP}}\subseteq {\mathsf {PSPACE}}}.[13] Consider a quantum circuitC, which consists oftgates,g1,g2,⋯,gm{\displaystyle g_{1},g_{2},\cdots ,g_{m}}, where eachgj{\displaystyle g_{j}}comes from auniversal gate setand acts on at most two qubits. To understand what the sum of histories is, we visualize the evolution of a quantum state given a quantum circuit as a tree. The root is the input|0⟩⊗n{\displaystyle |0\rangle ^{\otimes n}}, and each node in the tree has2n{\displaystyle 2^{n}}children, each representing a state inCn{\displaystyle \mathbb {C} ^{n}}. The weight on a tree edge from a node inj-th level representing a state|x⟩{\displaystyle |x\rangle }to a node inj+1{\displaystyle j+1}-th level representing a state|y⟩{\displaystyle |y\rangle }is⟨y|gj+1|x⟩{\displaystyle \langle y|g_{j+1}|x\rangle }, the amplitude of|y⟩{\displaystyle |y\rangle }after applyinggj+1{\displaystyle g_{j+1}}on|x⟩{\displaystyle |x\rangle }. The transition amplitude of a root-to-leaf path is the product of all the weights on the edges along the path. To get the probability of the final state being|ψ⟩{\displaystyle |\psi \rangle }, we sum up the amplitudes of all root-to-leave paths that ends at a node representing|ψ⟩{\displaystyle |\psi \rangle }. 
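The exponential-time, exponential-space simulation described above amounts to repeated matrix-vector multiplication; a minimal NumPy sketch (gates are assumed to be supplied already expanded to the full 2^n-dimensional register):

import numpy as np

def simulate(n, gates):
    # State-vector simulation; returns Pr[the first qubit measures 1].
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0                                   # |0...0>
    for gate in gates:
        state = gate @ state                         # apply each gate in turn
    probs = np.abs(state) ** 2
    return probs[2 ** (n - 1):].sum()                # basis states whose first qubit is 1

# Hadamard on the first of two qubits: Pr[first qubit = 1] = 0.5
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
print(simulate(2, [np.kron(H, np.eye(2))]))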
More formally, for the quantum circuitC, its sum over histories tree is a tree of depthm, with one level for each gategi{\displaystyle g_{i}}in addition to the root, and with branching factor2n{\displaystyle 2^{n}}. Define—A history is a path in the sum of histories tree. We will denote a history by a sequence(u0=|0⟩⊗n→u1→⋯→um−1→um=x){\displaystyle (u_{0}=|0\rangle ^{\otimes n}\rightarrow u_{1}\rightarrow \cdots \rightarrow u_{m-1}\rightarrow u_{m}=x)}for some final statex. Define—Letu,v∈{0,1}n{\displaystyle u,v\in \{0,1\}^{n}}. Let amplitude of the edge(|u⟩,|v⟩){\displaystyle (|u\rangle ,|v\rangle )}in thej-th level of the sum over histories tree beαj(u→v)=⟨v|gj|u⟩{\displaystyle \alpha _{j}(u\rightarrow v)=\langle v|g_{j}|u\rangle }. For any historyh=(u0→u1→⋯→um−1→um){\displaystyle h=(u_{0}\rightarrow u_{1}\rightarrow \cdots \rightarrow u_{m-1}\rightarrow u_{m})}, the transition amplitude of the history is the productαh=α1(|0⟩⊗n→u1)α2(u1→u2)⋯αm(um−1→x){\displaystyle \alpha _{h}=\alpha _{1}(|0\rangle ^{\otimes n}\rightarrow u_{1})\alpha _{2}(u_{1}\rightarrow u_{2})\cdots \alpha _{m}(u_{m-1}\rightarrow x)}. Claim—For a history(u0→⋯→um){\displaystyle (u_{0}\rightarrow \cdots \rightarrow u_{m})}. The transition amplitude of the history is computable in polynomial time. Each gategj{\displaystyle g_{j}}can be decomposed intogj=I⊗g~j{\displaystyle g_{j}=I\otimes {\tilde {g}}_{j}}for some unitary operatorg~j{\displaystyle {\tilde {g}}_{j}}acting on two qubits, which without loss of generality can be taken to be the first two. Hence,⟨v|gj|u⟩=⟨v1,v2|g~j|u1,u2⟩⟨v3,⋯,vn|u3,⋯,un⟩{\displaystyle \langle v|g_{j}|u\rangle =\langle v_{1},v_{2}|{\tilde {g}}_{j}|u_{1},u_{2}\rangle \langle v_{3},\cdots ,v_{n}|u_{3},\cdots ,u_{n}\rangle }which can be computed in polynomial time inn. Sincemis polynomial inn, the transition amplitude of the history can be computed in polynomial time. Claim—LetC|0⟩⊗n=∑x∈{0,1}nαx|x⟩{\displaystyle C|0\rangle ^{\otimes n}=\sum _{x\in \{0,1\}^{n}}\alpha _{x}|x\rangle }be the final state of the quantum circuit. For somex∈{0,1}n{\displaystyle x\in \{0,1\}^{n}}, the amplitudeαx{\displaystyle \alpha _{x}}can be computed byαx=∑h=(|0⟩⊗n→u1→⋯→ut−1→|x⟩)αh{\displaystyle \alpha _{x}=\sum _{h=(|0\rangle ^{\otimes n}\rightarrow u_{1}\rightarrow \cdots \rightarrow u_{t-1}\rightarrow |x\rangle )}\alpha _{h}}. We haveαx=⟨x|C|0⟩⊗n=⟨x|gtgt−1⋯g1|C|0⟩⊗n{\displaystyle \alpha _{x}=\langle x|C|0\rangle ^{\otimes n}=\langle x|g_{t}g_{t-1}\cdots g_{1}|C|0\rangle ^{\otimes n}}. The result comes directly by insertingI=∑x∈{0,1}n|x⟩⟨x|{\displaystyle I=\sum _{x\in \{0,1\}^{n}}|x\rangle \langle x|}betweeng1,g2{\displaystyle g_{1},g_{2}}, andg2,g3{\displaystyle g_{2},g_{3}}, and so on, and then expand out the equation. Then each term corresponds to aαh{\displaystyle \alpha _{h}}, whereh=(|0⟩⊗n→u1→⋯→ut−1→|x⟩){\displaystyle h=(|0\rangle ^{\otimes n}\rightarrow u_{1}\rightarrow \cdots \rightarrow u_{t-1}\rightarrow |x\rangle )} Claim—APPROX-QCIRCUIT-PROB∈PSPACE{\displaystyle {\text{APPROX-QCIRCUIT-PROB}}\in {\mathsf {PSPACE}}} Notice in the sum over histories algorithm to compute some amplitudeαx{\displaystyle \alpha _{x}}, only one history is stored at any point in the computation. Hence, the sum over histories algorithm usesO(nm){\displaystyle O(nm)}space to computeαx{\displaystyle \alpha _{x}}for anyxsinceO(nm){\displaystyle O(nm)}bits are needed to store the histories in addition to some workspace variables. 
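A brute-force Python sketch of the sum over histories for a single amplitude; for simplicity each gate is given here as a full 2^n x 2^n matrix, whereas the PSPACE argument above computes each entry ⟨v|g_j|u⟩ on the fly from the two-qubit gate (names are illustrative):

import numpy as np
from itertools import product

def amplitude(gates, x, n):
    # <x| g_m ... g_1 |0...0> as a sum of path amplitudes; only one history is
    # examined at a time, at the cost of exponential running time.
    basis = list(product([0, 1], repeat=n))
    index = lambda bits: int("".join(map(str, bits)), 2)
    start = (0,) * n
    total = 0j
    for history in product(basis, repeat=len(gates) - 1):
        path = (start,) + history + (tuple(x),)
        amp = 1 + 0j
        for j, gate in enumerate(gates):
            amp *= gate[index(path[j + 1]), index(path[j])]   # <u_{j+1}| g_{j+1} |u_j>
        total += amp
    return total

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
print(amplitude([np.kron(H, np.eye(2))], (1, 0), 2))          # about 0.7071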
Therefore, in polynomial space, we may compute ∑_x |α_x|² over all x with the first qubit being 1, which is the probability that the first qubit is measured to be 1 by the end of the circuit. Notice that, compared with the simulation given for the proof that BQP ⊆ EXP, this algorithm uses far less space but far more time: it takes O(m·2^{mn}) time to calculate a single amplitude. A similar sum-over-histories argument can be used to show that BQP ⊆ PP.[14] We know P ⊆ BQP, since every classical circuit can be simulated by a quantum circuit.[15] It is conjectured that BQP solves hard problems outside of P, specifically problems in NP; the claim remains open because it is not known whether P = NP, and hence whether those problems actually lie in P. Some evidence for the conjecture includes:

https://en.wikipedia.org/wiki/BQP
Uncontrolled format stringis a type ofcode injectionvulnerabilitydiscovered around 1989 that can be used insecurity exploits.[1]Originally thought harmless, format string exploits can be used tocrasha program or to execute harmful code. The problem stems from the use ofunchecked user inputas theformat stringparameter in certainCfunctions that perform formatting, such asprintf(). A malicious user may use the%sand%xformat tokens, among others, to print data from thecall stackor possibly other locations in memory. One may also write arbitrary data to arbitrary locations using the%nformat token, which commandsprintf()and similar functions to write the number of bytes formatted to an address stored on the stack. A typical exploit uses a combination of these techniques to take control of theinstruction pointer(IP) of a process,[2]for example by forcing a program to overwrite the address of a library function or the return address on the stack with a pointer to some maliciousshellcode. The padding parameters to format specifiers are used to control the number of bytes output and the%xtoken is used to pop bytes from the stack until the beginning of the format string itself is reached. The start of the format string is crafted to contain the address that the%nformat token can then overwrite with the address of the malicious code to execute. This is a common vulnerability because format bugs were previously thought harmless and resulted in vulnerabilities in many common tools.MITRE'sCVEproject lists roughly 500 vulnerable programs as of June 2007, and a trend analysis ranks it the 9th most-reported vulnerability type between 2001 and 2006.[3] Format string bugs most commonly appear when a programmer wishes to output a string containing user supplied data (either to a file, to a buffer, or to the user). The programmer may mistakenly writeprintf(buffer)instead ofprintf("%s", buffer). The first version interpretsbufferas a format string, and parses any formatting instructions it may contain. The second version simply prints a string to the screen, as the programmer intended. Both versions behave identically in the absence of format specifiers in the string, which makes it easy for the mistake to go unnoticed by the developer. Format bugs arise because C's argument passing conventions are nottype-safe. In particular, thevarargsmechanism allowsfunctionsto accept any number of arguments (e.g.printf) by "popping" as manyargumentsoff thecall stackas they wish, trusting the early arguments to indicate how many additional arguments are to be popped, and of what types. Format string bugs can occur in other programming languages besides C, such as Perl, although they appear with less frequency and usually cannot be exploited to execute code of the attacker's choice.[4] Format bugs were first noted in 1989 by thefuzz testingwork done at the University of Wisconsin, which discovered an "interaction effect" in theC shell(csh) between itscommand historymechanism and an error routine that assumed safe string input.[5] The use of format string bugs as anattack vectorwas discovered in September 1999 byTymm Twillmanduring asecurity auditof theProFTPDdaemon.[6]The audit uncovered ansnprintfthat directly passed user-generated data without a format string. Extensive tests with contrived arguments to printf-style functions showed that use of this for privilege escalation was possible. 
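As the article notes, format string bugs can occur outside C as well; the following Python sketch (with invented identifiers SECRET_KEY, Event and log) illustrates the analogous mistake of letting untrusted input act as the format string for str.format, whose attribute and index references can expose data:

SECRET_KEY = "hunter2"          # stands in for data the attacker should not see

class Event:
    def __init__(self, name):
        self.name = name

def log(template, event):
    # Unsafe: the caller-supplied template is used directly as the format string.
    print(template.format(event))

log("processed {0.name}", Event("login"))                       # intended use
log("{0.__init__.__globals__[SECRET_KEY]}", Event("login"))     # prints hunter2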
This led to the first posting in September 1999 on theBugtraqmailing list regarding this class of vulnerabilities, including a basic exploit.[6]It was still several months, however, before the security community became aware of the full dangers of format string vulnerabilities as exploits for other software using this method began to surface. The first exploits that brought the issue to common awareness (by providing remote root access via code execution) were published simultaneously on theBugtraqlist in June 2000 byPrzemysław Frasunek[7]and a person using the nicknametf8.[8]They were shortly followed by an explanation, posted by a person using the nicknamelamagra.[9]"Format bugs" was posted to theBugtraqlist by Pascal Bouchareine in July 2000.[10]The seminal paper "Format String Attacks"[11]byTim Newshamwas published in September 2000 and other detailed technical explanation papers were published in September 2001 such asExploiting Format String Vulnerabilities, by teamTeso.[2] Many compilers can statically check format strings and produce warnings for dangerous or suspect formats. Inthe GNU Compiler Collection, the relevant compiler flags are,-Wall,-Wformat,-Wno-format-extra-args,-Wformat-security,-Wformat-nonliteral, and-Wformat=2.[12] Most of these are only useful for detecting bad format strings that are known at compile-time. If the format string may come from the user or from a source external to the application, the application must validate the format string before using it. Care must also be taken if the application generates or selects format strings on the fly. If the GNU C library is used, the-D_FORTIFY_SOURCE=2parameter can be used to detect certain types of attacks occurring at run-time. The-Wformat-nonliteralcheck is more stringent. Contrary to many other security issues, the root cause of format string vulnerabilities is relatively easy to detect in x86-compiled executables: Forprintf-family functions, proper use implies a separate argument for the format string and the arguments to be formatted. Faulty uses of such functions can be spotted by simply counting the number of arguments passed to the function; an "argument deficiency"[2]is then a strong indicator that the function was misused. Counting the number of arguments is often made easy on x86 due to a calling convention where the caller removes the arguments that were pushed onto the stack by adding to the stack pointer after the call, so a simple examination of the stack correction yields the number of arguments passed to theprintf-family function.'[2]
https://en.wikipedia.org/wiki/Uncontrolled_format_string
Communications security is the discipline of preventing unauthorized interceptors from accessing telecommunications[1] in an intelligible form, while still delivering content to the intended recipients. In the North Atlantic Treaty Organization culture, including United States Department of Defense culture, it is often referred to by the abbreviation COMSEC. The field includes cryptographic security, transmission security, emissions security and physical security of COMSEC equipment and associated keying material. COMSEC is used to protect both classified and unclassified traffic on military communications networks, including voice, video, and data. It is used for both analog and digital applications, and for both wired and wireless links. Voice over secure internet protocol (VOSIP) has become the de facto standard for securing voice communication, replacing the need for Secure Terminal Equipment (STE) in much of NATO, including the U.S.A. USCENTCOM moved entirely to VOSIP in 2008.[2] Types of COMSEC equipment: The Electronic Key Management System (EKMS) is a United States Department of Defense (DoD) key management, COMSEC material distribution, and logistics support system. The National Security Agency (NSA) established the EKMS program to supply electronic key to COMSEC devices in a secure and timely manner, and to provide COMSEC managers with an automated system capable of ordering, generation, production, distribution, storage, security accounting, and access control. The Army's platform in the four-tiered EKMS, AKMS, automates frequency management and COMSEC management operations. It eliminates paper keying material and hardcopy Signal operating instructions (SOI), and saves the time and resources required for courier distribution. It has four components: KMI is intended to replace the legacy Electronic Key Management System to provide a means for securely ordering, generating, producing, distributing, managing, and auditing cryptographic products (e.g., asymmetric keys, symmetric keys, manual cryptographic systems, and cryptographic applications).[4] This system is currently being fielded by Major Commands and variants will be required for non-DoD Agencies with a COMSEC Mission.[5]
https://en.wikipedia.org/wiki/Communications_security
Incontract bridge,card reading(orcounting the hand) is the process of inferring which remaining cards are held by each opponent. The reading is based on information gained in the bidding and the play to previous tricks.[1]The technique is used by the declarer and defenders primarily to determine the probablesuitdistribution andhonorcard holdings of each unseenhand; determination of the location of specific spot-cards may be critical as well. Card reading is based on the fact that there are thirteen cards in each of four suits and thirteen cards in each of four hands. There are some basic tips: The advanced tips include: As a declarer, an efficient way of counting thetrump cardsis: instead of counting the number of trump rounds and cards trumped in,count the number of trumps in the opponents' hands. Once the dummy hand appears, calculate the number of trumps which the opponents have, then reduce this number mentally as they are played from the opponents' hands. This means keeping track of one small number, and your own trumps do not enter the calculation. An even better way of counting trumps is to get familiar with common distribution patterns. For example, 5-3 and 4-4 are among the most common trump distributions on the declarer and dummy's hands. In cases, if an opponent shows out on the second trump round, then 5-3-1 or 4-4-1 is known, and the pattern 5-3-4-1 or 4-4-4-1 comes up automatically, and the other defender is known to have begun with four.
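The trump-counting tip above amounts to simple bookkeeping; a toy sketch (the function name is illustrative):

def opponents_trumps_remaining(declarer_trumps, dummy_trumps, played_by_opponents):
    # Opponents start with 13 minus the trumps visible in declarer's hand and
    # dummy; subtract each trump an opponent plays to keep the running count.
    return 13 - declarer_trumps - dummy_trumps - played_by_opponents

# Declarer holds 5 trumps and dummy 3, so the opponents began with 5;
# after they have played 2, three trumps are still outstanding.
print(opponents_trumps_remaining(5, 3, played_by_opponents=2))   # 3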
https://en.wikipedia.org/wiki/Card_reading_(bridge)
Network Security Toolkit (NST) is a Linux-based Live DVD/USB Flash Drive that provides a set of free and open-source computer security and networking tools to perform routine security and networking diagnostic and monitoring tasks. The distribution can be used as a network security analysis, validation and monitoring tool on servers hosting virtual machines. The majority of tools published in the article "Top 125 security tools" by Insecure.org are available in the toolkit. NST has package management capabilities similar to Fedora and maintains its own repository of additional packages. Many tasks that can be performed within NST are available through a web interface called NST WUI.[1] Among the tools that can be used through this interface are nmap with the visualization tool ZenMap, ntop, a Network Interface Bandwidth Monitor, a Network Segment ARP Scanner, a session manager for VNC, a minicom-based terminal server, serial port monitoring, and WPA PSK management. Other features include visualization of ntopng, ntop, wireshark, traceroute, NetFlow and kismet data by geolocating the host addresses, IPv4 address conversations, traceroute data and wireless access points, and displaying them via Google Earth or a Mercator World Map bit image; a browser-based packet capture and protocol analysis system capable of monitoring up to four network interfaces using Wireshark; as well as a Snort-based intrusion detection system with a "collector" backend that stores incidents in a MySQL database.[2] For web developers, there is also a JavaScript console with a built-in object library with functions that aid the development of dynamic web pages. The following example ntop host geolocation images were generated by NST. The following image depicts the interactive dynamic SVG/AJAX enabled Network Interface Bandwidth Monitor which is integrated into the NST WUI. Also shown is a Ruler Measurement tool overlay to perform time and bandwidth rate analysis.
https://en.wikipedia.org/wiki/Network_Security_Toolkit
Thefunction pointis a "unit of measurement" to express the amount of business functionality aninformation system(as a product) provides to a user. Function points are used to compute a functional size measurement (FSM) of software. The cost (in dollars or hours) of a single unit is calculated from past projects.[1] There are several recognized standards and/or public specifications for sizing software based on Function Point. 1. ISO Standards The first five standards are implementations of the over-arching standard forFunctional Size MeasurementISO/IEC 14143.[2]The OMG Automated Function Point (AFP) specification, led by theConsortium for IT Software Quality, provides a standard for automating the Function Point counting according to the guidelines of the International Function Point User Group (IFPUG) However, the current implementations of this standard have a limitation in being able to distinguish External Output (EO) from External Inquiries (EQ) out of the box, without some upfront configuration.[3] Function points were defined in 1979 inMeasuring Application Development Productivityby Allan J. Albrecht atIBM.[4]Thefunctional user requirementsof the software are identified and each one is categorized into one of five types: outputs, inquiries, inputs, internal files, and external interfaces. Once the function is identified and categorized into a type, it is then assessed for complexity and assigned a number of function points. Each of these functional user requirements maps to an end-user business function, such as a data entry for an Input or a user query for an Inquiry. This distinction is important because it tends to make the functions measured in function points map easily into user-oriented requirements, but it also tends to hide internal functions (e.g. algorithms), which also require resources to implement. There is currently no ISO recognized FSM Method that includes algorithmic complexity in the sizing result. Recently there have been different approaches proposed to deal with this perceived weakness, implemented in several commercial software products. The variations of the Albrecht-based IFPUG method designed to make up for this (and other weaknesses) include: The use of function points in favor of lines of code seek to address several additional issues: Albrecht observed in his research that Function Points were highly correlated to lines of code,[9]which has resulted in a questioning of the value of such a measure if a more objective measure, namely counting lines of code, is available. In addition, there have been multiple attempts to address perceived shortcomings with the measure by augmenting the counting regimen.[10][11][12][13][14][15]Others have offered solutions to circumvent the challenges by developing alternative methods which create a proxy for the amount of functionality delivered.[16]
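A sketch of how an unadjusted function point total is assembled from the five function types described above; the weights below are the complexity weights commonly cited for the IFPUG method and are shown for illustration only, not as a substitute for the applicable counting manual:

# Commonly cited IFPUG complexity weights (illustrative values).
WEIGHTS = {
    "EI":  {"low": 3, "average": 4,  "high": 6},    # external inputs
    "EO":  {"low": 4, "average": 5,  "high": 7},    # external outputs
    "EQ":  {"low": 3, "average": 4,  "high": 6},    # external inquiries
    "ILF": {"low": 7, "average": 10, "high": 15},   # internal logical files
    "EIF": {"low": 5, "average": 7,  "high": 10},   # external interface files
}

def unadjusted_function_points(counts):
    # counts maps (type, complexity) to the number of functions of that kind.
    return sum(n * WEIGHTS[ftype][cplx] for (ftype, cplx), n in counts.items())

example = {("EI", "average"): 10, ("EO", "low"): 4, ("EQ", "high"): 2,
           ("ILF", "average"): 3, ("EIF", "low"): 1}
print(unadjusted_function_points(example))   # 40 + 16 + 12 + 30 + 5 = 103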
https://en.wikipedia.org/wiki/Function_points
Incomputer security, athreatis a potential negative action or event enabled by avulnerabilitythat results in an unwanted impact to a computer system or application. A threat can be either a negative "intentional" event (i.e. hacking: an individual cracker or a criminal organization) or an "accidental" negative event (e.g. the possibility of a computer malfunctioning, or the possibility of anatural disasterevent such as anearthquake, afire, or atornado) or otherwise a circumstance, capability, action, or event (incidentis often used as a blanket term).[1]Athreat actorwho is an individual or group that can perform the threat action, such as exploiting a vulnerability to actualise a negative impact. Anexploitis a vulnerability that a threat actor used to cause an incident. A more comprehensive definition, tied to anInformation assurancepoint of view, can be found in "Federal Information Processing Standards (FIPS) 200, Minimum Security Requirements for Federal Information and Information Systems" byNISTofUnited States of America[2] National Information Assurance Glossarydefinesthreatas: ENISAgives a similar definition:[3] The Open Groupdefinesthreatas:[4] Factor analysis of information riskdefinesthreatas:[5] National Information Assurance Training and Education Centergives a more articulated definition ofthreat:[6][7] The term "threat" relates to some other basic security terms as shown in the following diagram:[1]A resource (both physical or logical) can have one or more vulnerabilities that can be exploited by a threat agent in a threat action. The result can potentially compromise theconfidentiality,integrityoravailabilityproperties of resources (potentially different than the vulnerable one) of the organization and others involved parties (customers, suppliers).The so-calledCIA triadis the basis ofinformation security. Theattackcan beactivewhen it attempts to alter system resources or affect their operation: so it compromises Integrity or Availability. A "passive attack" attempts to learn or make use of information from the system but does not affect system resources: so it compromises Confidentiality.[1] OWASP(see figure) depicts the same phenomenon in slightly different terms: a threat agent through an attack vector exploits a weakness (vulnerability) of the system and the relatedsecurity controlscausing a technical impact on an IT resource (asset) connected to a business impact. A set of policies concerned with information security management, theInformation security management systems(ISMS), has been developed to manage, according torisk managementprinciples, thecountermeasuresin order to accomplish to a security strategy set up following rules and regulations applicable in a country. Countermeasures are also called security controls; when applied to the transmission of information are namedsecurity services.[8] The overall picture represents therisk factorsof the risk scenario.[9] The widespread of computer dependencies and the consequent raising of the consequence of a successful attack, led to a new termcyberwarfare. 
Nowadays the many real attacks exploitPsychologyat least as much as technology.PhishingandPretextingand other methods are calledsocial engineeringtechniques.[10]TheWeb 2.0applications, specificallySocial network services, can be a mean to get in touch with people in charge of system administration or even system security, inducing them to reveal sensitive information.[11]One famous case isRobin Sage.[12] The most widespread documentation oncomputer insecurityis about technical threats such as acomputer virus,trojanand othermalware, but a serious study to apply cost effective countermeasures can only be conducted following a rigorousIT risk analysisin the framework of an ISMS: a pure technical approach will let out the psychological attacks that are increasing threats. Threats can be classified according to their type and origin:[13] Note that a threat type can have multiple origins. Recent trends in computer threats show an increase in ransomware attacks, supply chain attacks, and fileless malware. Ransomware attacks involve the encryption of a victim's files and a demand for payment to restore access. Supply chain attacks target the weakest links in a supply chain to gain access to high-value targets. Fileless malware attacks use techniques that allow malware to run in memory, making it difficult to detect.[14] Below are the few common emerging threats: Microsoftpublished a mnemonic,STRIDE,[15]from the initials of threat groups: Microsoft previously rated the risk of security threats using five categories in a classification calledDREAD: Risk assessment model. The model is considered obsolete by Microsoft. The categories were: The DREAD name comes from the initials of the five categories listed. The spread over a network of threats can lead to dangerous situations. In military and civil fields, threat level has been defined: for exampleINFOCONis a threat level used by the US. Leadingantivirus softwarevendors publish global threat level on their websites.[16][17] The termThreat Agentis used to indicate an individual or group that can manifest a threat. It is fundamental to identify who would want to exploit the assets of a company, and how they might use them against the company.[18] Individuals within a threat population; Practically anyone and anything can, under the right circumstances, be a threat agent – the well-intentioned, but inept, computer operator who trashes a daily batch job by typing the wrong command, the regulator performing an audit, or the squirrel that chews through a data cable.[5] Threat agents can take one or more of the following actions against an asset:[5] Each of these actions affects different assets differently, which drives the degree and nature of loss. For example, the potential for productivity loss resulting from a destroyed or stolen asset depends upon how critical that asset is to the organization's productivity. If a critical asset is simply illicitly accessed, there is no direct productivity loss. Similarly, the destruction of a highly sensitive asset that does not play a critical role in productivity would not directly result in a significant productivity loss. Yet that same asset, if disclosed, can result in significant loss of competitive advantage or reputation, and generate legal costs. The point is that it is the combination of the asset and type of action against the asset that determines the fundamental nature and degree of loss. 
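A toy sketch of how the DREAD categories listed above were combined into a single rating, commonly as an average of five scores on a 0-10 scale; since, as noted above, Microsoft considers the model obsolete, this is purely illustrative:

def dread_score(damage, reproducibility, exploitability, affected_users, discoverability):
    # Average of the five category scores, each rated 0-10.
    return (damage + reproducibility + exploitability
            + affected_users + discoverability) / 5

print(dread_score(damage=8, reproducibility=10, exploitability=7,
                  affected_users=9, discoverability=6))   # 8.0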
Which action(s) a threat agent takes will be driven primarily by that agent's motive (e.g., financial gain, revenge, recreation, etc.) and the nature of the asset. For example, a threat agent bent on financial gain is less likely to destroy a critical server than they are to steal an easilypawnedasset like a laptop.[5] It is important to separate the concept of the event that a threat agent get in contact with the asset (even virtually, i.e. through the network) and the event that a threat agent act against the asset.[5] OWASP collects a list of potential threat agents to prevent system designers, and programmers insert vulnerabilities in the software.[18] Threat Agent = Capabilities + Intentions + Past Activities These individuals and groups can be classified as follows:[18] Threat sources are those who wish a compromise to occur. It is a term used to distinguish them from threat agents/actors who are those who carry out the attack and who may be commissioned or persuaded by the threat source to knowingly or unknowingly carry out the attack.[19] Threat actionis an assault on system security.A completesecurity architecturedeals with both intentional acts (i.e. attacks) and accidental events.[20] Various kinds of threat actions are defined as subentries under "threat consequence". Threat analysisis the analysis of the probability of occurrences and consequences of damaging actions to a system.[1]It is the basis ofrisk analysis. Threat modelingis a process that helps organizations identify and prioritize potential threats to their systems. It involves analyzing the system's architecture, identifying potential threats, and prioritizing them based on their impact and likelihood. By using threat modeling, organizations can develop a proactive approach to security and prioritize their resources to address the most significant risks.[21] Threat intelligenceis the practice of collecting and analyzing information about potential and current threats to an organization. This information can include indicators of compromise, attack techniques, and threat actor profiles. By using threat intelligence, organizations can develop a better understanding of the threat landscape and improve their ability to detect and respond to threats.[22] Threat consequenceis a security violation that results from a threat action.[1]Includes disclosure, deception, disruption, and usurpation. The following subentries describe four kinds of threat consequences, and also list and describe the kinds of threat actions that cause each consequence.[1]Threat actions that are accidental events are marked by "*". A collection of threats in a particular domain or context, with information on identified vulnerable assets, threats, risks, threat actors and observed trends.[23][24] Threats should be managed by operating an ISMS, performing all theIT risk managementactivities foreseen by laws, standards and methodologies. Very large organizations tend to adoptbusiness continuity managementplans in order to protect, maintain and recover business-critical processes and systems. Some of these plans are implemented bycomputer security incident response team(CSIRT). Threat management must identify, evaluate, and categorize threats. There are two primary methods ofthreat assessment: Many organizations perform only a subset of these methods, adopting countermeasures based on a non-systematic approach, resulting incomputer insecurity. Informationsecurity awarenessis a significant market. 
A great deal of software has been developed to deal with IT threats, including both open-source software and proprietary software.[25] Threat management involves a wide variety of threats, including physical threats such as flood and fire. While the ISMS risk assessment process does incorporate threat management for cyber threats such as remote buffer overflows, it does not include processes such as threat intelligence management or response procedures. Cyber threat management (CTM) is emerging as the best practice for managing cyber threats beyond the basic risk assessment found in ISMS. It enables early identification of threats, data-driven situational awareness, accurate decision-making, and timely threat-mitigating actions.[26] CTM includes: Cyber threat hunting is "the process of proactively and iteratively searching through networks to detect and isolate advanced threats that evade existing security solutions."[27] This is in contrast to traditional threat management measures, such as firewalls, intrusion detection systems, and SIEMs, which typically involve an investigation after there has been a warning of a potential threat, or an incident has occurred. Threat hunting can be a manual process, in which a security analyst sifts through various data sources using their knowledge of and familiarity with the network to create hypotheses about potential threats. To be even more effective and efficient, however, threat hunting can be partially automated, or machine-assisted, as well. In this case, the analyst utilizes software that harnesses machine learning and user and entity behaviour analytics (UEBA) to inform the analyst of potential risks. The analyst then investigates these potential risks, tracking suspicious behaviour in the network. Thus, hunting is an iterative process, meaning that it must be continuously carried out in a loop, beginning with a hypothesis. There are three types of hypotheses: The analyst researches their hypothesis by going through vast amounts of data about the network. The results are then stored so that they can be used to improve the automated portion of the detection system and to serve as a foundation for future hypotheses. The SANS Institute has conducted research and surveys on the effectiveness of threat hunting to track and disrupt cyber adversaries as early in their process as possible. According to a survey performed in 2019, "61% [of the respondents] report at least an 11% measurable improvement in their overall security posture" and 23.6% of the respondents have experienced a "significant improvement" in reducing the dwell time.[29] To protect against computer threats, it is essential to keep software up to date, use strong and unique passwords, and be cautious when clicking on links or downloading attachments. Additionally, using antivirus software and regularly backing up data can help mitigate the impact of a threat.
https://en.wikipedia.org/wiki/Threat_(computer)
In quantum information theory, quantum mutual information, or von Neumann mutual information, after John von Neumann, is a measure of correlation between subsystems of a quantum state. It is the quantum mechanical analog of Shannon mutual information. For simplicity, it will be assumed that all objects in the article are finite-dimensional. The definition of quantum mutual entropy is motivated by the classical case. For a probability distribution of two variables p(x, y), the two marginal distributions are

p(x) = \sum_{y} p(x, y), \qquad p(y) = \sum_{x} p(x, y).

The classical mutual information I(X:Y) is defined by

I(X:Y) = S(p(x)) + S(p(y)) - S(p(x, y)),

where S(q) denotes the Shannon entropy of the probability distribution q. One can calculate directly, using the definition of the marginals, that

S(p(x)) + S(p(y)) = -\sum_{x,y} p(x, y) \log \big( p(x)\, p(y) \big).

So the mutual information is

I(X:Y) = \sum_{x,y} p(x, y) \log \frac{p(x, y)}{p(x)\, p(y)},

where the logarithm is taken in base 2 to obtain the mutual information in bits. But this is precisely the relative entropy between p(x, y) and p(x)p(y). In other words, if we assume the two variables x and y to be uncorrelated, the mutual information is the discrepancy in uncertainty resulting from this (possibly erroneous) assumption. It follows from the properties of relative entropy that I(X:Y) ≥ 0, with equality if and only if p(x, y) = p(x)p(y). The quantum mechanical counterparts of classical probability distributions are density matrices. Consider a quantum system that can be divided into two parts, A and B, such that independent measurements can be made on either part. The state space of the entire quantum system is then the tensor product of the spaces for the two parts. Let ρ^{AB} be a density matrix acting on states in H_{AB}. The von Neumann entropy of a density matrix,

S(ρ) = -\operatorname{Tr}(ρ \log ρ),

is the quantum mechanical analog of the Shannon entropy. For a probability distribution p(x, y), the marginal distributions are obtained by integrating away the variables x or y. The corresponding operation for density matrices is the partial trace. So one can assign to ρ^{AB} a state on the subsystem A by

ρ^{A} = \operatorname{Tr}_{B}\, ρ^{AB},

where Tr_B is the partial trace with respect to system B. This is the reduced state of ρ^{AB} on system A. The reduced von Neumann entropy of ρ^{AB} with respect to system A is S(ρ^{A}); S(ρ^{B}) is defined in the same way. It can now be seen that the definition of quantum mutual information, corresponding to the classical definition, should be

I(A:B) = S(ρ^{A}) + S(ρ^{B}) - S(ρ^{AB}).

Quantum mutual information can be interpreted in the same way as in the classical case: it can be shown that

I(A:B) = S(ρ^{AB} \,\|\, ρ^{A} \otimes ρ^{B}),

where S(⋅‖⋅) denotes quantum relative entropy. Note that there is an alternative generalization of mutual information to the quantum case. The difference between the two for a given state is called quantum discord, a measure of the quantum correlations of the state in question. When the state ρ^{AB} is pure (and thus S(ρ^{AB}) = 0), the mutual information is twice the entanglement entropy of the state:

I(A:B) = 2\, S(ρ^{A}).

A positive quantum mutual information is not necessarily indicative of entanglement, however. A classical mixture of separable states will always have zero entanglement, but can have nonzero QMI; an example is the two-qubit mixture

ρ^{AB} = \tfrac{1}{2}\big( |00\rangle\langle 00| + |11\rangle\langle 11| \big).

In this case, the state is merely a classically correlated state.
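The last point, that a separable classically correlated mixture can still carry nonzero quantum mutual information, is easy to check numerically. The following is a minimal sketch (the helper functions and state names are this example's own, not part of the article) that evaluates I(A:B) = S(ρ^A) + S(ρ^B) − S(ρ^AB) for two two-qubit states using NumPy.

# Illustrative sketch: quantum mutual information of two-qubit states with NumPy.
import numpy as np

def von_neumann_entropy(rho):
    """Entropy in bits, S(rho) = -Tr(rho log2 rho), computed from the eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # discard zero eigenvalues (0 log 0 = 0)
    return float(-np.sum(evals * np.log2(evals)))

def partial_trace(rho_ab, keep):
    """Reduce a two-qubit density matrix to subsystem 'A' or 'B' by partial trace."""
    r = rho_ab.reshape(2, 2, 2, 2)        # index order: a, b, a', b'
    return np.trace(r, axis1=1, axis2=3) if keep == "A" else np.trace(r, axis1=0, axis2=2)

def mutual_information(rho_ab):
    return (von_neumann_entropy(partial_trace(rho_ab, "A"))
            + von_neumann_entropy(partial_trace(rho_ab, "B"))
            - von_neumann_entropy(rho_ab))

ket00 = np.array([1.0, 0.0, 0.0, 0.0])
ket11 = np.array([0.0, 0.0, 0.0, 1.0])

# Classically correlated mixture 1/2(|00><00| + |11><11|): separable, yet I(A:B) = 1 bit.
rho_classical = 0.5 * (np.outer(ket00, ket00) + np.outer(ket11, ket11))

# Pure Bell state (|00> + |11>)/sqrt(2): I(A:B) = 2 bits, twice the entanglement entropy.
bell = (ket00 + ket11) / np.sqrt(2)
rho_bell = np.outer(bell, bell)

print(mutual_information(rho_classical))  # ~1.0
print(mutual_information(rho_bell))       # ~2.0

For the classically correlated mixture the result is 1 bit despite zero entanglement, while for the Bell state it is 2 bits, twice the entanglement entropy, matching the relations quoted above.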
https://en.wikipedia.org/wiki/Quantum_mutual_information
TheIntel 8008("eight-thousand-eight" or "eighty-oh-eight") is an early8-bitmicroprocessor capable of addressing 16 KB of memory, introduced in April 1972. The 8008 architecture was designed byComputer Terminal Corporation(CTC) and was implemented and manufactured byIntel. While the 8008 was originally designed for use in CTC'sDatapoint 2200programmable terminal, an agreement between CTC and Intel permitted Intel to market the chip to other customers afterSeikoexpressed an interest in using it for acalculator. In order to address several issues with theDatapoint 3300, including excessive heat radiation,Computer Terminal Corporation(CTC) designed the architecture of the 3300's planned successor with a CPU as part of the internal circuitry re-implemented on a single chip. Looking for a company able to produce their chip design, CTC co-founder Austin O. "Gus" Roche turned to Intel, then primarily a vendor of memory chips.[3]Roche met withBob Noyce, who expressed concern with the concept;John Frassanitorecalls that: "Noyce said it was an intriguing idea, and that Intel could do it, but it would be a dumb move. He said that if you have a computer chip, you can only sell one chip per computer, while with memory, you can sell hundreds of chips per computer."[3] Another major concern was that Intel's existing customer base purchased their memory chips for use with their own processor designs; if Intel introduced their own processor, they might be seen as a competitor, and their customers might look elsewhere for memory. Nevertheless, Noyce agreed to a US$50,000 development contract in early 1970 (equivalent to $405,000 in 2024).Texas Instruments(TI) was also brought in as a second supplier.[citation needed] In December 1969, Intel engineerStan Mazorand a representative of CTC met to discuss options for the logic chipset to power a new CTC business terminal. Mazor, who had been working withTed Hoffon the development of theIntel 4004, proposed that a one-chip programmable microprocessor might be less cumbersome and ultimately more cost effective than building a custom logic chipset. CTC agreed and development work began on the chip, which at the time was known as the 1201.[4] TI was able to make samples of the 1201 based on Intel drawings, calling it the TMX 1795. These proved to be buggy and were rejected.[5]Intel's own versions were delayed. CTC decided to re-implement the new version of the terminal usingserialdiscreteTTLinstead of waiting for a single-chip CPU. The new system was released as theDatapoint 2200in the spring of 1970, with their first sale toGeneral Millson 25 May 1970.[3]CTC paused development of the 1201 after the 2200 was released, as it was no longer needed. Later in early 1971, Seiko approached Intel, expressing an interest in using the 1201 in a scientific calculator, likely after seeing the success of the simpler 4004 used by Busicom in their business calculators.[4]A small re-design followed, under the leadership ofFederico Faggin, the designer of the4004, now project leader of the 1201, expanding from a 16-pin to 18-pin design, and the new 1201 was delivered to CTC in late 1971.[3] By that point, CTC had once again moved on, this time to the parallel-architectureDatapoint 2200 II, which was faster than the 1201. CTC voted to end their involvement with the 1201, leaving the design's intellectual property to Intel instead of paying the $50,000 contract. Intel renamed it the 8008 and put it in their catalog in April 1972 priced at US$120 (equivalent to $902 in 2024). 
This renaming tried to ride off the success of the 4004 chip, by presenting the 8008 as simply a 4 to 8 port, but the 8008 isnotbased on the4004.[6]The 8008 went on to be a commercially successful design. This was followed by the popularIntel 8080, and then the hugely successfulIntel x86family.[3] One of the first teams to build a complete system around the 8008 was Bill Pentz's team atCalifornia State University, Sacramento. TheSac State 8008was possibly the first true microcomputer, with a disk operating system built withIBM Basic assembly languagein PROM,[disputed–discuss]all driving a color display, hard drive, keyboard, modem, audio/paper tape reader, and printer.[7]The project started in the spring of 1972, and with key help fromTektronix, the system was fully functional a year later. In the UK, a team at S. E. Laboratories Engineering (EMI) led by Tom Spink in 1972 built a microcomputer based on a pre-release sample of the 8008. Joe Hardman extended the chip with an external stack. This, among other things, gave it power-fail save and recovery. Joe also developed a direct screen printer. The operating system was written using a meta-assembler developed by L. Crawford and J. Parnell for aDigital Equipment CorporationPDP-11.[8]The operating system was burnt into a PROM. It was interrupt-driven, queued, and based on a fixed page size for programs and data. An operational prototype was prepared for management, who decided not to continue with the project.[citation needed] The 8008 was the CPU for the very first commercial non-calculatorpersonal computers(excluding the Datapoint 2200 itself): the USSCELBIkit and the pre-built FrenchMicral Nand CanadianMCM/70. It was also the controlling microprocessor for the first several models in Hewlett-Packard's2640family of computer terminals.[citation needed] In 1973, Intel offered aninstruction set simulatorfor the 8008 named INTERP/8.[9]It was written inFORTRAN IVbyGary Kildallwhile he worked as a consultant for Intel.[10][11] The 8008 was implemented in 10μmsilicon-gate enhancement-modePMOS logic. Initial versions could work at clock frequencies up to 0.5 MHz. This was later increased in the 8008-1 to a specified maximum of 0.8 MHz. Instructions take between 3 and 11 T-states, where each T-state is 2 clock cycles.[13]Register–register loads and ALU operations take 5T (20 μs at 0.5 MHz), register–memory 8T (32 μs), while calls and jumps (when taken) take 11 T-states (44 μs).[14]The 8008 is a little slower in terms ofinstructions per second(36,000 to 80,000 at 0.8 MHz) than the 4-bitIntel 4004andIntel 4040.[15]but since the 8008 processes data 8 bits at a time and can access significantly more RAM, in most applications it has a significant speed advantage over these processors. The 8008 has 3,500transistors.[16][17][18] The chip, limited by its 18-pinDIP, has a single 8-bit bus working triple duty to transfer 8 data bits, 14 address bits, and two status bits. The small package requires about 30 TTL support chips to interface to memory.[19]For example, the 14-bit address, which can access "16 K × 8 bits of memory", needs to be latched by some of this logic into an external memory address register (MAR). The 8008 can access 8input portsand 24 output ports.[13] For controller andCRT terminaluse, this is an acceptable design, but it is rather cumbersome to use for most other tasks, at least compared to the next generations of microprocessors. 
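The timing figures quoted earlier follow from simple arithmetic: each T-state is two clock cycles, so a 5 T-state instruction at 0.5 MHz takes 5 × 2 / 0.5 MHz = 20 μs. A small illustrative check (the clock rates and T-state counts are the ones stated above; nothing else is assumed):

# Illustrative check of the 8008 timing figures quoted above.
def instruction_time_us(t_states, clock_hz):
    """Execution time in microseconds; one T-state is two clock cycles."""
    return t_states * 2 / clock_hz * 1e6

# At 0.5 MHz: register-register/ALU ops (5T), register-memory (8T), taken jumps/calls (11T).
for t in (5, 8, 11):
    print(t, "T-states:", instruction_time_us(t, 0.5e6), "us")   # 20.0, 32.0, 44.0

# At 0.8 MHz, a stream of 5T instructions runs at ~80,000 instructions per second and a
# stream of 11T instructions at ~36,000, bracketing the 36,000-80,000 range quoted above.
print(round(1e6 / instruction_time_us(11, 0.8e6)), "-",
      round(1e6 / instruction_time_us(5, 0.8e6)), "instructions per second")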
A few early computer designs were based on it, but most would use the later and greatly improvedIntel 8080instead.[citation needed] The subsequent 40-pinNMOSIntel 8080expanded upon the 8008 registers and instruction set and implements a more efficient external bus interface (using the 22 additional pins). Despite a close architectural relationship, the 8080 was not made binary compatible with the 8008, so an 8008 program would not run on an 8080. However, as two different assembly syntaxes were used by Intel at the time, the 8080 could be used in an 8008 assembly-language backward-compatible fashion. TheIntel 8085is an electrically modernized version of the 8080 that usesdepletion-modetransistors and also added two new instructions. TheIntel 8086, the original x86 processor, is a non-strict extension of the 8080, so it loosely resembles the original Datapoint 2200 design as well. Almost every Datapoint 2200 and 8008 instruction has an equivalent not only in the instruction set of the 8080, 8085, andZ80, but also in the instruction set of modernx86processors (although the instruction encodings are different). The 8008 architecture includes the following features:[citation needed] Instructions are all one to three bytes long, consisting of an initial opcode byte, followed by up to two bytes of operands which can be an immediate operand or a program address. Instructions operate on 8-bits only; there are no 16-bit operations. There is only one mechanism to address data memory: indirect addressing pointed to by a concatenation of the H and L registers, referenced as M. The 8008 does, however, support 14-bit program addresses. It has automatic CAL and RET instructions for multi-level subroutine calls and returns which can be conditionally executed, like jumps. Eight one-byte call instructions (RST) for subroutines exist at the fixed addresses 00h, 08h, 10h, ..., 38h. These are intended to be supplied by external hardware in order to invoke interrupt service routines, but can employed as fast calls. Direct copying may be made between any two registers or a register and memory. Eight math/logic functions are supported between the accumulator (A) and any register, memory, or an immediate value. Results are always deposited in A. Increments and decrements are supported for most registers but, curiously, not A. Register A does, however, support four different rotate instructions. All instructions are executed in 3 to 11 states. Each state requires two clocks. The following 8008assemblysource code is for a subroutine namedMEMCPYthat copies a block of data bytes of a given size from one location to another. Intel's 8008 assembler supported only + and - operators. This example borrows the 8080's assembler AND and SHR (shift right) operators to select the low and high bytes of a 14-bit address for placement into the 8 bit registers. A contemporaneous 8008 programmer was expected to calculate the numbers and type them in for the assembler. In the code above, all values are given in octal. LocationsSRC,DST, andCNTare 16-bit parameters for the subroutine namedMEMCPY. In actuality, only 14 bits of the values are used, since the CPU has only a 14-bit addressable memory space. The values are stored inlittle-endianformat, although this is an arbitrary choice, since the CPU is incapable of reading or writing more than a single byte into memory at a time. 
Since there is no instruction to load a register directly from a given memory address, the HL register pair must first be loaded with the address, and the target register can then be loaded from the M operand, which is an indirect load from the memory location in the HL register pair. The BC register pair is loaded with theCNTparameter value and decremented at the end of the loop until it becomes zero. Note that most of the instructions used occupy a single 8-bit opcode. The following 8008 assembly source code is for a simplified subroutine named MEMCPY2 that copies a block of data bytes from one location to another. By reducing the byte counter to 8 bits, there is enough room to load all the subroutine parameters into the 8008's register file. Interruptson the 8008 are only partially implemented. After the INT line is asserted, the 8008 acknowledges the interrupt by outputting a state code of S0,S1,S2 = 011 at T1I time. At the subsequent instruction fetch cycle, an instruction is "jammed" (Intel's word) by external hardware on the bus. Typically this is a one-byte RST instruction. At this point, there is a problem. The 8008 has no provision to save itsarchitectural state. The 8008 can only write to memory via an address in the HL register pair. When interrupted, there is no mechanism to save HL so there is no way to save the other registers and flags via HL. Because of this, some sort of external memory device such as a hardwarestackor a pair of read/writeregistersmust be attached to the 8008 via the I/O ports to help save the state of the 8008.[20]
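As an illustration of the copy loop described above, the following Python model mimics its structure: every memory access goes through an address held in H and L, data moves through the single accumulator A, and the 16-bit byte count is kept in B and C. This is only a sketch standing in for the assembly (it is not the original MEMCPY listing, and the register handling simply follows the description above).

# Illustrative model (not 8008 assembly) of the MEMCPY flow described above.
def memcpy_8008_model(memory, src, dst, cnt):
    """Copy cnt bytes from src to dst in a 16 KB memory image, one byte at a time."""
    B, C = (cnt >> 8) & 0xFF, cnt & 0xFF              # 16-bit counter split across B and C
    while (B << 8) | C:
        H, L = (src >> 8) & 0x3F, src & 0xFF          # load HL with the 14-bit source address
        A = memory[(H << 8) | L]                      # A <- M: indirect load through HL
        H, L = (dst >> 8) & 0x3F, dst & 0xFF          # reload HL with the destination address
        memory[(H << 8) | L] = A                      # M <- A: indirect store through HL
        src, dst = src + 1, dst + 1
        count = ((B << 8) | C) - 1                    # decrement the 16-bit count in B and C
        B, C = (count >> 8) & 0xFF, count & 0xFF
    return memory

mem = bytearray(16 * 1024)
mem[0o200:0o200 + 4] = b"8008"                        # 4 bytes at the source (octal addresses)
memcpy_8008_model(mem, 0o200, 0o400, 4)
print(bytes(mem[0o400:0o400 + 4]))                    # b'8008'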
https://en.wikipedia.org/wiki/INTERP/8
Numeralornumber prefixesareprefixesderived fromnumeralsor occasionally othernumbers. In English and many other languages, they are used to coin numerous series of words. For example: In many European languages there are two principal systems, taken fromLatinandGreek, each with several subsystems; in addition,Sanskritoccupies a marginal position.[B]There is also an international set ofmetric prefixes, which are used in the world'sstandard measurement system. In the following prefixes, a final vowel is normally dropped before a root that begins with a vowel, with the exceptions ofbi-,which is extended tobis-before a vowel; among the othermonosyllables,du-,di-,dvi-, andtri-, never vary. Words in thecardinalcategory arecardinal numbers, such as the Englishone,two,three, which name the count of items in a sequence. Themultiplecategory areadverbialnumbers, like the Englishonce,twice,thrice, that specify the number of events or instances of otherwise identical or similar items. Enumeration with thedistributivecategory originally was meant to specifyone each,two eachorone by one,two by two, etc., giving how many items of each type are desired or had been found, although distinct word forms for that meaning are now mostly lost. Theordinalcategory are based onordinal numberssuch as the Englishfirst,second,third, which specify position of items in a sequence. In Latin and Greek, the ordinal forms are also used for fractions for amounts higher than 2; only the fraction⁠1/2⁠has special forms. The same suffix may be used with more than one category of number, as for example the orginary numbers secondaryand tertiaryand the distributive numbers binaryand ternary. For the hundreds, there are competing forms: Those in-gent-, from the original Latin, and those in-cent-, derived fromcenti-, etc. plus the prefixes for 1 through 9 . Many of the items in the following tables are not in general use, but may rather be regarded as coinages by individuals. In scientific contexts, eitherscientific notationorSI prefixesare used to express very large or very small numbers, and not unwieldy prefixes. (buthybridhexadecimal) Because of the common inheritance of Greek and Latin roots across theRomance languages, the import of much of that derived vocabulary into non-Romance languages (such as intoEnglishviaNorman French), and theborrowingof 19th and 20th century coinages into many languages, the same numerical prefixes occur in many languages. Numerical prefixes are not restricted to denoting integers. Some of the SI prefixes denote negative powers of 10, i.e. division by a multiple of 10 rather than multiplication by it. Several common-use numerical prefixes denotevulgar fractions. Words containing non-technical numerical prefixes are usually not hyphenated. This is not an absolute rule, however, and there are exceptions (for example:quarter-deckoccurs in addition toquarterdeck). There are no exceptions for words comprising technical numerical prefixes, though.Systematic namesand words comprisingSI prefixesand binary prefixes are not hyphenated, by definition. Nonetheless, for clarity, dictionaries list numerical prefixes in hyphenated form, to distinguish the prefixes from words with the same spellings (such asduo-andduo). Several technical numerical prefixes are not derived from words for numbers. (mega-is not derived from a number word, for example.) Similarly, some are only derived from words for numbers inasmuch as they areword play. (Peta-is word play onpenta-, for example. See its etymology for details.) 
Themetric prefixespeta, exa, zetta, yotta, ronna, and quetta are based on the Ancient Greek or Ancient Latin numbers from 5 to 10, referring to the fifth through tenth powers of1000. The initial letter h has been removed from some of these stems and the initial letters z, y, r, and q have been added, ascending in reverse alphabetical order, to avoid confusion with other metric prefixes. The root language of a numerical prefix need not be related to the root language of the word that it prefixes. Some words comprising numerical prefixes arehybrid words. In certain classes of systematic names, there are a few other exceptions to the rule of using Greek-derived numerical prefixes. TheIUPAC nomenclature of organic chemistry, for example, uses the numerical prefixes derived from Greek, except for the prefix for 9 (as mentioned) and the prefixes from 1 to 4 (meth-, eth-, prop-, and but-), which are not derived from words for numbers. These prefixes were invented by the IUPAC, deriving them from the pre-existing names for several compounds that it was intended to preserve in the new system:methane(viamethyl, which is in turn from the Greek word for wine),ethane(fromethylcoined byJustus von Liebigin 1834),propane(frompropionic, which is in turn frompro-and the Greek word for fat), andbutane(frombutyl, which is in turn frombutyric, which is in turn from the Latin word for butter).
https://en.wikipedia.org/wiki/Numeral_prefix
TheInternational Network for Social Network Analysis(INSNA) is a professionalacademic associationof researchers and practitioners ofsocial network analysis.[1][2] INSNA was founded in 1977 byBarry Wellman, asociologist. A key function of the organization was to provide a sense of identity for a set of researchers who were widely dispersed geographically and across scientific disciplines.[3] Shortly after INSNA was founded,Linton C. Freemanfounded the association's flagship journal,Social Networks, in 1978.[4] Early meetings were invitation-only, but in 1980H. Russell BernardandAlvin Wolfeinaugurated the series of annual "Sunbelt" meetings open to all.[5] A full chronology of INSNA leadership is as follows:[citation needed] As of 2018, INSNA has approximately 1,000 active members, while the SOCNET[6]listserv has about 3700 subscribers.[7] As well as publishing a triannual journalConnectionson the subject, INSNA also:
https://en.wikipedia.org/wiki/International_Network_for_Social_Network_Analysis
Inmathematical optimization,Dantzig'ssimplex algorithm(orsimplex method) is a popularalgorithmforlinear programming.[1] The name of the algorithm is derived from the concept of asimplexand was suggested byT. S. Motzkin.[2]Simplices are not actually used in the method, but one interpretation of it is that it operates on simplicialcones, and these become proper simplices with an additional constraint.[3][4][5][6]The simplicial cones in question are the corners (i.e., the neighborhoods of the vertices) of a geometric object called apolytope. The shape of this polytope is defined by theconstraintsapplied to the objective function. George Dantzigworked on planning methods for the US Army Air Force during World War II using adesk calculator. During 1946, his colleague challenged him to mechanize the planning process to distract him from taking another job. Dantzig formulated the problem as linear inequalities inspired by the work ofWassily Leontief, however, at that time he didn't include an objective as part of his formulation. Without an objective, a vast number of solutions can be feasible, and therefore to find the "best" feasible solution, military-specified "ground rules" must be used that describe how goals can be achieved as opposed to specifying a goal itself. Dantzig's core insight was to realize that most such ground rules can be translated into a linear objective function that needs to be maximized.[7]Development of the simplex method was evolutionary and happened over a period of about a year.[8] After Dantzig included an objective function as part of his formulation during mid-1947, the problem was mathematically more tractable. Dantzig realized that one of the unsolved problems thathe had mistakenas homework in his professorJerzy Neyman's class (and actually later solved), was applicable to finding an algorithm for linear programs. This problem involved finding the existence ofLagrange multipliersfor general linear programs over a continuum of variables, each bounded between zero and one, and satisfying linear constraints expressed in the form ofLebesgue integrals. Dantzig later published his "homework" as a thesis to earn his doctorate. The column geometry used in this thesis gave Dantzig insight that made him believe that the Simplex method would be very efficient.[9] The simplex algorithm operates on linear programs in thecanonical form withc=(c1,…,cn){\displaystyle \mathbf {c} =(c_{1},\,\dots ,\,c_{n})}the coefficients of the objective function,(⋅)T{\displaystyle (\cdot )^{\mathrm {T} }}is thematrix transpose, andx=(x1,…,xn){\displaystyle \mathbf {x} =(x_{1},\,\dots ,\,x_{n})}are the variables of the problem,A{\displaystyle A}is ap×nmatrix, andb=(b1,…,bp){\displaystyle \mathbf {b} =(b_{1},\,\dots ,\,b_{p})}. There is a straightforward process to convert any linear program into one in standard form, so using this form of linear programs results in no loss of generality. In geometric terms, thefeasible regiondefined by all values ofx{\displaystyle \mathbf {x} }such thatAx≤b{\textstyle A\mathbf {x} \leq \mathbf {b} }and∀i,xi≥0{\displaystyle \forall i,x_{i}\geq 0}is a (possibly unbounded)convex polytope. An extreme point or vertex of this polytope is known asbasic feasible solution(BFS). 
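For concreteness, the canonical maximization form referred to above can be stated explicitly in the article's own notation (a restatement, using the conventions already defined for c, A, b and x):

\text{maximize}\quad \mathbf{c}^{\mathrm{T}} \mathbf{x}
\qquad \text{subject to}\quad A\mathbf{x} \le \mathbf{b}, \quad x_{i} \ge 0 \ \text{ for all } i.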
It can be shown that for a linear program in standard form, if the objective function has a maximum value on the feasible region, then it has this value on (at least) one of the extreme points.[10]This in itself reduces the problem to a finite computation since there is a finite number of extreme points, but the number of extreme points is unmanageably large for all but the smallest linear programs.[11] It can also be shown that, if an extreme point is not a maximum point of the objective function, then there is an edge containing the point so that the value of the objective function is strictly increasing on the edge moving away from the point.[12]If the edge is finite, then the edge connects to another extreme point where the objective function has a greater value, otherwise the objective function is unbounded above on the edge and the linear program has no solution. The simplex algorithm applies this insight by walking along edges of the polytope to extreme points with greater and greater objective values. This continues until the maximum value is reached, or an unbounded edge is visited (concluding that the problem has no solution). The algorithm always terminates because the number of vertices in the polytope is finite; moreover since we jump between vertices always in the same direction (that of the objective function), we hope that the number of vertices visited will be small.[12] The solution of a linear program is accomplished in two steps. In the first step, known as Phase I, a starting extreme point is found. Depending on the nature of the program this may be trivial, but in general it can be solved by applying the simplex algorithm to a modified version of the original program. The possible results of Phase I are either that a basic feasible solution is found or that the feasible region is empty. In the latter case the linear program is calledinfeasible. In the second step, Phase II, the simplex algorithm is applied using the basic feasible solution found in Phase I as a starting point. The possible results from Phase II are either an optimum basic feasible solution or an infinite edge on which the objective function is unbounded above.[13][14][15] The transformation of a linear program to one in standard form may be accomplished as follows.[16]First, for each variable with a lower bound other than 0, a new variable is introduced representing the difference between the variable and bound. The original variable can then be eliminated by substitution. For example, given the constraint a new variable,y1{\displaystyle y_{1}}, is introduced with The second equation may be used to eliminatex1{\displaystyle x_{1}}from the linear program. In this way, all lower bound constraints may be changed to non-negativity restrictions. Second, for each remaining inequality constraint, a new variable, called aslack variable, is introduced to change the constraint to an equality constraint. This variable represents the difference between the two sides of the inequality and is assumed to be non-negative. For example, the inequalities are replaced with It is much easier to perform algebraic manipulation on inequalities in this form. In inequalities where ≥ appears such as the second one, some authors refer to the variable introduced as asurplus variable. Third, each unrestricted variable is eliminated from the linear program. This can be done in two ways, one is by solving for the variable in one of the equations in which it appears and then eliminating the variable by substitution. 
The other is to replace the variable with the difference of two restricted variables. For example, ifz1{\displaystyle z_{1}}is unrestricted then write The equation may be used to eliminatez1{\displaystyle z_{1}}from the linear program. When this process is complete the feasible region will be in the form It is also useful to assume that the rank ofA{\displaystyle \mathbf {A} }is the number of rows. This results in no loss of generality since otherwise either the systemAx=b{\displaystyle \mathbf {A} \mathbf {x} =\mathbf {b} }has redundant equations which can be dropped, or the system is inconsistent and the linear program has no solution.[17] A linear program in standard form can be represented as atableauof the form The first row defines the objective function and the remaining rows specify the constraints. The zero in the first column represents the zero vector of the same dimension as the vectorb{\displaystyle \mathbf {b} }(different authors use different conventions as to the exact layout). If the columns ofA{\displaystyle \mathbf {A} }can be rearranged so that it contains theidentity matrixof orderp{\displaystyle p}(the number of rows inA{\displaystyle \mathbf {A} }) then the tableau is said to be incanonical form.[18]The variables corresponding to the columns of the identity matrix are calledbasic variableswhile the remaining variables are callednonbasicorfree variables. If the values of the nonbasic variables are set to 0, then the values of the basic variables are easily obtained as entries inb{\displaystyle \mathbf {b} }and this solution is a basic feasible solution. The algebraic interpretation here is that the coefficients of the linear equation represented by each row are either0{\displaystyle 0},1{\displaystyle 1}, or some other number. Each row will have1{\displaystyle 1}column with value1{\displaystyle 1},p−1{\displaystyle p-1}columns with coefficients0{\displaystyle 0}, and the remaining columns with some other coefficients (these other variables represent our non-basic variables). By setting the values of the non-basic variables to zero we ensure in each row that the value of the variable represented by a1{\displaystyle 1}in its column is equal to theb{\displaystyle b}value at that row. Conversely, given a basic feasible solution, the columns corresponding to the nonzero variables can be expanded to a nonsingular matrix. If the corresponding tableau is multiplied by the inverse of this matrix then the result is a tableau in canonical form.[19] Let be a tableau in canonical form. Additionalrow-addition transformationscan be applied to remove the coefficientscTBfrom the objective function. This process is calledpricing outand results in a canonical tableau wherezBis the value of the objective function at the corresponding basic feasible solution. The updated coefficients, also known asrelative cost coefficients, are the rates of change of the objective function with respect to the nonbasic variables.[14] The geometrical operation of moving from a basic feasible solution to an adjacent basic feasible solution is implemented as apivot operation. First, a nonzeropivot elementis selected in a nonbasic column. The row containing this element ismultipliedby its reciprocal to change this element to 1, and then multiples of the row are added to the other rows to change the other entries in the column to 0. The result is that, if the pivot element is in a rowr, then the column becomes ther-th column of the identity matrix. 
The variable for this column is now a basic variable, replacing the variable which corresponded to ther-th column of the identity matrix before the operation. In effect, the variable corresponding to the pivot column enters the set of basic variables and is called theentering variable, and the variable being replaced leaves the set of basic variables and is called theleaving variable. The tableau is still in canonical form but with the set of basic variables changed by one element.[13][14] Let a linear program be given by a canonical tableau. The simplex algorithm proceeds by performing successive pivot operations each of which give an improved basic feasible solution; the choice of pivot element at each step is largely determined by the requirement that this pivot improves the solution. Since the entering variable will, in general, increase from 0 to a positive number, the value of the objective function will decrease if the derivative of the objective function with respect to this variable is negative. Equivalently, the value of the objective function is increased if the pivot column is selected so that the corresponding entry in the objective row of the tableau is positive. If there is more than one column so that the entry in the objective row is positive then the choice of which one to add to the set of basic variables is somewhat arbitrary and severalentering variable choice rules[20]such asDevex algorithm[21]have been developed. If all the entries in the objective row are less than or equal to 0 then no choice of entering variable can be made and the solution is in fact optimal. It is easily seen to be optimal since the objective row now corresponds to an equation of the form By changing the entering variable choice rule so that it selects a column where the entry in the objective row is negative, the algorithm is changed so that it finds the minimum of the objective function rather than the maximum. Once the pivot column has been selected, the choice of pivot row is largely determined by the requirement that the resulting solution be feasible. First, only positive entries in the pivot column are considered since this guarantees that the value of the entering variable will be nonnegative. If there are no positive entries in the pivot column then the entering variable can take any non-negative value with the solution remaining feasible. In this case the objective function is unbounded below and there is no minimum. Next, the pivot row must be selected so that all the other basic variables remain positive. A calculation shows that this occurs when the resulting value of the entering variable is at a minimum. In other words, if the pivot column isc, then the pivot rowris chosen so that is the minimum over allrso thatarc> 0. This is called theminimum ratio test.[20]If there is more than one row for which the minimum is achieved then adropping variable choice rule[22]can be used to make the determination. Consider the linear program With the addition of slack variablessandt, this is represented by the canonical tableau where columns 5 and 6 represent the basic variablessandtand the corresponding basic feasible solution is Columns 2, 3, and 4 can be selected as pivot columns, for this example column 4 is selected. The values ofzresulting from the choice of rows 2 and 3 as pivot rows are 10/1 = 10 and 15/3 = 5 respectively. Of these the minimum is 5, so row 3 must be the pivot row. 
Performing the pivot produces Now columns 4 and 5 represent the basic variableszandsand the corresponding basic feasible solution is For the next step, there are no positive entries in the objective row and in fact so the minimum value ofZis −20. In general, a linear program will not be given in the canonical form and an equivalent canonical tableau must be found before the simplex algorithm can start. This can be accomplished by the introduction ofartificial variables. Columns of the identity matrix are added as column vectors for these variables. If the b value for a constraint equation is negative, the equation is negated before adding the identity matrix columns. This does not change the set of feasible solutions or the optimal solution, and it ensures that the slack variables will constitute an initial feasible solution. The new tableau is in canonical form but it is not equivalent to the original problem. So a new objective function, equal to the sum of the artificial variables, is introduced and the simplex algorithm is applied to find the minimum; the modified linear program is called thePhase Iproblem.[23] The simplex algorithm applied to the Phase I problem must terminate with a minimum value for the new objective function since, being the sum of nonnegative variables, its value is bounded below by 0. If the minimum is 0 then the artificial variables can be eliminated from the resulting canonical tableau producing a canonical tableau equivalent to the original problem. The simplex algorithm can then be applied to find the solution; this step is calledPhase II. If the minimum is positive then there is no feasible solution for the Phase I problem where the artificial variables are all zero. This implies that the feasible region for the original problem is empty, and so the original problem has no solution.[13][14][24] Consider the linear program It differs from the previous example by having equality instead of inequality constraints. The previous solutionx=y=0,z=5{\displaystyle x=y=0\,,z=5}violates the first constraint. This new problem is represented by the (non-canonical) tableau Introduce artificial variablesuandvand objective functionW=u+v, giving a new tableau The equation defining the original objective function is retained in anticipation of Phase II. By construction,uandvare both basic variables since they are part of the initial identity matrix. However, the objective functionWcurrently assumes thatuandvare both 0. In order to adjust the objective function to be the correct value whereu= 10 andv= 15, add the third and fourth rows to the first row giving Select column 5 as a pivot column, so the pivot row must be row 4, and the updated tableau is Now select column 3 as a pivot column, for which row 3 must be the pivot row, to get The artificial variables are now 0 and they may be dropped giving a canonical tableau equivalent to the original problem: This is, fortuitously, already optimal and the optimum value for the original linear program is −130/7. This value is "worse" than -20 which is to be expected for a problem which is more constrained. The tableau form used above to describe the algorithm lends itself to an immediate implementation in which the tableau is maintained as a rectangular (m+ 1)-by-(m+n+ 1) array. It is straightforward to avoid storing the m explicit columns of the identity matrix that will occur within the tableau by virtue ofBbeing a subset of the columns of [A,I]. This implementation is referred to as the "standardsimplex algorithm". 
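As a concrete illustration of the standard (dense-tableau) simplex method just described, here is a minimal Python sketch. The objective coefficients below are an assumption chosen to be consistent with the numbers quoted in the worked example (right-hand sides 10 and 15, pivot ratios 10/1 and 15/3, optimum Z = −20 at x = y = 0, z = 5); the code is a teaching sketch and omits anti-cycling safeguards such as Bland's rule and an explicit unboundedness check.

# Minimal dense-tableau simplex sketch (assumed instance, see the note above).
import numpy as np

def simplex_max(c, A, b):
    """Solve max c^T x subject to A x <= b, x >= 0 with the standard tableau simplex."""
    m, n = A.shape
    # Tableau: m constraint rows [A | I | b], then the objective row [-c | 0 | 0].
    T = np.zeros((m + 1, n + m + 1))
    T[:m, :n] = A
    T[:m, n:n + m] = np.eye(m)
    T[:m, -1] = b
    T[m, :n] = -c                          # reduced costs; a negative entry means we can improve
    basis = list(range(n, n + m))          # the slack variables form the initial basis
    while True:
        col = int(np.argmin(T[m, :-1]))    # entering column: most negative reduced cost
        if T[m, col] >= -1e-9:
            break                          # no improving column: current solution is optimal
        ratios = [T[i, -1] / T[i, col] if T[i, col] > 1e-9 else np.inf for i in range(m)]
        row = int(np.argmin(ratios))       # minimum ratio test picks the leaving row
        T[row] /= T[row, col]              # pivot: scale the pivot row ...
        for i in range(m + 1):
            if i != row:
                T[i] -= T[i, col] * T[row] # ... and eliminate the pivot column elsewhere
        basis[row] = col
    x = np.zeros(n + m)
    for i, bi in enumerate(basis):
        x[bi] = T[i, -1]
    return x[:n], T[m, -1]                 # optimal point and optimal objective value

c = np.array([2.0, 3.0, 4.0])              # maximize 2x+3y+4z, i.e. minimize Z = -2x-3y-4z (assumed)
A = np.array([[3.0, 2.0, 1.0], [2.0, 5.0, 3.0]])
b = np.array([10.0, 15.0])
x_opt, val = simplex_max(c, A, b)
print(x_opt, -val)                         # expect x = y = 0, z = 5 and Z = -20

On this instance the loop performs a single pivot on the z column, mirroring the pivot chosen in the example above, and reports the optimum −20.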
The storage and computation overhead is such that the standard simplex method is a prohibitively expensive approach to solving large linear programming problems. In each simplex iteration, the only data required are the first row of the tableau, the (pivotal) column of the tableau corresponding to the entering variable and the right-hand-side. The latter can be updated using the pivotal column and the first row of the tableau can be updated using the (pivotal) row corresponding to the leaving variable. Both the pivotal column and pivotal row may be computed directly using the solutions of linear systems of equations involving the matrixBand a matrix-vector product usingA. These observations motivate the "revisedsimplex algorithm", for which implementations are distinguished by their invertible representation ofB.[25] In large linear-programming problemsAis typically asparse matrixand, when the resulting sparsity ofBis exploited when maintaining its invertible representation, the revised simplex algorithm is much more efficient than the standard simplex method. Commercial simplex solvers are based on the revised simplex algorithm.[24][25][26][27][28] If the values of all basic variables are strictly positive, then a pivot must result in an improvement in the objective value. When this is always the case no set of basic variables occurs twice and the simplex algorithm must terminate after a finite number of steps. Basic feasible solutions where at least one of thebasicvariables is zero are calleddegenerateand may result in pivots for which there is no improvement in the objective value. In this case there is no actual change in the solution but only a change in the set of basic variables. When several such pivots occur in succession, there is no improvement; in large industrial applications, degeneracy is common and such "stalling" is notable. Worse than stalling is the possibility the same set of basic variables occurs twice, in which case, the deterministic pivoting rules of the simplex algorithm will produce an infinite loop, or "cycle". While degeneracy is the rule in practice and stalling is common, cycling is rare in practice. A discussion of an example of practical cycling occurs inPadberg.[24]Bland's ruleprevents cycling and thus guarantees that the simplex algorithm always terminates.[24][29][30]Another pivoting algorithm, thecriss-cross algorithmnever cycles on linear programs.[31] History-based pivot rules such asZadeh's ruleandCunningham's rulealso try to circumvent the issue of stalling and cycling by keeping track of how often particular variables are being used and then favor such variables that have been used least often. The simplex method is remarkably efficient in practice and was a great improvement over earlier methods such asFourier–Motzkin elimination. However, in 1972,Kleeand Minty[32]gave an example, theKlee–Minty cube, showing that the worst-case complexity of simplex method as formulated by Dantzig isexponential time. Since then, for almost every variation on the method, it has been shown that there is a family of linear programs for which it performs badly. It is an open question if there is a variation withpolynomial time, although sub-exponential pivot rules are known.[33] In 2014, it was proved[citation needed]that a particular variant of the simplex method isNP-mighty, i.e., it can be used to solve, with polynomial overhead, any problem in NP implicitly during the algorithm's execution. 
Moreover, deciding whether a given variable ever enters the basis during the algorithm's execution on a given input, and determining the number of iterations needed for solving a given problem, are bothNP-hardproblems.[34]At about the same time it was shown that there exists an artificial pivot rule for which computing its output isPSPACE-complete.[35]In 2015, this was strengthened to show that computing the output of Dantzig's pivot rule isPSPACE-complete.[36] Analyzing and quantifying the observation that the simplex algorithm is efficient in practice despite its exponential worst-case complexity has led to the development of other measures of complexity. The simplex algorithm has polynomial-timeaverage-case complexityunder variousprobability distributions, with the precise average-case performance of the simplex algorithm depending on the choice of a probability distribution for therandom matrices.[37][38]Another approach to studying "typical phenomena" usesBaire category theoryfromgeneral topology, and to show that (topologically) "most" matrices can be solved by the simplex algorithm in a polynomial number of steps.[citation needed] Another method to analyze the performance of the simplex algorithm studies the behavior of worst-case scenarios under small perturbation – are worst-case scenarios stable under a small change (in the sense ofstructural stability), or do they become tractable? This area of research, calledsmoothed analysis, was introduced specifically to study the simplex method. Indeed, the running time of the simplex method on input with noise is polynomial in the number of variables and the magnitude of the perturbations.[39][40] Other algorithms for solving linear-programming problems are described in thelinear-programmingarticle. Another basis-exchange pivoting algorithm is thecriss-cross algorithm.[41][42]There are polynomial-time algorithms for linear programming that use interior point methods: these includeKhachiyan'sellipsoidal algorithm,Karmarkar'sprojective algorithm, andpath-following algorithms.[15]TheBig-M methodis an alternative strategy for solving a linear program, using a single-phase simplex. Linear–fractional programming(LFP) is a generalization oflinear programming(LP). In LP the objective function is alinear function, while the objective function of a linear–fractional program is a ratio of two linear functions. In other words, a linear program is a fractional–linear program in which the denominator is the constant function having the value one everywhere. A linear–fractional program can be solved by a variant of the simplex algorithm[43][44][45][46]or by thecriss-cross algorithm.[47] These introductions are written for students ofcomputer scienceandoperations research:
https://en.wikipedia.org/wiki/Simplex_algorithm
Hardware-based full disk encryption(FDE) is available from manyhard disk drive(HDD/SSD) vendors, including:Hitachi, Integral Memory, iStorage Limited,Micron,Seagate Technology,Samsung,Toshiba,Viasat UK, andWestern Digital. Thesymmetric encryption keyis maintained independently from the computer'sCPU, thus allowing the complete data store to be encrypted and removing computer memory as a potential attack vector. Hardware-FDE has two major components: the hardware encryptor and the data store. There are currently four varieties of hardware-FDE in common use: Hardware designed for a particular purpose can often achieve better performance thandisk encryption software, and disk encryption hardware can be made more transparent to software than encryption done in software. As soon as the key has been initialised, the hardware should in principle be completely transparent to the OS and thus work with any OS. If the disk encryption hardware is integrated with the media itself the media may be designed for better integration. One example of such design would be through the use of physical sectors slightly larger than the logical sectors. Usually referred to asself-encrypting drive(SED). HDD FDE is made by HDD vendors using theOPALand Enterprise standards developed by theTrusted Computing Group.[1]Key managementtakes place within the hard disk controller and encryption keys are 128 or 256bitAdvanced Encryption Standard(AES) keys.Authenticationon power up of the drive must still take place within theCPUvia either asoftwarepre-boot authenticationenvironment (i.e., with asoftware-based full disk encryptioncomponent - hybrid full disk encryption) or with aBIOSpassword. In additions, some SEDs supportIEEE 1667standard.[2] Hitachi,Micron,Seagate,Samsung, andToshibaare the disk drive manufacturers offeringTrusted Computing GroupOpal Storage SpecificationSerial ATAdrives. HDDs have become a commodity so SED allow drive manufacturers to maintain revenue.[3]Older technologies include the proprietary Seagate DriveTrust, and the older, and less secure,PATASecurity command standard shipped by all drive makers includingWestern Digital. Enterprise SAS versions of the TCG standard are called "TCG Enterprise" drives. Within a standardhard drive form factorcase the encryptor (BC),keystore and a smaller form factor, commercially available, hard disk drive is enclosed. Examples includeViasat UK (formerly Stonewood Electronics)with their FlagStone, Eclypt[4]and DARC-ssd[5]drives or GuardDisk[6]with anRFIDtoken. The insertedhard driveFDE allows a standardform factorhard disk driveto be inserted into it. The concept can be seen on[7] The encryptor bridge and chipset (BC) is placed between the computer and the standard hard disk drive, encrypting every sector written to it. Intelannounced the release of the Danbury chipset[9]but has since abandoned this approach.[citation needed] Hardware-based encryption when built into the drive or within the drive enclosure is notably transparent to the user. The drive, except for bootup authentication, operates just like any drive, with no degradation in performance. There is no complication or performance overhead, unlikedisk encryption software, since all the encryption is invisible to theoperating systemand the hostcomputer's processor. The two main use cases areData at restprotection, and Cryptographic Disk Erasure. For Data at rest protection a computer or laptop is simply powered off. The disk now self-protects all the data on it. 
The data is safe because all of it, even the OS, is now encrypted, with a secure mode ofAES, and locked from reading and writing. The drive requires an authentication code which can be as strong as 32bytes (256bits) to unlock. Crypto-shreddingis the practice of 'deleting' data by (only) deleting or overwriting the encryption keys. When a cryptographic disk erasure (or crypto erase) command is given (with proper authentication credentials), the drive self-generates a new media encryption key and goes into a 'new drive' state.[10]Without the old key, the old data becomes irretrievable and therefore an efficient means of providingdisk sanitisationwhich can be a lengthy (and costly) process. For example, an unencrypted and unclassified computer hard drive that requires sanitising to conform withDepartment of DefenseStandards must be overwritten 3+ times;[11]a one Terabyte Enterprise SATA3 disk would take many hours to complete this process. Although the use of fastersolid-state drives(SSD) technologies improves this situation, the take up by enterprise has so far been slow.[12]The problem will worsen as disk sizes increase every year. With encrypted drives a complete and secure data erasure action takes just a few milliseconds with a simple key change, so a drive can be safely repurposed very quickly. This sanitisation activity is protected in SEDs by the drive's own key management system built into the firmware in order to prevent accidental data erasure with confirmation passwords and secure authentications related to the original key required. Whenkeysare self-generated randomly, generally there is no method to store a copy to allowdata recovery. In this case protecting this data from accidental loss or theft is achieved through a consistent and comprehensive data backup policy. The other method is for user-defined keys, for some Enclosed hard disk drive FDE,[13]to be generated externally and then loaded into the FDE. Recent hardware models circumventsbootingfrom other devices and allowing access by using a dualMaster Boot Record(MBR) system whereby the MBR for the operating system and data files is all encrypted along with a special MBR which is required to boot theoperating system. In SEDs, all data requests are intercepted by theirfirmware, that does not allow decryption to take place unless the system has beenbootedfrom the special SEDoperating systemwhich then loads theMBRof the encrypted part of the drive. This works by having a separatepartition, hidden from view, which contains the proprietaryoperating systemfor the encryption management system. This means no other boot methods will allow access to the drive.[citation needed] Typically FDE, once unlocked, will remain unlocked as long as power is provided.[14]Researchers atUniversität Erlangen-Nürnberghave demonstrated a number of attacks based on moving the drive to another computer without cutting power.[14]Additionally, it may be possible to reboot the computer into an attacker-controlled operating system without cutting power to the drive. When a computer with a self-encrypting drive is put intosleep mode, the drive is powered down, but the encryption password is retained in memory so that the drive can be quickly resumed without requesting the password. An attacker can take advantage of this to gain easier physical access to the drive, for instance, by inserting extension cables.[14] The firmware of the drive may be compromised[15][16]and so any data that is sent to it may be at risk. 
Even if the data is encrypted on the physical medium of the drive, the fact that the firmware is controlled by a malicious third party means that it can be decrypted by that third party. If data is encrypted by the operating system, and it is sent in a scrambled form to the drive, then it would not matter whether the firmware is malicious or not. Hardware solutions have been criticised for being poorly documented. Many aspects of how the encryption is done are not published by the vendor. This leaves the user with little ability to judge the security of the product and potential attack methods. It also increases the risk of vendor lock-in. In addition, implementing system-wide hardware-based full disk encryption is prohibitive for many companies due to the high cost of replacing existing hardware. This makes migrating to hardware encryption technologies more difficult and would generally require a clear migration and central management solution for both hardware- and software-based full disk encryption solutions.[17] However, Enclosed hard disk drive FDE and Removable hard drive FDE are often installed on a single-drive basis.
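The time saving claimed earlier for cryptographic erasure over multi-pass overwriting is easy to estimate. The figures below are illustrative assumptions (a 1 TB drive and a sustained write rate of 150 MB/s are not values from this article; only the 3-pass overwrite requirement is quoted above).

# Rough comparison (illustrative figures) of overwrite sanitisation vs. cryptographic erase.
DRIVE_BYTES = 1 * 10**12          # assumed: 1 TB drive
WRITE_RATE  = 150 * 10**6         # assumed: 150 MB/s sustained sequential write
PASSES      = 3                   # 3+ overwrite passes, as mentioned above

overwrite_seconds = DRIVE_BYTES * PASSES / WRITE_RATE
print(f"3-pass overwrite: ~{overwrite_seconds / 3600:.1f} hours")   # roughly 5.6 hours

# A cryptographic erase only replaces the media encryption key inside the drive,
# so its duration is independent of capacity (on the order of milliseconds).
print("crypto erase: a few milliseconds, regardless of drive size")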
https://en.wikipedia.org/wiki/Disk_encryption_hardware
Inphysics, agauge theoryis a type offield theoryin which theLagrangian, and hence the dynamics of the system itself, does not change underlocal transformationsaccording to certain smooth families of operations (Lie groups). Formally, the Lagrangian isinvariantunder these transformations. The term "gauge" refers to any specific mathematical formalism to regulate redundantdegrees of freedomin the Lagrangian of a physical system. The transformations between possible gauges, calledgauge transformations, form a Lie group—referred to as thesymmetry groupor thegauge groupof the theory. Associated with any Lie group is theLie algebraofgroup generators. For each group generator there necessarily arises a correspondingfield(usually avector field) called thegauge field. Gauge fields are included in the Lagrangian to ensure its invariance under the local group transformations (calledgauge invariance). When such a theory isquantized, thequantaof the gauge fields are calledgauge bosons. If the symmetry group is non-commutative, then the gauge theory is referred to asnon-abeliangauge theory, the usual example being theYang–Mills theory. Many powerful theories in physics are described by Lagrangians that are invariant under some symmetry transformation groups. When they are invariant under a transformation identically performed ateverypointin thespacetimein which the physical processes occur, they are said to have aglobal symmetry.Local symmetry, the cornerstone of gauge theories, is a stronger constraint. In fact, a global symmetry is just a local symmetry whose group's parameters are fixed in spacetime (the same way a constant value can be understood as a function of a certain parameter, the output of which is always the same). Gauge theories are important as the successful field theories explaining the dynamics ofelementary particles.Quantum electrodynamicsis anabeliangauge theory with the symmetry groupU(1)and has one gauge field, theelectromagnetic four-potential, with thephotonbeing the gauge boson. TheStandard Modelis a non-abelian gauge theory with the symmetry group U(1) ×SU(2)×SU(3)and has a total of twelve gauge bosons: thephoton, threeweak bosonsand eightgluons. Gauge theories are also important in explaininggravitationin the theory ofgeneral relativity. Its case is somewhat unusual in that the gauge field is atensor, theLanczos tensor. Theories ofquantum gravity, beginning withgauge gravitation theory, also postulate the existence of a gauge boson known as thegraviton. Gauge symmetries can be viewed as analogues of theprinciple of general covarianceof general relativity in which the coordinate system can be chosen freely under arbitrarydiffeomorphismsof spacetime. Both gauge invariance and diffeomorphism invariance reflect a redundancy in the description of the system. An alternative theory of gravitation,gauge theory gravity, replaces the principle of general covariance with a true gauge principle with new gauge fields. Historically, these ideas were first stated in the context ofclassical electromagnetismand later ingeneral relativity. However, the modern importance of gauge symmetries appeared first in therelativistic quantum mechanicsofelectrons–quantum electrodynamics, elaborated on below. Today, gauge theories are useful incondensed matter,nuclearandhigh energy physicsamong other subfields. 
The concept and the name of gauge theory derives from the work ofHermann Weylin 1918.[1]Weyl, in an attempt to generalize the geometrical ideas ofgeneral relativityto includeelectromagnetism, conjectured thatEichinvarianzor invariance under the change ofscale(or "gauge") might also be a local symmetry of general relativity. After the development ofquantum mechanics, Weyl,Vladimir Fock[2]andFritz Londonreplaced the simple scale factor with acomplexquantity and turned the scale transformation into a change ofphase, which is aU(1)gauge symmetry. This explained theelectromagnetic fieldeffect on thewave functionof achargedquantum mechanicalparticle. Weyl's 1929 paper introduced the modern concept of gauge invariance[3]subsequently popularized byWolfgang Pauliin his 1941 review.[4]In retrospect,James Clerk Maxwell's formulation, in 1864–65, ofelectrodynamicsin "A Dynamical Theory of the Electromagnetic Field" suggested the possibility of invariance, when he stated that any vector field whose curl vanishes—and can therefore normally be written as agradientof a function—could be added to the vector potential without affecting themagnetic field. Similarly unnoticed,David Hilberthad derived theEinstein field equationsby postulating the invariance of theactionunder a general coordinate transformation. The importance of these symmetry invariances remained unnoticed until Weyl's work. Inspired by Pauli's descriptions of connection between charge conservation and field theory driven by invariance,Chen Ning Yangsought a field theory foratomic nucleibinding based on conservation of nuclearisospin.[5]: 202In 1954, Yang andRobert Millsgeneralized the gauge invariance of electromagnetism, constructing a theory based on the action of the (non-abelian) SU(2) symmetrygroupon theisospindoublet ofprotonsandneutrons.[6]This is similar to the action of theU(1)group on thespinorfieldsofquantum electrodynamics. TheYang–Mills theorybecame the prototype theory to resolve some of the confusion inelementary particle physics. This idea later found application in thequantum field theoryof theweak force, and its unification with electromagnetism in theelectroweaktheory. Gauge theories became even more attractive when it was realized that non-abelian gauge theories reproduced a feature calledasymptotic freedom. Asymptotic freedom was believed to be an important characteristic of strong interactions. This motivated searching for a strong force gauge theory. This theory, now known asquantum chromodynamics, is a gauge theory with the action of the SU(3) group on thecolortriplet ofquarks. TheStandard Modelunifies the description of electromagnetism, weak interactions and strong interactions in the language of gauge theory. In the 1970s,Michael Atiyahbegan studying the mathematics of solutions to the classicalYang–Millsequations. In 1983, Atiyah's studentSimon Donaldsonbuilt on this work to show that thedifferentiableclassification ofsmooth4-manifoldsis very different from their classificationup tohomeomorphism.[7]Michael Freedmanused Donaldson's work to exhibitexoticR4s, that is, exoticdifferentiable structuresonEuclidean4-dimensional space. This led to an increasing interest in gauge theory for its own sake, independent of its successes in fundamental physics. In 1994,Edward WittenandNathan Seiberginvented gauge-theoretic techniques based onsupersymmetrythat enabled the calculation of certaintopologicalinvariants[8][9](theSeiberg–Witten invariants). 
These contributions to mathematics from gauge theory have led to a renewed interest in this area. The importance of gauge theories in physics is exemplified in the success of the mathematical formalism in providing a unified framework to describe thequantum field theoriesofelectromagnetism, theweak forceand thestrong force. This theory, known as theStandard Model, accurately describes experimental predictions regarding three of the fourfundamental forcesof nature, and is a gauge theory with the gauge groupSU(3) × SU(2) × U(1). Modern theories likestring theory, as well asgeneral relativity, are, in one way or another, gauge theories. Inphysics, the mathematical description of any physical situation usually contains excessdegrees of freedom; the same physical situation is equally well described by many equivalent mathematical configurations. For instance, inNewtonian dynamics, if two configurations are related by aGalilean transformation(aninertialchange of reference frame) they represent the same physical situation. These transformations form agroupof "symmetries" of the theory, and a physical situation corresponds not to an individual mathematical configuration but to a class of configurations related to one another by this symmetry group. This idea can be generalized to include local as well as global symmetries, analogous to much more abstract "changes of coordinates" in a situation where there is no preferred "inertial" coordinate system that covers the entire physical system. A gauge theory is a mathematical model that has symmetries of this kind, together with a set of techniques for making physical predictions consistent with the symmetries of the model. When a quantity occurring in the mathematical configuration is not just a number but has some geometrical significance, such as a velocity or an axis of rotation, its representation as numbers arranged in a vector or matrix is also changed by a coordinate transformation. For instance, if one description of a pattern of fluid flow states that the fluid velocity in the neighborhood of (x= 1,y= 0) is 1 m/s in the positivexdirection, then a description of the same situation in which the coordinate system has been rotated clockwise by 90 degrees states that the fluid velocity in the neighborhood of (x= 0,y= −1) is 1 m/s in the negativeydirection. The coordinate transformation has affected both the coordinate system used to identify thelocationof the measurement and the basis in which itsvalueis expressed. As long as this transformation is performed globally (affecting the coordinate basis in the same way at every point), the effect on values that represent therate of changeof some quantity along some path in space and time as it passes through pointPis the same as the effect on values that are truly local toP. In order to adequately describe physical situations in more complex theories, it is often necessary to introduce a "coordinate basis" for some of the objects of the theory that do not have this simple relationship to the coordinates used to label points in space and time. (In mathematical terms, the theory involves afiber bundlein which the fiber at each point of the base space consists of possible coordinate bases for use when describing the values of objects at that point.) 
In order to spell out a mathematical configuration, one must choose a particular coordinate basis at each point (alocal sectionof the fiber bundle) and express the values of the objects of the theory (usually "fields" in the physicist's sense) using this basis. Two such mathematical configurations are equivalent (describe the same physical situation) if they are related by a transformation of this abstract coordinate basis (a change of local section, orgauge transformation). In most gauge theories, the set of possible transformations of the abstract gauge basis at an individual point in space and time is a finite-dimensional Lie group. The simplest such group isU(1), which appears in the modern formulation ofquantum electrodynamics (QED)via its use ofcomplex numbers. QED is generally regarded as the first, and simplest, physical gauge theory. The set of possible gauge transformations of the entire configuration of a given gauge theory also forms a group, thegauge groupof the theory. An element of the gauge group can be parameterized by a smoothly varying function from the points of spacetime to the (finite-dimensional) Lie group, such that the value of the function and its derivatives at each point represents the action of the gauge transformation on the fiber over that point. A gauge transformation with constant parameter at every point in space and time is analogous to a rigid rotation of the geometric coordinate system; it represents aglobal symmetryof the gauge representation. As in the case of a rigid rotation, this gauge transformation affects expressions that represent the rate of change along a path of some gauge-dependent quantity in the same way as those that represent a truly local quantity. A gauge transformation whose parameter isnota constant function is referred to as alocal symmetry; its effect on expressions that involve aderivativeis qualitatively different from that on expressions that do not. (This is analogous to a non-inertial change of reference frame, which can produce aCoriolis effect.) The "gauge covariant" version of a gauge theory accounts for this effect by introducing agauge field(in mathematical language, anEhresmann connection) and formulating all rates of change in terms of thecovariant derivativewith respect to this connection. The gauge field becomes an essential part of the description of a mathematical configuration. A configuration in which the gauge field can be eliminated by a gauge transformation has the property that itsfield strength(in mathematical language, itscurvature) is zero everywhere; a gauge theory isnotlimited to these configurations. In other words, the distinguishing characteristic of a gauge theory is that the gauge field does not merely compensate for a poor choice of coordinate system; there is generally no gauge transformation that makes the gauge field vanish. When analyzing thedynamicsof a gauge theory, the gauge field must be treated as a dynamical variable, similar to other objects in the description of a physical situation. In addition to itsinteractionwith other objects via the covariant derivative, the gauge field typically contributesenergyin the form of a "self-energy" term. One can obtain the equations for the gauge theory by: This is the sense in which a gauge theory "extends" a global symmetry to a local symmetry, and closely resembles the historical development of the gauge theory of gravity known asgeneral relativity. 
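The compensation described here can be written compactly. The display below is a sketch in one common convention (signs and the placement of the coupling constant g vary between references); ψ stands for a generic matter field in a representation of the gauge group and U(x) for a spacetime-dependent group element.

```latex
% One common convention (signs and the placement of the coupling g differ between texts):
\psi(x) \;\to\; U(x)\,\psi(x), \qquad
A_\mu(x) \;\to\; U(x)\,A_\mu(x)\,U^{-1}(x) + \frac{i}{g}\,U(x)\,\partial_\mu U^{-1}(x),
\qquad
D_\mu\psi \equiv \bigl(\partial_\mu - i g A_\mu\bigr)\psi
\;\;\Longrightarrow\;\;
D_\mu\psi \;\to\; U(x)\,D_\mu\psi .
```

The last relation is exactly the statement that rates of change formed with the covariant derivative transform like truly local quantities, which is what the gauge field is introduced to achieve.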
Gauge theories used to model the results of physical experiments engage in: We cannot express the mathematical descriptions of the "setup information" and the "possible measurement outcomes", or the "boundary conditions" of the experiment, without reference to a particular coordinate system, including a choice of gauge. One assumes an adequate experiment isolated from "external" influence that is itself a gauge-dependent statement. Mishandling gauge dependence calculations in boundary conditions is a frequent source ofanomalies, and approaches to anomaly avoidance classifies gauge theories[clarification needed]. The two gauge theories mentioned above, continuum electrodynamics and general relativity, are continuum field theories. The techniques of calculation in acontinuum theoryimplicitly assume that: Determination of the likelihood of possible measurement outcomes proceed by: These assumptions have enough validity across a wide range of energy scales and experimental conditions to allow these theories to make accurate predictions about almost all of the phenomena encountered in daily life: light, heat, and electricity, eclipses, spaceflight, etc. They fail only at the smallest and largest scales due to omissions in the theories themselves, and when the mathematical techniques themselves break down, most notably in the case ofturbulenceand otherchaoticphenomena. Other than these classical continuum field theories, the most widely known gauge theories arequantum field theories, includingquantum electrodynamicsand theStandard Modelof elementary particle physics. The starting point of a quantum field theory is much like that of its continuum analog: a gauge-covariantaction integralthat characterizes "allowable" physical situations according to theprinciple of least action. However, continuum and quantum theories differ significantly in how they handle the excess degrees of freedom represented by gauge transformations. Continuum theories, and most pedagogical treatments of the simplest quantum field theories, use agauge fixingprescription to reduce the orbit of mathematical configurations that represent a given physical situation to a smaller orbit related by a smaller gauge group (the global symmetry group, or perhaps even the trivial group). More sophisticated quantum field theories, in particular those that involve a non-abelian gauge group, break the gauge symmetry within the techniques ofperturbation theoryby introducing additional fields (theFaddeev–Popov ghosts) and counterterms motivated byanomaly cancellation, in an approach known asBRST quantization. While these concerns are in one sense highly technical, they are also closely related to the nature of measurement, the limits on knowledge of a physical situation, and the interactions between incompletely specified experimental conditions and incompletely understood physical theory.[citation needed]The mathematical techniques that have been developed in order to make gauge theories tractable have found many other applications, fromsolid-state physicsandcrystallographytolow-dimensional topology. Inelectrostatics, one can either discuss the electric field,E, or its correspondingelectric potential,V. Knowledge of one makes it possible to find the other, except that potentials differing by a constant,V↦V+C{\displaystyle V\mapsto V+C}, correspond to the same electric field. 
This is because the electric field relates tochangesin the potential from one point in space to another, and the constantCwould cancel out when subtracting to find the change in potential. In terms ofvector calculus, the electric field is thegradientof the potential,E=−∇V{\displaystyle \mathbf {E} =-\nabla V}. Generalizing from static electricity to electromagnetism, we have a second potential, thevector potentialA, with The general gauge transformations now become not justV↦V+C{\displaystyle V\mapsto V+C}but wherefis any twice continuously differentiable function that depends on position and time. The electromagnetic fields remain the same under the gauge transformation. The following illustrates how local gauge invariance can be "motivated" heuristically starting from global symmetry properties, and how it leads to an interaction between originally non-interacting fields. Consider a set ofn{\displaystyle n}non-interacting realscalar fields, with equal massesm. This system is described by anactionthat is the sum of the (usual) action for each scalar fieldφi{\displaystyle \varphi _{i}} The Lagrangian (density) can be compactly written as by introducing avectorof fields The term∂μΦ{\displaystyle \partial _{\mu }\Phi }is thepartial derivativeofΦ{\displaystyle \Phi }along dimensionμ{\displaystyle \mu }. It is now transparent that the Lagrangian is invariant under the transformation wheneverGis aconstantmatrixbelonging to then-by-northogonal groupO(n). This is seen to preserve the Lagrangian, since the derivative ofΦ′{\displaystyle \Phi '}transforms identically toΦ{\displaystyle \Phi }and both quantities appear insidedot productsin the Lagrangian (orthogonal transformations preserve the dot product). This characterizes theglobalsymmetry of this particular Lagrangian, and the symmetry group is often called thegauge group; the mathematical term isstructure group, especially in the theory ofG-structures. Incidentally,Noether's theoremimplies that invariance under this group of transformations leads to the conservation of thecurrents where theTamatrices aregeneratorsof the SO(n) group. There is one conserved current for every generator. Now, demanding that this Lagrangian should havelocalO(n)-invariance requires that theGmatrices (which were earlier constant) should be allowed to become functions of thespacetimecoordinatesx. In this case, theGmatrices do not "pass through" the derivatives, whenG=G(x), The failure of the derivative to commute with "G" introduces an additional term (in keeping with the product rule), which spoils the invariance of the Lagrangian. In order to rectify this we define a new derivative operator such that the derivative ofΦ′{\displaystyle \Phi '}again transforms identically withΦ{\displaystyle \Phi } This new "derivative" is called a(gauge) covariant derivativeand takes the form wheregis called the coupling constant; a quantity defining the strength of an interaction. After a simple calculation we can see that thegauge fieldA(x) must transform as follows The gauge field is an element of the Lie algebra, and can therefore be expanded as There are therefore as many gauge fields as there are generators of the Lie algebra. Finally, we now have alocally gauge invariantLagrangian Pauli uses the termgauge transformation of the first typeto mean the transformation ofΦ{\displaystyle \Phi }, while the compensating transformation inA{\displaystyle A}is called agauge transformation of the second type. 
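Several of the display equations this passage refers to did not survive extraction. The standard forms they correspond to are sketched below, in one common set of conventions (signs and factors may differ from the original notation).

```latex
% Electromagnetic potentials and their gauge freedom:
\mathbf{B} = \nabla \times \mathbf{A}, \qquad
\mathbf{E} = -\nabla V - \frac{\partial \mathbf{A}}{\partial t}, \qquad
V \mapsto V - \frac{\partial f}{\partial t}, \quad
\mathbf{A} \mapsto \mathbf{A} + \nabla f .

% Globally O(n)-invariant Lagrangian for the multiplet \Phi = (\varphi_1,\dots,\varphi_n)^{\mathsf T}
% and the covariant derivative introduced when the symmetry is made local:
\mathcal{L} = \tfrac{1}{2}\,\partial_\mu \Phi^{\mathsf T}\,\partial^\mu \Phi
            - \tfrac{1}{2}\, m^2\, \Phi^{\mathsf T}\Phi , \qquad
D_\mu \Phi = \partial_\mu \Phi + g\,A_\mu(x)\,\Phi , \qquad
A_\mu \;\to\; G\,A_\mu\,G^{-1} - \tfrac{1}{g}\,(\partial_\mu G)\,G^{-1} .
```

With this transformation rule, whenever Φ′ = G(x)Φ the covariant derivative transforms identically, D_μΦ′ = G(x) D_μΦ, which is what restores the invariance of the Lagrangian.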
The difference between this Lagrangian and the originalglobally gauge-invariantLagrangian is seen to be theinteraction Lagrangian This term introducesinteractionsbetween thenscalar fields just as a consequence of the demand for local gauge invariance. However, to make this interaction physical and not completely arbitrary, the mediatorA(x) needs to propagate in space. That is dealt with in the next section by adding yet another term,Lgf{\displaystyle {\mathcal {L}}_{\mathrm {gf} }}, to the Lagrangian. In thequantizedversion of the obtainedclassical field theory, thequantaof the gauge fieldA(x) are calledgauge bosons. The interpretation of the interaction Lagrangian in quantum field theory is ofscalarbosonsinteracting by the exchange of these gauge bosons. The picture of a classical gauge theory developed in the previous section is almost complete, except for the fact that to define the covariant derivativesD, one needs to know the value of the gauge fieldA(x){\displaystyle A(x)}at all spacetime points. Instead of manually specifying the values of this field, it can be given as the solution to a field equation. Further requiring that the Lagrangian that generates this field equation is locally gauge invariant as well, one possible form for the gauge field Lagrangian is where theFμνa{\displaystyle F_{\mu \nu }^{a}}are obtained from potentialsAμa{\displaystyle A_{\mu }^{a}}, being the components ofA(x){\displaystyle A(x)}, by and thefabc{\displaystyle f^{abc}}are thestructure constantsof the Lie algebra of the generators of the gauge group. This formulation of the Lagrangian is called aYang–Mills action. Other gauge invariant actions also exist (e.g.,nonlinear electrodynamics,Born–Infeld action,Chern–Simons model,theta term, etc.). In this Lagrangian term there is no field whose transformation counterweighs the one ofA{\displaystyle A}. Invariance of this term under gauge transformations is a particular case ofa prioriclassical (geometrical) symmetry. This symmetry must be restricted in order to perform quantization, the procedure being denominatedgauge fixing, but even after restriction, gauge transformations may be possible.[12] The complete Lagrangian for the gauge theory is now As a simple application of the formalism developed in the previous sections, consider the case ofelectrodynamics, with only theelectronfield. The bare-bones action that generates the electron field'sDirac equationis The global symmetry for this system is The gauge group here isU(1), just rotations of thephase angleof the field, with the particular rotation determined by the constantθ. "Localising" this symmetry implies the replacement ofθbyθ(x). An appropriate covariant derivative is then Identifying the "charge"e(not to be confused with the mathematical constantein the symmetry description) with the usualelectric charge(this is the origin of the usage of the term in gauge theories), and the gauge fieldA(x)with the four-vector potentialof theelectromagnetic fieldresults in an interaction Lagrangian whereJμ(x)=eℏψ¯(x)γμψ(x){\displaystyle J^{\mu }(x)={\frac {e}{\hbar }}{\bar {\psi }}(x)\gamma ^{\mu }\psi (x)}is the electric currentfour vectorin theDirac field. Thegauge principleis therefore seen to naturally introduce the so-calledminimal couplingof the electromagnetic field to the electron field. Adding a Lagrangian for the gauge fieldAμ(x){\displaystyle A_{\mu }(x)}in terms of thefield strength tensorexactly as in electrodynamics, one obtains the Lagrangian used as the starting point inquantum electrodynamics. 
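The field-strength and QED formulas quoted in this passage are standard; restated in natural units (ħ = c = 1) and in one common sign convention, they read as follows (a reconstruction of the conventional forms, not the original displays verbatim).

```latex
% Yang-Mills field strength and gauge-field Lagrangian:
F^{a}_{\mu\nu} = \partial_\mu A^{a}_{\nu} - \partial_\nu A^{a}_{\mu}
               + g\, f^{abc} A^{b}_{\mu} A^{c}_{\nu}, \qquad
\mathcal{L}_{\mathrm{gf}} = -\tfrac{1}{4}\, F^{a}_{\mu\nu} F^{a\,\mu\nu} .

% QED with a single electron field (natural units, one common sign convention):
S = \int \bar\psi \,\bigl(i\gamma^\mu \partial_\mu - m\bigr)\,\psi \;\mathrm{d}^4x, \qquad
\psi \to e^{\,i\theta(x)}\psi, \qquad
D_\mu = \partial_\mu + i e A_\mu, \qquad
A_\mu \to A_\mu - \tfrac{1}{e}\,\partial_\mu\theta ,

\mathcal{L}_{\mathrm{int}} = -\,e\,\bar\psi\,\gamma^\mu\psi\, A_\mu = -J^\mu A_\mu .
```

Replacing the ordinary derivative by the covariant one in the Dirac action produces exactly this interaction term, which is the minimal coupling referred to above.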
Gauge theories are usually discussed in the language ofdifferential geometry. Mathematically, agaugeis just a choice of a (local)sectionof someprincipal bundle. Agauge transformationis just a transformation between two such sections. Although gauge theory is dominated by the study ofconnections(primarily because it's mainly studied byhigh-energy physicists), the idea of a connection is not central to gauge theory in general. In fact, a result in general gauge theory shows thataffine representations(i.e., affinemodules) of the gauge transformations can be classified as sections of ajet bundlesatisfying certain properties. There are representations that transform covariantly pointwise (called by physicists gauge transformations of the first kind), representations that transform as aconnection form(called by physicists gauge transformations of the second kind, an affine representation)—and other more general representations, such as the B field inBF theory. There are more generalnonlinear representations(realizations), but these are extremely complicated. Still,nonlinear sigma modelstransform nonlinearly, so there are applications. If there is aprincipal bundlePwhosebase spaceisspaceorspacetimeandstructure groupis a Lie group, then the sections ofPform aprincipal homogeneous spaceof the group of gauge transformations. Connections(gauge connection) define this principal bundle, yielding acovariant derivative∇ in eachassociated vector bundle. If a local frame is chosen (a local basis of sections), then this covariant derivative is represented by theconnection formA, a Lie algebra-valued1-form, which is called thegauge potentialinphysics. This is evidently not an intrinsic but a frame-dependent quantity. Thecurvature formF, a Lie algebra-valued2-formthat is an intrinsic quantity, is constructed from a connection form by where d stands for theexterior derivativeand∧{\displaystyle \wedge }stands for thewedge product. (A{\displaystyle \mathbf {A} }is an element of the vector space spanned by the generatorsTa{\displaystyle T^{a}}, and so the components ofA{\displaystyle \mathbf {A} }do not commute with one another. Hence the wedge productA∧A{\displaystyle \mathbf {A} \wedge \mathbf {A} }does not vanish.) Infinitesimal gauge transformations form a Lie algebra, which is characterized by a smooth Lie-algebra-valuedscalar, ε. Under such aninfinitesimalgauge transformation, where[⋅,⋅]{\displaystyle [\cdot ,\cdot ]}is the Lie bracket. One nice thing is that ifδεX=εX{\displaystyle \delta _{\varepsilon }X=\varepsilon X}, thenδεDX=εDX{\displaystyle \delta _{\varepsilon }DX=\varepsilon DX}where D is the covariant derivative Also,δεF=[ε,F]{\displaystyle \delta _{\varepsilon }\mathbf {F} =[\varepsilon ,\mathbf {F} ]}, which meansF{\displaystyle \mathbf {F} }transforms covariantly. Not all gauge transformations can be generated byinfinitesimalgauge transformations in general. An example is when thebase manifoldis acompactmanifoldwithoutboundarysuch that thehomotopyclass of mappings from thatmanifoldto the Lie group is nontrivial. Seeinstantonfor an example. TheYang–Mills actionis now given by where⋆{\displaystyle {\star }}is theHodge star operatorand the integral is defined as indifferential geometry. A quantity which isgauge-invariant(i.e.,invariantunder gauge transformations) is theWilson loop, which is defined over any closed path, γ, as follows: whereχis thecharacterof a complexrepresentationρ andP{\displaystyle {\mathcal {P}}}represents the path-ordered operator. 
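The differential-geometric displays referred to in this passage are the standard ones; the sketch below restates them in one common convention (signs, in particular in the infinitesimal transformation, and overall normalisations differ between references; the sign here is chosen so that F transforms as stated in the text).

```latex
% Curvature (field strength) of the connection form A:
\mathbf{F} = \mathrm{d}\mathbf{A} + \mathbf{A}\wedge\mathbf{A} .

% Infinitesimal gauge transformation with Lie-algebra-valued parameter \varepsilon,
% and the covariant derivative of a quantity X with \delta_\varepsilon X = \varepsilon X:
\delta_\varepsilon \mathbf{A} = [\varepsilon,\mathbf{A}\,] - \mathrm{d}\varepsilon ,
\qquad
\delta_\varepsilon \mathbf{F} = [\varepsilon,\mathbf{F}\,],
\qquad
D X = \mathrm{d}X + \mathbf{A}\,X .

% Yang-Mills action and Wilson loop (up to normalisation):
S_{\mathrm{YM}} = \int \operatorname{tr}\bigl(\mathbf{F}\wedge{\star}\mathbf{F}\bigr),
\qquad
W[\gamma] = \chi\!\Bigl[\rho\Bigl(\mathcal{P}\exp\oint_{\gamma}\mathbf{A}\Bigr)\Bigr].
```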
The formalism of gauge theory carries over to a general setting. For example, it is sufficient to ask that avector bundlehave ametric connection; when one does so, one finds that the metric connection satisfies the Yang–Mills equations of motion. Gauge theories may be quantized by specialization of methods which are applicable to anyquantum field theory. However, because of the subtleties imposed by the gauge constraints (see section on Mathematical formalism, above) there are many technical problems to be solved which do not arise in other field theories. At the same time, the richer structure of gauge theories allows simplification of some computations: for exampleWard identitiesconnect differentrenormalizationconstants. The first gauge theory quantized wasquantum electrodynamics(QED). The first methods developed for this involved gauge fixing and then applyingcanonical quantization. TheGupta–Bleulermethod was also developed to handle this problem. Non-abelian gauge theories are now handled by a variety of means. Methods for quantization are covered in the article onquantization. The main point to quantization is to be able to computequantum amplitudesfor various processes allowed by the theory. Technically, they reduce to the computations of certaincorrelation functionsin thevacuum state. This involves arenormalizationof the theory. When therunning couplingof the theory is small enough, then all required quantities may be computed inperturbation theory. Quantization schemes intended to simplify such computations (such ascanonical quantization) may be calledperturbative quantization schemes. At present some of these methods lead to the most precise experimental tests of gauge theories. However, in most gauge theories, there are many interesting questions which are non-perturbative. Quantization schemes suited to these problems (such aslattice gauge theory) may be callednon-perturbative quantization schemes. Precise computations in such schemes often requiresupercomputing, and are therefore less well-developed currently than other schemes. Some of the symmetries of the classical theory are then seen not to hold in the quantum theory; a phenomenon called ananomaly. Among the most well known are: A pure gauge is the set of field configurations obtained by agauge transformationon the null-field configuration, i.e., a gauge transform of zero. So it is a particular "gauge orbit" in the field configuration's space. Thus, in the abelian case, whereAμ(x)→Aμ′(x)=Aμ(x)+∂μf(x){\displaystyle A_{\mu }(x)\rightarrow A'_{\mu }(x)=A_{\mu }(x)+\partial _{\mu }f(x)}, the pure gauge is just the set of field configurationsAμ′(x)=∂μf(x){\displaystyle A'_{\mu }(x)=\partial _{\mu }f(x)}for allf(x).
https://en.wikipedia.org/wiki/Gauge_theory
OpenAthens is an identity and access management service supplied by Jisc, a British not-for-profit information technology services company. Identity provider (IdP) organisations can keep usernames in the cloud, locally, or both, and integration with ADFS, LDAP or SAML is supported.[1] The OpenAthens for Publishers[2] software for service providers supports multiple platforms and federations. Technically, the service provides deep packet inspection proxying (in a similar manner to EZproxy) and SAML-based federation,[3] as well as various on-boarding services for institutions, consortia and vendors. With its origins in a University of Bath initiative to reduce IT procurement costs for itself and other universities, the Athens project was conceived in 1996. Spun off from Bath University through the vehicle of charitable status, Eduserv was established as a not-for-profit organisation in 1999.[4] The service was originally named Athena, after the Greek goddess of knowledge and learning; although it is rumoured that the name change was partly prompted by a common typo, it was actually due to the name Athena already being trademarked (EU000204735).[5] It launched as 'Athens' in 1997 (UK00002153200).[6] After JISC decided to support Shibboleth rather than Athens in 2008, Eduserv launched a federated version of Athens as 'OpenAthens'[7] (EU013713821).[8]
https://en.wikipedia.org/wiki/OpenAthens
In mathematics, thelogarithmic normis a real-valuedfunctionalonoperators, and is derived from either aninner product, a vector norm, or its inducedoperator norm. The logarithmic norm was independently introduced byGermund Dahlquist[1]and Sergei Lozinskiĭ in 1958, for squarematrices. It has since been extended to nonlinear operators andunbounded operatorsas well.[2]The logarithmic norm has a wide range of applications, in particular in matrix theory,differential equationsandnumerical analysis. In the finite-dimensional setting, it is also referred to as the matrix measure or the Lozinskiĭ measure. LetA{\displaystyle A}be a square matrix and‖⋅‖{\displaystyle \|\cdot \|}be an induced matrix norm. The associated logarithmic normμ{\displaystyle \mu }ofA{\displaystyle A}is defined by HereI{\displaystyle I}is theidentity matrixof the same dimension asA{\displaystyle A}, andh{\displaystyle h}is a real, positive number. The limit ash→0−{\displaystyle h\rightarrow 0^{-}}equals−μ(−A){\displaystyle -\mu (-A)}, and is in general different from the logarithmic normμ(A){\displaystyle \mu (A)}, as−μ(−A)≤μ(A){\displaystyle -\mu (-A)\leq \mu (A)}for all matrices. The matrix norm‖A‖{\displaystyle \|A\|}is always positive ifA≠0{\displaystyle A\neq 0}, but the logarithmic normμ(A){\displaystyle \mu (A)}may also take negative values, e.g. whenA{\displaystyle A}isnegative definite. Therefore, the logarithmic norm does not satisfy the axioms of a norm. The namelogarithmic norm,which does not appear in the original reference, seems to originate from estimating the logarithm of the norm of solutions to the differential equation The maximal growth rate oflog⁡‖x‖{\displaystyle \log \|x\|}isμ(A){\displaystyle \mu (A)}. This is expressed by the differential inequality whered/dt+{\displaystyle \mathrm {d} /\mathrm {d} t^{+}}is theupper right Dini derivative. Usinglogarithmic differentiationthe differential inequality can also be written showing its direct relation toGrönwall's lemma. In fact, it can be shown that the norm of the state transition matrixΦ(t,t0){\displaystyle \Phi (t,t_{0})}associated to the differential equationx˙=A(t)x{\displaystyle {\dot {x}}=A(t)x}is bounded by[3][4] for allt≥t0{\displaystyle t\geq t_{0}}. If the vector norm is an inner product norm, as in aHilbert space, then the logarithmic norm is the smallest numberμ(A){\displaystyle \mu (A)}such that for allx{\displaystyle x} Unlike the original definition, the latter expression also allowsA{\displaystyle A}to be unbounded. Thusdifferential operatorstoo can have logarithmic norms, allowing the use of the logarithmic norm both in algebra and in analysis. The modern, extended theory therefore prefers a definition based on inner products orduality. Both the operator norm and the logarithmic norm are then associated with extremal values ofquadratic formsas follows: Basic properties of the logarithmic norm of a matrix include: The logarithmic norm of a matrix can be calculated as follows for the three most common norms. In these formulas,aij{\displaystyle a_{ij}}represents the element on thei{\displaystyle i}th row andj{\displaystyle j}th column of a matrixA{\displaystyle A}.[5] The logarithmic norm is related to the extreme values of the Rayleigh quotient. It holds that and both extreme values are taken for some vectorsx≠0{\displaystyle x\neq 0}. This also means that every eigenvalueλk{\displaystyle \lambda _{k}}ofA{\displaystyle A}satisfies More generally, the logarithmic norm is related to thenumerical rangeof a matrix. 
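The defining limit, the growth bound, and the closed-form expressions referred to in this passage are standard results; restated (these are the textbook formulas, not the article's lost displays verbatim):

```latex
% Definition, growth bound for solutions of x' = A(t)x, and the three classical formulas:
\mu(A) = \lim_{h \to 0^{+}} \frac{\lVert I + hA \rVert - 1}{h}, \qquad
\frac{\mathrm{d}}{\mathrm{d}t^{+}}\,\lVert x(t)\rVert \le \mu(A)\,\lVert x(t)\rVert, \qquad
\lVert \Phi(t,t_0) \rVert \le \exp\!\Bigl(\int_{t_0}^{t} \mu\bigl(A(s)\bigr)\,\mathrm{d}s\Bigr).

\mu_{1}(A) = \max_{j}\Bigl(\operatorname{Re}a_{jj} + \sum_{i \neq j}\lvert a_{ij}\rvert\Bigr), \quad
\mu_{2}(A) = \lambda_{\max}\!\Bigl(\tfrac{A + A^{*}}{2}\Bigr), \quad
\mu_{\infty}(A) = \max_{i}\Bigl(\operatorname{Re}a_{ii} + \sum_{j \neq i}\lvert a_{ij}\rvert\Bigr).
```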
A matrix with−μ(−A)>0{\displaystyle -\mu (-A)>0}is positive definite, and one withμ(A)<0{\displaystyle \mu (A)<0}is negative definite. Such matrices haveinverses. The inverse of a negative definite matrix is bounded by Both the bounds on the inverse and on the eigenvalues hold irrespective of the choice of vector (matrix) norm. Some results only hold for inner product norms, however. For example, ifR{\displaystyle R}is a rational function with the property then, for inner product norms, Thus the matrix norm and logarithmic norms may be viewed as generalizing the modulus and real part, respectively, from complex numbers to matrices. The logarithmic norm plays an important role in the stability analysis of a continuous dynamical systemx˙=Ax{\displaystyle {\dot {x}}=Ax}. Its role is analogous to that of the matrix norm for a discrete dynamical systemxn+1=Axn{\displaystyle x_{n+1}=Ax_{n}}. In the simplest case, whenA{\displaystyle A}is a scalar complex constantλ{\displaystyle \lambda }, the discrete dynamical system has stable solutions when|λ|≤1{\displaystyle |\lambda |\leq 1}, while the differential equation has stable solutions whenℜλ≤0{\displaystyle \Re \,\lambda \leq 0}. WhenA{\displaystyle A}is a matrix, the discrete system has stable solutions if‖A‖≤1{\displaystyle \|A\|\leq 1}. In the continuous system, the solutions are of the formetAx(0){\displaystyle \mathrm {e} ^{tA}x(0)}. They are stable if‖etA‖≤1{\displaystyle \|\mathrm {e} ^{tA}\|\leq 1}for allt≥0{\displaystyle t\geq 0}, which follows from property 7 above, ifμ(A)≤0{\displaystyle \mu (A)\leq 0}. In the latter case,‖x‖{\displaystyle \|x\|}is aLyapunov functionfor the system. Runge–Kutta methodsfor the numerical solution ofx˙=Ax{\displaystyle {\dot {x}}=Ax}replace the differential equation by a discrete equationxn+1=R(hA)⋅xn{\displaystyle x_{n+1}=R(hA)\cdot x_{n}}, where the rational functionR{\displaystyle R}is characteristic of the method, andh{\displaystyle h}is the time step size. If|R(z)|≤1{\displaystyle |R(z)|\leq 1}wheneverℜ(z)≤0{\displaystyle \Re \,(z)\leq 0}, then a stable differential equation, havingμ(A)≤0{\displaystyle \mu (A)\leq 0}, will always result in a stable (contractive) numerical method, as‖R(hA)‖≤1{\displaystyle \|R(hA)\|\leq 1}. Runge-Kutta methods having this property are called A-stable. Retaining the same form, the results can, under additional assumptions, be extended to nonlinear systems as well as tosemigrouptheory, where the crucial advantage of the logarithmic norm is that it discriminates between forward and reverse time evolution and can establish whether the problem iswell posed. Similar results also apply in the stability analysis incontrol theory, where there is a need to discriminate between positive and negative feedback. In connection with differential operators it is common to use inner products andintegration by parts. In the simplest case we consider functions satisfyingu(0)=u(1)=0{\displaystyle u(0)=u(1)=0}with inner product Then it holds that where the equality on the left represents integration by parts, and the inequality to the right is a Sobolev inequality[citation needed]. In the latter, equality is attained for the functionsinπx{\displaystyle \sin \,\pi x}, implying that the constant−π2{\displaystyle -\pi ^{2}}is the best possible. 
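The display referred to above combines integration by parts with the Poincaré (Sobolev) inequality; for functions satisfying u(0) = u(1) = 0 and the L² inner product it reads:

```latex
\langle u, u'' \rangle \;=\; -\,\langle u', u' \rangle \;\le\; -\pi^{2}\,\langle u, u \rangle ,
```

with equality on the right attained by sin πx, which is why −π² is the best possible constant; consequently μ(d²/dx²) ≤ −π² with respect to this inner product norm.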
Thus for the differential operatorA=d2/dx2{\displaystyle A=\mathrm {d} ^{2}/\mathrm {d} x^{2}}, which implies that As an operator satisfying⟨u,Au⟩>0{\displaystyle \langle u,Au\rangle >0}is calledelliptic, the logarithmic norm quantifies the (strong) ellipticity of−d2/dx2{\displaystyle -\mathrm {d} ^{2}/\mathrm {d} x^{2}}. Thus, ifA{\displaystyle A}is strongly elliptic, thenμ(−A)<0{\displaystyle \mu (-A)<0}, and is invertible given proper data. If a finite difference method is used to solve−u″=f{\displaystyle -u''=f}, the problem is replaced by an algebraic equationTu=f{\displaystyle Tu=f}. The matrixT{\displaystyle T}will typically inherit the ellipticity, i.e.,−μ(−T)>0{\displaystyle -\mu (-T)>0}, showing thatT{\displaystyle T}is positive definite and therefore invertible. These results carry over to thePoisson equationas well as to other numerical methods such as theFinite element method. For nonlinear operators the operator norm and logarithmic norm are defined in terms of the inequalities whereL(f){\displaystyle L(f)}is the least upper boundLipschitz constantoff{\displaystyle f}, andl(f){\displaystyle l(f)}is the greatest lower bound Lipschitz constant; and whereu{\displaystyle u}andv{\displaystyle v}are in the domainD{\displaystyle D}off{\displaystyle f}. HereM(f){\displaystyle M(f)}is the least upper bound logarithmic Lipschitz constant off{\displaystyle f}, andl(f){\displaystyle l(f)}is the greatest lower bound logarithmic Lipschitz constant. It holds thatm(f)=−M(−f){\displaystyle m(f)=-M(-f)}(compare above) and, analogously,l(f)=L(f−1)−1{\displaystyle l(f)=L(f^{-1})^{-1}}, whereL(f−1){\displaystyle L(f^{-1})}is defined on the image off{\displaystyle f}. For nonlinear operators that are Lipschitz continuous, it further holds that Iff{\displaystyle f}is differentiable and its domainD{\displaystyle D}is convex, then Heref′(x){\displaystyle f'(x)}is theJacobian matrixoff{\displaystyle f}, linking the nonlinear extension to the matrix norm and logarithmic norm. An operator having eitherm(f)>0{\displaystyle m(f)>0}orM(f)<0{\displaystyle M(f)<0}is called uniformly monotone. An operator satisfyingL(f)<1{\displaystyle L(f)<1}is calledcontractive. This extension offers many connections to fixed point theory, and critical point theory. The theory becomes analogous to that of the logarithmic norm for matrices, but is more complicated as the domains of the operators need to be given close attention, as in the case with unbounded operators. Property 8 of the logarithmic norm above carries over, independently of the choice of vector norm, and it holds that which quantifies theUniform Monotonicity Theoremdue to Browder & Minty (1963).
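For the finite-dimensional case discussed earlier, the contractivity bound is easy to check numerically. The following is a minimal sketch assuming numpy and scipy; the test matrix and tolerance are arbitrary illustrations.

```python
# Numerical check of ||exp(tA)|| <= exp(t * mu_2(A)) for the Euclidean norm,
# where mu_2(A) = lambda_max((A + A*)/2) (the Lozinskii measure for the 2-norm).
import numpy as np
from scipy.linalg import expm

def log_norm_2(A: np.ndarray) -> float:
    """Logarithmic norm induced by the Euclidean vector norm."""
    return float(np.max(np.linalg.eigvalsh((A + A.conj().T) / 2)))

A = np.array([[-1.0, 3.0],
              [ 0.0, -2.0]])      # stable eigenvalues (-1, -2) but non-normal

mu = log_norm_2(A)                # slightly positive here: transient growth is possible
for t in (0.1, 0.5, 1.0, 2.0):
    lhs = np.linalg.norm(expm(t * A), 2)   # operator 2-norm of the propagator
    rhs = np.exp(t * mu)
    assert lhs <= rhs + 1e-12
    print(f"t={t:3.1f}  ||exp(tA)|| = {lhs:.6f}  <=  exp(t*mu) = {rhs:.6f}")
```

When μ₂(A) ≤ 0 the same bound gives ‖e^{tA}‖ ≤ 1 for all t ≥ 0, which is the Lyapunov-function property quoted above.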
https://en.wikipedia.org/wiki/Logarithmic_norm
Flash fictionis a brief fictional narrative[1]that still offers character and plot development. Identified varieties, many of them defined byword count, include thesix-word story;[2]the 280-character story (also known as "twitterature");[3]the "dribble" (also known as the "minisaga", 50 words);[2]the "drabble" (also known as "microfiction", 100 words);[2]"sudden fiction" (up to 750 words);[4]"flash fiction" (up to 1,000 words); and "microstory".[5] Some commentators have suggested that flash fiction possesses a unique literary quality in its ability to hint at or imply a larger story.[6] Flash fiction has roots going back to prehistory, recorded at origin of writing, includingfablesandparables, notablyAesop's Fablesin the west, andPanchatantraandJataka talesin India. Later examples include the tales ofNasreddin, andZenkoanssuch asThe Gateless Gate. In the United States, early forms of flash fiction can be found in the 19th century, notably in the figures ofWalt Whitman,Ambrose Bierce, andKate Chopin.[7] In the 1920s, flash fiction was referred to as the "short short story" and was associated withCosmopolitanmagazine, and in the 1930s, collected in anthologies such asThe American Short Short Story.[8] Somerset Maughamwas a notable proponent, with hisCosmopolitans: Very Short Stories(1936) being an early collection. In Japan, flash fiction was popularized in the post-war period particularly by Michio Tsuzuki(都筑道夫). In 1986, Jerome Stern at the Florida State University organized the World's Best Short-Short Story Contest for stories of fewer than 250 words.Michael Martone, the first winner, received $100 and a crate of Florida oranges as the prize.[9]TheSoutheast Reviewcontinues the contest but has increased the maximum to 500 words.[10]In 1996, Stern publishedMicro Fiction: an anthology of really short storiesdrawn, in part, from the contest.[11] It was not until 1992, however, that the term "flash fiction" came into use as a category/genre of fiction.[12][13]It was coined by James Thomas,[14]who together with Denise Thomas and Tom Hazuka edited the 1992 landmark anthology titledFlash Fiction: 72 Very Short Stories,[15]and was introduced by Thomas in his Introduction to that volume.[16][17]Since then the term has gained wide acceptance as a form, especially in the W. W. Norton Anthologies co-edited by Thomas:Flash Fiction America,Flash Fiction International,Flash Fiction Forward, andFlash Fiction: 72 Very Short Stories. In 2020, theHarry Ransom Centerat theUniversity of Texas at Austinestablished the first curated collection of flash fiction artifacts in the United States.[18] Practitioners have includedSaadiof Shiraz ("Gulistan of Sa'di"),Bolesław Prus,[5][19]Anton Chekhov,O. Henry,Franz Kafka,H. P. Lovecraft,Yasunari Kawabata,Ernest Hemingway,Julio Cortázar,Daniil Kharms,[20]Arthur C. Clarke,Richard Brautigan,Ray Bradbury,Kurt Vonnegut Jr.,Fredric Brown,John Cage,Philip K. Dick, andRobert Sheckley.[21] Hemingway also wrote 18 pieces of flash fiction that were included in his first short-story collection,In Our Time(1925). 
While it is often alleged that (to win a bet) he also wrote the flash fiction "For Sale, Baby Shoes, Never Worn", various iterations of the story date back to 1906, when Hemingway was only 7 years old, rendering his authorship implausible.[22][23] Also notable are the 62 "short-shorts" which compriseSeverance,the thematic collection byRobert Olen Butlerin which each story describes the remaining 90 seconds of conscious awareness within human heads which have been decapitated.[24] Contemporary English-speaking writers well known for their published flash fiction includeLydia Davis,David Gaffney,Robert Scotellaro, andNancy Stohlman,Sherrie Flick,Bruce Holland Rogers,Steve Almond,Barbara Henning,Grant Faulkner. Spanish-speaking literature has many authors of microstories, includingAugusto Monterroso("El Dinosaurio") andLuis Felipe Lomelí("El Emigrante"). Their microstories are some of the shortest ever written in that language. In Spain, authors ofmicrorrelatos(very short fictions) have includedAndrés Neuman,Ramón Gómez de la Serna,José Jiménez Lozano,Javier Tomeo,José María Merino,Juan José Millás, andÓscar Esquivias.[25]In his collectionLa mitad del diablo(Páginas de Espuma, 2006),Juan Pedro Aparicioincluded the one-word storyLuis XIV, which in its entirety reads: "Yo" ("I"). In Argentina, notable contemporary contributors to the genre have includedMarco Denevi,Luisa Valenzuela, andAna María Shua. The Italian writerItalo Calvinoconsciously searched for a short narrative form, drawing inspiration from Argentine writersJorge Luis BorgesandAdolfo Bioy Casaresand finding that Monterroso's was "the most perfect he could find"; "El dinosaurio", in turn, possibly inspired his "The Dinosaurs".[26] German-language authors ofKürzestgeschichten,influenced by brief narratives penned byBertolt BrechtandFranz Kafka, have includedPeter Bichsel,Heimito von Doderer,Günter Kunert, andHelmut Heißenbüttel. TheArabic-speaking world has produced a number of microstory authors, including theNobel Prize-winning Egyptian authorNaguib Mahfouz, whose bookEchoes of an Autobiographyis composed mainly of such stories. Other flash fiction writers in Arabic includeZakaria Tamer,Haidar Haidar, andLaila al-Othman. In the Russian-speaking world, the best known flash fiction author isLinor Goralik.[citation needed] In the southwesternIndian state, ofKeralaP. K. Parakkadavuis known for his many microstories in theMalayalam language.[27] Hungarian writerIstván Örkényis known (beside other works) for hisOne-Minute Stories.[28] A number of print journals dedicate themselves to flash fiction. These includeFlash: The International Short-Short Story Magazine.[29] Access to the Internet has enhanced an awareness of flash fiction, with online journals being devoted entirely to the style.[30] In a CNN article on the subject, the author remarked that the "democratization of communication offered by the Internet has made positive in-roads" in the specific area of flash fiction, and directly influenced the style's popularity.[31]The form is popular, with most online literary journals now publishing flash fiction. In summer 2017,The New Yorkerbegan running a series of flash fiction stories online every summer.[32]
https://en.wikipedia.org/wiki/Flash_fiction
Inquantum computing,Grover's algorithm, also known as thequantum search algorithm, is aquantum algorithmfor unstructured search that findswith high probabilitythe unique input to ablack boxfunction that produces a particular output value, using justO(N){\displaystyle O({\sqrt {N}})}evaluations of the function, whereN{\displaystyle N}is the size of the function'sdomain. It was devised byLov Groverin 1996.[1] The analogous problem in classical computation would have aquery complexityO(N){\displaystyle O(N)}(i.e., the function would have to be evaluatedO(N){\displaystyle O(N)}times: there is no better approach than trying out all input values one after the other, which, on average, takesN/2{\displaystyle N/2}steps).[1] Charles H. Bennett, Ethan Bernstein,Gilles Brassard, andUmesh Vaziraniproved that any quantum solution to the problem needs to evaluate the functionΩ(N){\displaystyle \Omega ({\sqrt {N}})}times, so Grover's algorithm isasymptotically optimal.[2]Since classical algorithms forNP-complete problemsrequire exponentially many steps, and Grover's algorithm provides at most a quadratic speedup over the classical solution for unstructured search, this suggests that Grover's algorithm by itself will not providepolynomial-timesolutions for NP-complete problems (as the square root of an exponential function is still an exponential, not a polynomial function).[3] Unlike other quantum algorithms, which may provide exponential speedup over their classical counterparts, Grover's algorithm provides only a quadratic speedup. However, even quadratic speedup is considerable whenN{\displaystyle N}is large, and Grover's algorithm can be applied to speed up broad classes of algorithms.[3]Grover's algorithm couldbrute-forcea 128-bit symmetric cryptographic key in roughly 264iterations, or a 256-bit key in roughly 2128iterations. It may not be the case that Grover's algorithm poses a significantly increased risk to encryption over existing classical algorithms, however.[4] Grover's algorithm, along with variants likeamplitude amplification, can be used to speed up a broad range of algorithms.[5][6][7]In particular, algorithms for NP-complete problems which contain exhaustive search as a subroutine can be sped up by Grover's algorithm.[6]The current theoretical best algorithm, in terms of worst-case complexity, for3SATis one such example. Genericconstraint satisfaction problemsalso see quadratic speedups with Grover.[8]These algorithms do not require that the input be given in the form of an oracle, since Grover's algorithm is being applied with an explicit function, e.g. the function checking that a set of bits satisfies a 3SAT instance. However, it is unclear whether Grover's algorithm could speed up best practical algorithms for these problems. Grover's algorithm can also give provable speedups for black-box problems inquantum query complexity, including element distinctness[9]and thecollision problem[10](solved with theBrassard–Høyer–Tapp algorithm). In these types of problems, one treats the oracle functionfas a database, and the goal is to use the quantum query to this function as few times as possible. Grover's algorithm essentially solves the task offunction inversion. Roughly speaking, if we have a functiony=f(x){\displaystyle y=f(x)}that can be evaluated on a quantum computer, Grover's algorithm allows us to calculatex{\displaystyle x}when giveny{\displaystyle y}. 
Consequently, Grover's algorithm gives broad asymptotic speed-ups to many kinds ofbrute-force attacksonsymmetric-key cryptography, includingcollision attacksandpre-image attacks.[11]However, this may not necessarily be the most efficient algorithm since, for example, thePollard's rho algorithmis able to find a collision inSHA-2more efficiently than Grover's algorithm.[12] Grover's original paper described the algorithm as a database search algorithm, and this description is still common. The database in this analogy is a table of all of the function's outputs, indexed by the corresponding input. However, this database is not represented explicitly. Instead, an oracle is invoked to evaluate an item by its index. Reading a full database item by item and converting it into such a representation may take a lot longer than Grover's search. To account for such effects, Grover's algorithm can be viewed as solving an equation orsatisfying a constraint. In such applications, the oracle is a way to check the constraint and is not related to the search algorithm. This separation usually prevents algorithmic optimizations, whereas conventional search algorithms often rely on such optimizations and avoid exhaustive search.[13]Fortunately, fast Grover's oracle implementation is possible for many constraint satisfaction and optimization problems.[14] The major barrier to instantiating a speedup from Grover's algorithm is that the quadratic speedup achieved is too modest to overcome the large overhead of near-term quantum computers.[15]However, later generations offault-tolerantquantum computers with better hardware performance may be able to realize these speedups for practical instances of data. As input for Grover's algorithm, suppose we have a functionf:{0,1,…,N−1}→{0,1}{\displaystyle f\colon \{0,1,\ldots ,N-1\}\to \{0,1\}}. In the "unstructured database" analogy, the domain represent indices to a database, andf(x) = 1if and only if the data thatxpoints to satisfies the search criterion. We additionally assume that only one index satisfiesf(x) = 1, and we call this indexω. Our goal is to identifyω. We can accessfwith asubroutine(sometimes called anoracle) in the form of aunitary operatorUωthat acts as follows: {Uω|x⟩=−|x⟩forx=ω, that is,f(x)=1,Uω|x⟩=|x⟩forx≠ω, that is,f(x)=0.{\displaystyle {\begin{cases}U_{\omega }|x\rangle =-|x\rangle &{\text{for }}x=\omega {\text{, that is, }}f(x)=1,\\U_{\omega }|x\rangle =|x\rangle &{\text{for }}x\neq \omega {\text{, that is, }}f(x)=0.\end{cases}}} This uses theN{\displaystyle N}-dimensionalstate spaceH{\displaystyle {\mathcal {H}}}, which is supplied by aregisterwithn=⌈log2⁡N⌉{\displaystyle n=\lceil \log _{2}N\rceil }qubits. This is often written as Uω|x⟩=(−1)f(x)|x⟩.{\displaystyle U_{\omega }|x\rangle =(-1)^{f(x)}|x\rangle .} Grover's algorithm outputsωwith probability at least1/2usingO(N){\displaystyle O({\sqrt {N}})}applications ofUω. This probability can be made arbitrarily large by running Grover's algorithm multiple times. If one runs Grover's algorithm untilωis found, theexpectednumber of applications is stillO(N){\displaystyle O({\sqrt {N}})}, since it will only be run twice on average. This section compares the above oracleUω{\displaystyle U_{\omega }}with an oracleUf{\displaystyle U_{f}}. Uωis different from the standardquantum oraclefor a functionf. This standard oracle, denoted here asUf, uses anancillary qubitsystem. 
The operation then represents an inversion (NOT gate) on the main system conditioned by the value off(x) from the ancillary system: {Uf|x⟩|y⟩=|x⟩|¬y⟩forx=ω, that is,f(x)=1,Uf|x⟩|y⟩=|x⟩|y⟩forx≠ω, that is,f(x)=0,{\displaystyle {\begin{cases}U_{f}|x\rangle |y\rangle =|x\rangle |\neg y\rangle &{\text{for }}x=\omega {\text{, that is, }}f(x)=1,\\U_{f}|x\rangle |y\rangle =|x\rangle |y\rangle &{\text{for }}x\neq \omega {\text{, that is, }}f(x)=0,\end{cases}}} or briefly, Uf|x⟩|y⟩=|x⟩|y⊕f(x)⟩.{\displaystyle U_{f}|x\rangle |y\rangle =|x\rangle |y\oplus f(x)\rangle .} These oracles are typically realized usinguncomputation. If we are givenUfas our oracle, then we can also implementUω, sinceUωisUfwhen the ancillary qubit is in the state|−⟩=12(|0⟩−|1⟩)=H|1⟩{\displaystyle |-\rangle ={\frac {1}{\sqrt {2}}}{\big (}|0\rangle -|1\rangle {\big )}=H|1\rangle }: Uf(|x⟩⊗|−⟩)=12(Uf|x⟩|0⟩−Uf|x⟩|1⟩)=12(|x⟩|0⊕f(x)⟩−|x⟩|1⊕f(x)⟩)={12(−|x⟩|0⟩+|x⟩|1⟩)iff(x)=1,12(|x⟩|0⟩−|x⟩|1⟩)iff(x)=0=(Uω|x⟩)⊗|−⟩{\displaystyle {\begin{aligned}U_{f}{\big (}|x\rangle \otimes |-\rangle {\big )}&={\frac {1}{\sqrt {2}}}\left(U_{f}|x\rangle |0\rangle -U_{f}|x\rangle |1\rangle \right)\\&={\frac {1}{\sqrt {2}}}\left(|x\rangle |0\oplus f(x)\rangle -|x\rangle |1\oplus f(x)\rangle \right)\\&={\begin{cases}{\frac {1}{\sqrt {2}}}\left(-|x\rangle |0\rangle +|x\rangle |1\rangle \right)&{\text{if }}f(x)=1,\\{\frac {1}{\sqrt {2}}}\left(|x\rangle |0\rangle -|x\rangle |1\rangle \right)&{\text{if }}f(x)=0\end{cases}}\\&=(U_{\omega }|x\rangle )\otimes |-\rangle \end{aligned}}} So, Grover's algorithm can be run regardless of which oracle is given.[3]IfUfis given, then we must maintain an additional qubit in the state|−⟩{\displaystyle |-\rangle }and applyUfin place ofUω. The steps of Grover's algorithm are given as follows: For the correctly chosen value ofr{\displaystyle r}, the output will be|ω⟩{\displaystyle |\omega \rangle }with probability approaching 1 forN≫ 1. Analysis shows that this eventual value forr(N){\displaystyle r(N)}satisfiesr(N)≤⌈π4N⌉{\displaystyle r(N)\leq {\Big \lceil }{\frac {\pi }{4}}{\sqrt {N}}{\Big \rceil }}. Implementing the steps for this algorithm can be done using a number of gates linear in the number of qubits.[3]Thus, the gate complexity of this algorithm isO(log⁡(N)r(N)){\displaystyle O(\log(N)r(N))}, orO(log⁡(N)){\displaystyle O(\log(N))}per iteration. There is a geometric interpretation of Grover's algorithm, following from the observation that the quantum state of Grover's algorithm stays in a two-dimensional subspace after each step. Consider the plane spanned by|s⟩{\displaystyle |s\rangle }and|ω⟩{\displaystyle |\omega \rangle }; equivalently, the plane spanned by|ω⟩{\displaystyle |\omega \rangle }and the perpendicularket|s′⟩=1N−1∑x≠ω|x⟩{\displaystyle \textstyle |s'\rangle ={\frac {1}{\sqrt {N-1}}}\sum _{x\neq \omega }|x\rangle }. Grover's algorithm begins with the initial ket|s⟩{\displaystyle |s\rangle }, which lies in the subspace. The operatorUω{\displaystyle U_{\omega }}is a reflection at the hyperplane orthogonal to|ω⟩{\displaystyle |\omega \rangle }for vectors in the plane spanned by|s′⟩{\displaystyle |s'\rangle }and|ω⟩{\displaystyle |\omega \rangle }, i.e. it acts as a reflection across|s′⟩{\displaystyle |s'\rangle }. This can be seen by writingUω{\displaystyle U_{\omega }}in the form of aHouseholder reflection: Uω=I−2|ω⟩⟨ω|.{\displaystyle U_{\omega }=I-2|\omega \rangle \langle \omega |.} The operatorUs=2|s⟩⟨s|−I{\displaystyle U_{s}=2|s\rangle \langle s|-I}is a reflection through|s⟩{\displaystyle |s\rangle }. 
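The two reflections just described are all that is needed to simulate the algorithm on a classical computer for small N. The sketch below uses a plain numpy statevector; the qubit count and the marked index are arbitrary choices for illustration, not part of the algorithm itself.

```python
# Statevector sketch of Grover's algorithm: prepare the uniform superposition,
# then repeat (oracle U_w, diffusion U_s) about pi/4 * sqrt(N) times.
import numpy as np

def grover(n_qubits: int, marked: int) -> np.ndarray:
    N = 2 ** n_qubits
    s = np.full(N, 1.0 / np.sqrt(N))           # |s> : uniform superposition
    psi = s.copy()
    r = int(np.floor(np.pi / 4 * np.sqrt(N)))  # near-optimal iteration count
    for _ in range(r):
        psi[marked] *= -1.0                    # U_w : flip the sign of |w>
        psi = 2.0 * s * np.dot(s, psi) - psi   # U_s = 2|s><s| - I
    return psi

state = grover(n_qubits=8, marked=42)          # N = 256
probabilities = np.abs(state) ** 2
print("P(measure marked item) =", probabilities[42])   # ~0.9999
print("most likely outcome    =", int(np.argmax(probabilities)))
```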
Both operatorsUs{\displaystyle U_{s}}andUω{\displaystyle U_{\omega }}take states in the plane spanned by|s′⟩{\displaystyle |s'\rangle }and|ω⟩{\displaystyle |\omega \rangle }to states in the plane. Therefore, Grover's algorithm stays in this plane for the entire algorithm. It is straightforward to check that the operatorUsUω{\displaystyle U_{s}U_{\omega }}of each Grover iteration step rotates the state vector by an angle ofθ=2arcsin⁡1N{\displaystyle \theta =2\arcsin {\tfrac {1}{\sqrt {N}}}}. So, with enough iterations, one can rotate from the initial state|s⟩{\displaystyle |s\rangle }to the desired output state|ω⟩{\displaystyle |\omega \rangle }. The initial ket is close to the state orthogonal to|ω⟩{\displaystyle |\omega \rangle }: ⟨s′|s⟩=N−1N.{\displaystyle \langle s'|s\rangle ={\sqrt {\frac {N-1}{N}}}.} In geometric terms, the angleθ/2{\displaystyle \theta /2}between|s⟩{\displaystyle |s\rangle }and|s′⟩{\displaystyle |s'\rangle }is given by sin⁡θ2=1N.{\displaystyle \sin {\frac {\theta }{2}}={\frac {1}{\sqrt {N}}}.} We need to stop when the state vector passes close to|ω⟩{\displaystyle |\omega \rangle }; after this, subsequent iterations rotate the state vectorawayfrom|ω⟩{\displaystyle |\omega \rangle }, reducing the probability of obtaining the correct answer. The exact probability of measuring the correct answer is sin2⁡((r+12)θ),{\displaystyle \sin ^{2}\left({\Big (}r+{\frac {1}{2}}{\Big )}\theta \right),} whereris the (integer) number of Grover iterations. The earliest time that we get a near-optimal measurement is thereforer≈πN/4{\displaystyle r\approx \pi {\sqrt {N}}/4}. To complete the algebraic analysis, we need to find out what happens when we repeatedly applyUsUω{\displaystyle U_{s}U_{\omega }}. A natural way to do this is by eigenvalue analysis of a matrix. Notice that during the entire computation, the state of the algorithm is a linear combination ofs{\displaystyle s}andω{\displaystyle \omega }. We can write the action ofUs{\displaystyle U_{s}}andUω{\displaystyle U_{\omega }}in the space spanned by{|s⟩,|ω⟩}{\displaystyle \{|s\rangle ,|\omega \rangle \}}as: Us:a|ω⟩+b|s⟩↦[|ω⟩|s⟩][−102/N1][ab].Uω:a|ω⟩+b|s⟩↦[|ω⟩|s⟩][−1−2/N01][ab].{\displaystyle {\begin{aligned}U_{s}:a|\omega \rangle +b|s\rangle &\mapsto [|\omega \rangle \,|s\rangle ]{\begin{bmatrix}-1&0\\2/{\sqrt {N}}&1\end{bmatrix}}{\begin{bmatrix}a\\b\end{bmatrix}}.\\U_{\omega }:a|\omega \rangle +b|s\rangle &\mapsto [|\omega \rangle \,|s\rangle ]{\begin{bmatrix}-1&-2/{\sqrt {N}}\\0&1\end{bmatrix}}{\begin{bmatrix}a\\b\end{bmatrix}}.\end{aligned}}} So in the basis{|ω⟩,|s⟩}{\displaystyle \{|\omega \rangle ,|s\rangle \}}(which is neither orthogonal nor a basis of the whole space) the actionUsUω{\displaystyle U_{s}U_{\omega }}of applyingUω{\displaystyle U_{\omega }}followed byUs{\displaystyle U_{s}}is given by the matrix UsUω=[−102/N1][−1−2/N01]=[12/N−2/N1−4/N].{\displaystyle U_{s}U_{\omega }={\begin{bmatrix}-1&0\\2/{\sqrt {N}}&1\end{bmatrix}}{\begin{bmatrix}-1&-2/{\sqrt {N}}\\0&1\end{bmatrix}}={\begin{bmatrix}1&2/{\sqrt {N}}\\-2/{\sqrt {N}}&1-4/N\end{bmatrix}}.} This matrix happens to have a very convenientJordan form. 
If we definet=arcsin⁡(1/N){\displaystyle t=\arcsin(1/{\sqrt {N}})}, it is UsUω=M[e2it00e−2it]M−1{\displaystyle U_{s}U_{\omega }=M{\begin{bmatrix}e^{2it}&0\\0&e^{-2it}\end{bmatrix}}M^{-1}} whereM=[−iieite−it].{\displaystyle M={\begin{bmatrix}-i&i\\e^{it}&e^{-it}\end{bmatrix}}.} It follows thatr-th power of the matrix (corresponding toriterations) is (UsUω)r=M[e2rit00e−2rit]M−1.{\displaystyle (U_{s}U_{\omega })^{r}=M{\begin{bmatrix}e^{2rit}&0\\0&e^{-2rit}\end{bmatrix}}M^{-1}.} Using this form, we can use trigonometric identities to compute the probability of observingωafterriterations mentioned in the previous section, |[⟨ω|ω⟩⟨ω|s⟩](UsUω)r[01]|2=sin2⁡((2r+1)t).{\displaystyle \left|{\begin{bmatrix}\langle \omega |\omega \rangle &\langle \omega |s\rangle \end{bmatrix}}(U_{s}U_{\omega })^{r}{\begin{bmatrix}0\\1\end{bmatrix}}\right|^{2}=\sin ^{2}\left((2r+1)t\right).} Alternatively, one might reasonably imagine that a near-optimal time to distinguish would be when the angles 2rtand −2rtare as far apart as possible, which corresponds to2rt≈π/2{\displaystyle 2rt\approx \pi /2}, orr=π/4t=π/4arcsin⁡(1/N)≈πN/4{\displaystyle r=\pi /4t=\pi /4\arcsin(1/{\sqrt {N}})\approx \pi {\sqrt {N}}/4}. Then the system is in state [|ω⟩|s⟩](UsUω)r[01]≈[|ω⟩|s⟩]M[i00−i]M−1[01]=|ω⟩1cos⁡(t)−|s⟩sin⁡(t)cos⁡(t).{\displaystyle [|\omega \rangle \,|s\rangle ](U_{s}U_{\omega })^{r}{\begin{bmatrix}0\\1\end{bmatrix}}\approx [|\omega \rangle \,|s\rangle ]M{\begin{bmatrix}i&0\\0&-i\end{bmatrix}}M^{-1}{\begin{bmatrix}0\\1\end{bmatrix}}=|\omega \rangle {\frac {1}{\cos(t)}}-|s\rangle {\frac {\sin(t)}{\cos(t)}}.} A short calculation now shows that the observation yields the correct answerωwith errorO(1N){\displaystyle O\left({\frac {1}{N}}\right)}. If, instead of 1 matching entry, there arekmatching entries, the same algorithm works, but the number of iterations must beπ4(Nk)1/2{\textstyle {\frac {\pi }{4}}{\left({\frac {N}{k}}\right)^{1/2}}}instead ofπ4N1/2.{\textstyle {\frac {\pi }{4}}{N^{1/2}}.} There are several ways to handle the case ifkis unknown.[16]A simple solution performs optimally up to a constant factor: run Grover's algorithm repeatedly for increasingly small values ofk, e.g., takingk=N,N/2,N/4, ..., and so on, takingk=N/2t{\displaystyle k=N/2^{t}}for iterationtuntil a matching entry is found. With sufficiently high probability, a marked entry will be found by iterationt=log2⁡(N/k)+c{\displaystyle t=\log _{2}(N/k)+c}for some constantc. Thus, the total number of iterations taken is at most π4(1+2+4+⋯+Nk2c)=O(N/k).{\displaystyle {\frac {\pi }{4}}{\Big (}1+{\sqrt {2}}+{\sqrt {4}}+\cdots +{\sqrt {\frac {N}{k2^{c}}}}{\Big )}=O{\big (}{\sqrt {N/k}}{\big )}.} Another approach ifkis unknown is to derive it via thequantum counting algorithmprior. Ifk=N/2{\displaystyle k=N/2}(or the traditional one marked state Grover's Algorithm if run withN=2{\displaystyle N=2}), the algorithm will provide no amplification. Ifk>N/2{\displaystyle k>N/2}, increasingkwill begin to increase the number of iterations necessary to obtain a solution.[17]On the other hand, ifk≥N/2{\displaystyle k\geq N/2}, a classical running of the checking oracle on a single random choice of input will more likely than not give a correct solution. 
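The halving strategy for an unknown number of marked items can be sketched numerically. The routine below is an illustration under stated assumptions: it uses the closed-form success probability for k marked items rather than a full statevector simulation, and the concrete N, k and the single classical verification call are made up for the demo. It guesses k = N, N/2, N/4, ... and runs the corresponding number of Grover iterations each round until a measurement yields a marked item.

```python
import numpy as np

def grover_unknown_k(N, k, rng):
    """Total Grover iterations spent until a marked item is measured and verified."""
    total_iterations = 0
    k_guess = N
    theta = 2 * np.arcsin(np.sqrt(k / N))           # true rotation angle for k marked items
    while True:
        r = int(np.ceil(np.pi / 4 * np.sqrt(N / k_guess)))
        total_iterations += r
        p_success = np.sin((r + 0.5) * theta) ** 2  # chance the measurement hits a marked item
        if rng.random() < p_success:                # one classical call to f verifies the result
            return total_iterations
        k_guess = max(1, k_guess // 2)              # assume fewer marked items and try again

rng = np.random.default_rng(1)
N, k = 2 ** 16, 3
runs = [grover_unknown_k(N, k, rng) for _ in range(200)]
print("average total iterations:", np.mean(runs))
print("single-run optimum pi/4*sqrt(N/k):", round(np.pi / 4 * np.sqrt(N / k), 1))
# The average stays within a modest constant factor of the optimum, as stated above.
```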
A version of this algorithm is used in order to solve thecollision problem.[18][19] A modification of Grover's algorithm called quantum partial search was described by Grover and Radhakrishnan in 2004.[20]In partial search, one is not interested in finding the exact address of the target item, only the first few digits of the address. Equivalently, we can think of "chunking" the search space into blocks, and then asking "in which block is the target item?". In many applications, such a search yields enough information if the target address contains the information wanted. For instance, to use the example given by L. K. Grover, if one has a list of students organized by class rank, we may only be interested in whether a student is in the lower 25%, 25–50%, 50–75% or 75–100% percentile. To describe partial search, we consider a database separated intoK{\displaystyle K}blocks, each of sizeb=N/K{\displaystyle b=N/K}. The partial search problem is easier. Consider the approach we would take classically – we pick one block at random, and then perform a normal search through the rest of the blocks (in set theory language, the complement). If we do not find the target, then we know it is in the block we did not search. The average number of iterations drops fromN/2{\displaystyle N/2}to(N−b)/2{\displaystyle (N-b)/2}. Grover's algorithm requiresπ4N{\textstyle {\frac {\pi }{4}}{\sqrt {N}}}iterations. Partial search will be faster by a numerical factor that depends on the number of blocksK{\displaystyle K}. Partial search usesn1{\displaystyle n_{1}}global iterations andn2{\displaystyle n_{2}}local iterations. The global Grover operator is designatedG1{\displaystyle G_{1}}and the local Grover operator is designatedG2{\displaystyle G_{2}}. The global Grover operator acts on the blocks. Essentially, it is given as follows: The optimal values ofj1{\displaystyle j_{1}}andj2{\displaystyle j_{2}}are discussed in the paper by Grover and Radhakrishnan. One might also wonder what happens if one applies successive partial searches at different levels of "resolution". This idea was studied in detail byVladimir Korepinand Xu, who called it binary quantum search. They proved that it is not in fact any faster than performing a single partial search. Grover's algorithm is optimal up to sub-constant factors. That is, any algorithm that accesses the database only by using the operatorUωmust applyUωat least a1−o(1){\displaystyle 1-o(1)}fraction as many times as Grover's algorithm.[21]The extension of Grover's algorithm tokmatching entries,π(N/k)1/2/4, is also optimal.[18]This result is important in understanding the limits of quantum computation. If the Grover's search problem was solvable withlogcNapplications ofUω, that would imply thatNPis contained inBQP, by transforming problems in NP into Grover-type search problems. The optimality of Grover's algorithm suggests that quantum computers cannot solveNP-Completeproblems in polynomial time, and thus NP is not contained in BQP. It has been shown that a class of non-localhidden variablequantum computers could implement a search of anN{\displaystyle N}-item database in at mostO(N3){\displaystyle O({\sqrt[{3}]{N}})}steps. This is faster than theO(N){\displaystyle O({\sqrt {N}})}steps taken by Grover's algorithm.[22]
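To tie the pieces of this description together, here is a compact statevector simulation in numpy. It is an illustration only; the register size and the marked index are arbitrary. It prepares the uniform superposition, applies roughly (π/4)√N Grover iterations built from the two reflections Uω and Us, and prints the probability of measuring the marked item.

```python
import numpy as np

n_qubits = 8
N = 2 ** n_qubits
omega = 173                                   # marked index (arbitrary choice)

s = np.full(N, 1 / np.sqrt(N))                # uniform superposition |s>
e_omega = np.eye(N)[omega]
U_omega = np.eye(N) - 2 * np.outer(e_omega, e_omega)   # oracle reflection
U_s = 2 * np.outer(s, s) - np.eye(N)                   # diffusion operator

r = int(np.floor(np.pi / 4 * np.sqrt(N)))     # 12 iterations for N = 256
state = s.copy()
for _ in range(r):
    state = U_s @ (U_omega @ state)

print("iterations:", r)
print("P(measure omega) =", state[omega] ** 2)   # close to 1
```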
https://en.wikipedia.org/wiki/Grover%27s_algorithm
Covariance matrix adaptation evolution strategy (CMA-ES)is a particular kind of strategy fornumerical optimization.Evolution strategies(ES) arestochastic,derivative-free methodsfornumerical optimizationof non-linearor non-convexcontinuous optimizationproblems. They belong to the class ofevolutionary algorithmsandevolutionary computation. Anevolutionary algorithmis broadly based on the principle ofbiological evolution, namely the repeated interplay of variation (via recombination and mutation) and selection: in each generation (iteration) new individuals (candidate solutions, denoted asx{\displaystyle x}) are generated by variation of the current parental individuals, usually in a stochastic way. Then, some individuals are selected to become the parents in the next generation based on their fitness orobjective functionvaluef(x){\displaystyle f(x)}. Like this, individuals with better and betterf{\displaystyle f}-values are generated over the generation sequence. In anevolution strategy, new candidate solutions are usually sampled according to amultivariate normal distributioninRn{\displaystyle \mathbb {R} ^{n}}. Recombination amounts to selecting a new mean value for the distribution. Mutation amounts to adding a random vector, a perturbation with zero mean. Pairwise dependencies between the variables in the distribution are represented by acovariance matrix. The covariance matrix adaptation (CMA) is a method to update thecovariance matrixof this distribution. This is particularly useful if the functionf{\displaystyle f}isill-conditioned. Adaptation of thecovariance matrixamounts to learning a second order model of the underlyingobjective functionsimilar to the approximation of the inverseHessian matrixin thequasi-Newton methodin classicaloptimization. In contrast to most classical methods, fewer assumptions on the underlying objective function are made. Because only a ranking (or, equivalently, sorting) of candidate solutions is exploited, neither derivatives nor even an (explicit) objective function is required by the method. For example, the ranking could come about from pairwise competitions between the candidate solutions in aSwiss-system tournament. Two main principles for the adaptation of parameters of the search distribution are exploited in the CMA-ES algorithm. First, amaximum-likelihoodprinciple, based on the idea to increase the probability of successful candidate solutions and search steps. The mean of the distribution is updated such that thelikelihoodof previously successful candidate solutions is maximized. Thecovariance matrixof the distribution is updated (incrementally) such that the likelihood of previously successful search steps is increased. Both updates can be interpreted as anatural gradientdescent. Also, in consequence, the CMA conducts an iteratedprincipal components analysisof successful search steps while retainingallprincipal axes.Estimation of distribution algorithmsand theCross-Entropy Methodare based on very similar ideas, but estimate (non-incrementally) the covariance matrix by maximizing the likelihood of successful solutionpointsinstead of successful searchsteps. Second, two paths of the time evolution of the distribution mean of the strategy are recorded, called search or evolution paths. These paths contain significant information about the correlation between consecutive steps. Specifically, if consecutive steps are taken in a similar direction, the evolution paths become long. The evolution paths are exploited in two ways. 
One path is used for the covariance matrix adaptation procedure in place of single successful search steps and facilitates a possibly much faster variance increase of favorable directions. The other path is used to conduct an additional step-size control. This step-size control aims to make consecutive movements of the distribution mean orthogonal in expectation. The step-size control effectively preventspremature convergenceyet allowing fast convergence to an optimum. In the following the most commonly used (μ/μw,λ)-CMA-ES is outlined, where in each iteration step a weighted combination of theμbest out ofλnew candidate solutions is used to update the distribution parameters. The main loop consists of three main parts: 1) sampling of new solutions, 2) re-ordering of the sampled solutions based on their fitness, 3) update of the internal state variables based on the re-ordered samples. Apseudocodeof the algorithm looks as follows. The order of the five update assignments is relevant:m{\displaystyle m}must be updated first,pσ{\displaystyle p_{\sigma }}andpc{\displaystyle p_{c}}must be updated beforeC{\displaystyle C}, andσ{\displaystyle \sigma }must be updated last. The update equations for the five state variables are specified in the following. Given are the search space dimensionn{\displaystyle n}and the iteration stepk{\displaystyle k}. The five state variables are The iteration starts with samplingλ>1{\displaystyle \lambda >1}candidate solutionsxi∈Rn{\displaystyle x_{i}\in \mathbb {R} ^{n}}from amultivariate normal distributionN(mk,σk2Ck){\displaystyle \textstyle {\mathcal {N}}(m_{k},\sigma _{k}^{2}C_{k})}, i.e. fori=1,…,λ{\displaystyle i=1,\ldots ,\lambda } xi∼N(mk,σk2Ck)∼mk+σk×N(0,Ck){\displaystyle {\begin{aligned}x_{i}\ &\sim \ {\mathcal {N}}(m_{k},\sigma _{k}^{2}C_{k})\\&\sim \ m_{k}+\sigma _{k}\times {\mathcal {N}}(0,C_{k})\end{aligned}}} The second line suggests the interpretation as unbiased perturbation (mutation) of the current favorite solution vectormk{\displaystyle m_{k}}(the distribution mean vector). The candidate solutionsxi{\displaystyle x_{i}}are evaluated on the objective functionf:Rn→R{\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} }to be minimized. Denoting thef{\displaystyle f}-sorted candidate solutions as {xi:λ∣i=1…λ}={xi∣i=1…λ}andf(x1:λ)≤⋯≤f(xμ:λ)≤f(xμ+1:λ)≤⋯,{\displaystyle \{x_{i:\lambda }\mid i=1\dots \lambda \}=\{x_{i}\mid i=1\dots \lambda \}{\text{ and }}f(x_{1:\lambda })\leq \dots \leq f(x_{\mu :\lambda })\leq f(x_{\mu +1:\lambda })\leq \cdots ,} the new mean value is computed as mk+1=∑i=1μwixi:λ=mk+∑i=1μwi(xi:λ−mk){\displaystyle {\begin{aligned}m_{k+1}&=\sum _{i=1}^{\mu }w_{i}\,x_{i:\lambda }\\&=m_{k}+\sum _{i=1}^{\mu }w_{i}\,(x_{i:\lambda }-m_{k})\end{aligned}}} where the positive (recombination) weightsw1≥w2≥⋯≥wμ>0{\displaystyle w_{1}\geq w_{2}\geq \dots \geq w_{\mu }>0}sum to one. Typically,μ≤λ/2{\displaystyle \mu \leq \lambda /2}and the weights are chosen such thatμw:=1/∑i=1μwi2≈λ/4{\displaystyle \textstyle \mu _{w}:=1/\sum _{i=1}^{\mu }w_{i}^{2}\approx \lambda /4}. The only feedback used from the objective function here and in the following is an ordering of the sampled candidate solutions due to the indicesi:λ{\displaystyle i:\lambda }. The step-sizeσk{\displaystyle \sigma _{k}}is updated usingcumulative step-size adaptation(CSA), sometimes also denoted aspath length control. The evolution path (or search path)pσ{\displaystyle p_{\sigma }}is updated first. 
pσ←(1−cσ)⏟discount factorpσ+1−(1−cσ)2⏞complements for discounted varianceμwCk−1/2mk+1−mk⏞displacement ofmσk⏟distributed asN(0,I)under neutral selection{\displaystyle p_{\sigma }\gets \underbrace {(1-c_{\sigma })} _{\!\!\!\!\!{\text{discount factor}}\!\!\!\!\!}\,p_{\sigma }+\overbrace {\sqrt {1-(1-c_{\sigma })^{2}}} ^{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!{\text{complements for discounted variance}}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!}\underbrace {{\sqrt {\mu _{w}}}\,C_{k}^{\;-1/2}\,{\frac {\overbrace {m_{k+1}-m_{k}} ^{\!\!\!{\text{displacement of }}m\!\!\!}}{\sigma _{k}}}} _{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!{\text{distributed as }}{\mathcal {N}}(0,I){\text{ under neutral selection}}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!}}σk+1=σk×exp⁡(cσdσ(‖pσ‖E⁡‖N(0,I)‖−1)⏟unbiased about 0 under neutral selection){\displaystyle \sigma _{k+1}=\sigma _{k}\times \exp {\bigg (}{\frac {c_{\sigma }}{d_{\sigma }}}\underbrace {\left({\frac {\|p_{\sigma }\|}{\operatorname {E} \|{\mathcal {N}}(0,I)\|}}-1\right)} _{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!{\text{unbiased about 0 under neutral selection}}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!}{\bigg )}} where The step-sizeσk{\displaystyle \sigma _{k}}is increased if and only if‖pσ‖{\displaystyle \|p_{\sigma }\|}is larger than theexpected value E⁡‖N(0,I)‖=2Γ((n+1)/2)Γ(n/2)≈n(1−14n+121n2){\displaystyle {\begin{aligned}\operatorname {E} \|{\mathcal {N}}(0,I)\|&={\sqrt {2}}\;{\frac {\Gamma ((n+1)/2)}{\Gamma (n/2)}}\\[1ex]&\approx {\sqrt {n}}\,\left(1-{\frac {1}{4n}}+{\frac {1}{21\,n^{2}}}\right)\end{aligned}}} and decreased if it is smaller. For this reason, the step-size update tends to make consecutive stepsCk−1{\displaystyle C_{k}^{-1}}-conjugate, in that after the adaptation has been successful(mk+2−mk+1σk+1)TCk−1mk+1−mkσk≈0{\displaystyle \textstyle \left({\frac {m_{k+2}-m_{k+1}}{\sigma _{k+1}}}\right)^{T}\!C_{k}^{-1}{\frac {m_{k+1}-m_{k}}{\sigma _{k}}}\approx 0}.[1] Finally, thecovariance matrixis updated, where again the respective evolution path is updated first. 
pc←(1−cc)⏟discount factorpc+1[0,αn](‖pσ‖)⏟indicator function1−(1−cc)2⏞complements for discounted varianceμwmk+1−mkσk⏟distributed asN(0,Ck)under neutral selection{\displaystyle p_{c}\gets \underbrace {(1-c_{c})} _{\!\!\!\!\!{\text{discount factor}}\!\!\!\!\!}\,p_{c}+\underbrace {\mathbf {1} _{[0,\alpha {\sqrt {n}}]}(\|p_{\sigma }\|)} _{\text{indicator function}}\overbrace {\sqrt {1-(1-c_{c})^{2}}} ^{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!{\text{complements for discounted variance}}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!}\underbrace {{\sqrt {\mu _{w}}}\,{\frac {m_{k+1}-m_{k}}{\sigma _{k}}}} _{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!{\text{distributed as}}\;{\mathcal {N}}(0,C_{k})\;{\text{under neutral selection}}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!}} Ck+1=(1−c1−cμ+cs)⏟discount factorCk+c1pcpcT⏟rank one matrix+cμ∑i=1μwixi:λ−mkσk(xi:λ−mkσk)T⏟rank⁡min(μ,n)matrix{\displaystyle C_{k+1}=\underbrace {(1-c_{1}-c_{\mu }+c_{s})} _{\!\!\!\!\!{\text{discount factor}}\!\!\!\!\!}\,C_{k}+c_{1}\underbrace {p_{c}p_{c}^{T}} _{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!{\text{rank one matrix}}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!}+\,c_{\mu }\underbrace {\sum _{i=1}^{\mu }w_{i}{\frac {x_{i:\lambda }-m_{k}}{\sigma _{k}}}\left({\frac {x_{i:\lambda }-m_{k}}{\sigma _{k}}}\right)^{T}} _{\operatorname {rank} \min(\mu ,n){\text{ matrix}}}} whereT{\displaystyle T}denotes the transpose and Thecovariance matrixupdate tends to increase thelikelihoodforpc{\displaystyle p_{c}}and for(xi:λ−mk)/σk{\displaystyle (x_{i:\lambda }-m_{k})/\sigma _{k}}to be sampled fromN(0,Ck+1){\displaystyle {\mathcal {N}}(0,C_{k+1})}. This completes the iteration step. The number of candidate samples per iteration,λ{\displaystyle \lambda }, is not determined a priori and can vary in a wide range. Smaller values, for exampleλ=10{\displaystyle \lambda =10}, lead to more local search behavior. Larger values, for exampleλ=10n{\displaystyle \lambda =10n}with default valueμw≈λ/4{\displaystyle \mu _{w}\approx \lambda /4}, render the search more global. Sometimes the algorithm is repeatedly restarted with increasingλ{\displaystyle \lambda }by a factor of two for each restart.[2]Besides of settingλ{\displaystyle \lambda }(or possiblyμ{\displaystyle \mu }instead, if for exampleλ{\displaystyle \lambda }is predetermined by the number of available processors), the above introduced parameters are not specific to the given objective function and therefore not meant to be modified by the user. Given the distribution parameters—mean, variances and covariances—thenormal probability distributionfor sampling new candidate solutions is themaximum entropy probability distributionoverRn{\displaystyle \mathbb {R} ^{n}}, that is, the sample distribution with the minimal amount of prior information built into the distribution. More considerations on the update equations of CMA-ES are made in the following. The CMA-ES implements a stochasticvariable-metricmethod. In the very particular case of a convex-quadratic objective function f(x)=12(x−x∗)TH(x−x∗){\displaystyle f(x)={\textstyle {\frac {1}{2}}}(x-x^{*})^{T}H(x-x^{*})} the covariance matrixCk{\displaystyle C_{k}}adapts to the inverse of theHessian matrixH{\displaystyle H},up toa scalar factor and small random fluctuations. 
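The update equations above can be collected into a single, self-contained iteration. The following numpy sketch is an illustration rather than a reference implementation: the learning rates c_sigma, d_sigma, c_c, c_1, c_mu, the indicator threshold alpha, the weight choice, and the toy objective are all assumptions standing in for the defaults that the text alludes to.

```python
import numpy as np

def cma_iteration(f, m, sigma, C, p_sigma, p_c, lam, rng,
                  c_sigma=0.3, d_sigma=1.0, c_c=0.2, c_1=0.05, c_mu=0.1, alpha=1.5):
    n = len(m)
    mu = lam // 2
    w = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
    w /= w.sum()                                  # positive recombination weights, sum to one
    mu_w = 1.0 / np.sum(w ** 2)                   # variance-effective selection mass

    # 1) sample lambda candidates x_i ~ N(m, sigma^2 C)
    A = np.linalg.cholesky(C)
    y = rng.standard_normal((lam, n)) @ A.T       # y_i ~ N(0, C)
    x = m + sigma * y

    # 2) rank by objective value; only this ordering is fed back
    order = np.argsort([f(xi) for xi in x])
    y_sel = y[order[:mu]]                         # the mu best steps

    # 3) mean update: weighted recombination of the best steps
    y_w = w @ y_sel
    m_new = m + sigma * y_w

    # 4) cumulative step-size adaptation (CSA)
    vals, vecs = np.linalg.eigh(C)
    C_inv_half = vecs @ np.diag(vals ** -0.5) @ vecs.T     # C^{-1/2}
    p_sigma = ((1 - c_sigma) * p_sigma
               + np.sqrt(1 - (1 - c_sigma) ** 2) * np.sqrt(mu_w) * (C_inv_half @ y_w))
    chi_n = np.sqrt(n) * (1 - 1 / (4 * n) + 1 / (21 * n ** 2))   # E||N(0,I)||
    sigma_new = sigma * np.exp((c_sigma / d_sigma) * (np.linalg.norm(p_sigma) / chi_n - 1))

    # 5) covariance update: evolution path, then rank-one and rank-mu terms
    h = 1.0 if np.linalg.norm(p_sigma) < alpha * np.sqrt(n) else 0.0   # indicator function
    p_c = (1 - c_c) * p_c + h * np.sqrt(1 - (1 - c_c) ** 2) * np.sqrt(mu_w) * y_w
    rank_one = np.outer(p_c, p_c)
    rank_mu = sum(wi * np.outer(yi, yi) for wi, yi in zip(w, y_sel))
    C_new = (1 - c_1 - c_mu) * C + c_1 * rank_one + c_mu * rank_mu

    return m_new, sigma_new, C_new, p_sigma, p_c

# toy run on the sphere function (illustrative)
rng = np.random.default_rng(0)
n = 5
f = lambda z: float(np.sum(z ** 2))
m, sigma, C = np.full(n, 3.0), 1.0, np.eye(n)
p_sigma, p_c = np.zeros(n), np.zeros(n)
for _ in range(80):
    m, sigma, C, p_sigma, p_c = cma_iteration(f, m, sigma, C, p_sigma, p_c, lam=12, rng=rng)
print(f(m))   # typically far below the initial value f(m0) = 45
```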
More general, also on the functiong∘f{\displaystyle g\circ f}, whereg{\displaystyle g}is strictly increasing and therefore order preserving, the covariance matrixCk{\displaystyle C_{k}}adapts toH−1{\displaystyle H^{-1}},up toa scalar factor and small random fluctuations. For selection ratioλ/μ→∞{\displaystyle \lambda /\mu \to \infty }(and hence population sizeλ→∞{\displaystyle \lambda \to \infty }), theμ{\displaystyle \mu }selected solutions yield an empirical covariance matrix reflective of the inverse-Hessian even in evolution strategies without adaptation of the covariance matrix. This result has been proven forμ=1{\displaystyle \mu =1}on a static model, relying on the quadratic approximation.[3] The update equations for mean and covariance matrix maximize alikelihoodwhile resembling anexpectation–maximization algorithm. The update of the mean vectorm{\displaystyle m}maximizes a log-likelihood, such that mk+1=arg⁡maxm∑i=1μwilog⁡pN(xi:λ∣m){\displaystyle m_{k+1}=\arg \max _{m}\sum _{i=1}^{\mu }w_{i}\log p_{\mathcal {N}}(x_{i:\lambda }\mid m)} where log⁡pN(x)=−12log⁡det(2πC)−12(x−m)TC−1(x−m){\displaystyle \log p_{\mathcal {N}}(x)=-{\tfrac {1}{2}}\log \det(2\pi C)-{\tfrac {1}{2}}(x-m)^{T}C^{-1}(x-m)} denotes the log-likelihood ofx{\displaystyle x}from a multivariate normal distribution with meanm{\displaystyle m}and any positive definite covariance matrixC{\displaystyle C}. To see thatmk+1{\displaystyle m_{k+1}}is independent ofC{\displaystyle C}remark first that this is the case for any diagonal matrixC{\displaystyle C}, because the coordinate-wise maximizer is independent of a scaling factor. Then, rotation of the data points or choosingC{\displaystyle C}non-diagonal are equivalent. The rank-μ{\displaystyle \mu }update of the covariance matrix, that is, the right most summand in the update equation ofCk{\displaystyle C_{k}}, maximizes a log-likelihood in that ∑i=1μwixi:λ−mkσk(xi:λ−mkσk)T=arg⁡maxC∑i=1μwilog⁡pN(xi:λ−mkσk|C){\displaystyle \sum _{i=1}^{\mu }w_{i}{\frac {x_{i:\lambda }-m_{k}}{\sigma _{k}}}\left({\frac {x_{i:\lambda }-m_{k}}{\sigma _{k}}}\right)^{T}=\arg \max _{C}\sum _{i=1}^{\mu }w_{i}\log p_{\mathcal {N}}\left(\left.{\frac {x_{i:\lambda }-m_{k}}{\sigma _{k}}}\right|C\right)} forμ≥n{\displaystyle \mu \geq n}(otherwiseC{\displaystyle C}is singular, but substantially the same result holds forμ<n{\displaystyle \mu <n}). Here,pN(x|C){\displaystyle p_{\mathcal {N}}(x|C)}denotes the likelihood ofx{\displaystyle x}from a multivariate normal distribution with zero mean and covariance matrixC{\displaystyle C}. Therefore, forc1=0{\displaystyle c_{1}=0}andcμ=1{\displaystyle c_{\mu }=1},Ck+1{\displaystyle C_{k+1}}is the abovemaximum-likelihoodestimator. Seeestimation of covariance matricesfor details on the derivation. Akimotoet al.[4]and Glasmacherset al.[5]discovered independently that the update of the distribution parameters resembles the descent in direction of a samplednatural gradientof the expected objective function valueEf(x){\displaystyle Ef(x)}(to be minimized), where the expectation is taken under the sample distribution. With the parameter setting ofcσ=0{\displaystyle c_{\sigma }=0}andc1=0{\displaystyle c_{1}=0}, i.e. without step-size control and rank-one update, CMA-ES can thus be viewed as an instantiation ofNatural Evolution Strategies(NES).[4][5]Thenaturalgradientis independent of the parameterization of the distribution. 
Taken with respect to the parametersθof the sample distributionp, the gradient ofEf(x){\displaystyle Ef(x)}can be expressed as ∇θE(f(x)∣θ)=∇θ∫Rnf(x)p(x)dx=∫Rnf(x)∇θp(x)dx=∫Rnf(x)p(x)∇θln⁡p(x)dx=E⁡(f(x)∇θln⁡p(x∣θ)){\displaystyle {\begin{aligned}{\nabla }_{\!\theta }E(f(x)\mid \theta )&=\nabla _{\!\theta }\int _{\mathbb {R} ^{n}}f(x)p(x)\,\mathrm {d} x\\&=\int _{\mathbb {R} ^{n}}f(x)\nabla _{\!\theta }p(x)\,\mathrm {d} x\\&=\int _{\mathbb {R} ^{n}}f(x)p(x)\nabla _{\!\theta }\ln p(x)\,\mathrm {d} x\\&=\operatorname {E} (f(x)\nabla _{\!\theta }\ln p(x\mid \theta ))\end{aligned}}} wherep(x)=p(x∣θ){\displaystyle p(x)=p(x\mid \theta )}depends on the parameter vectorθ{\displaystyle \theta }. The so-calledscore function,∇θln⁡p(x∣θ)=∇θp(x)p(x){\displaystyle \nabla _{\!\theta }\ln p(x\mid \theta )={\frac {\nabla _{\!\theta }p(x)}{p(x)}}}, indicates the relative sensitivity ofpw.r.t.θ, and the expectation is taken with respect to the distributionp. ThenaturalgradientofEf(x){\displaystyle Ef(x)}, complying with theFisher information metric(an informational distance measure between probability distributions and the curvature of therelative entropy), now reads ∇~E⁡(f(x)∣θ)=Fθ−1∇θE⁡(f(x)∣θ){\displaystyle {\begin{aligned}{\tilde {\nabla }}\operatorname {E} (f(x)\mid \theta )&=F_{\theta }^{-1}\nabla _{\!\theta }\operatorname {E} (f(x)\mid \theta )\end{aligned}}} where theFisher informationmatrixFθ{\displaystyle F_{\theta }}is the expectation of theHessianof−lnpand renders the expression independent of the chosen parameterization. Combining the previous equalities we get ∇~E⁡(f(x)∣θ)=Fθ−1E⁡(f(x)∇θln⁡p(x∣θ))=E⁡(f(x)Fθ−1∇θln⁡p(x∣θ)){\displaystyle {\begin{aligned}{\tilde {\nabla }}\operatorname {E} (f(x)\mid \theta )&=F_{\theta }^{-1}\operatorname {E} (f(x)\nabla _{\!\theta }\ln p(x\mid \theta ))\\&=\operatorname {E} (f(x)F_{\theta }^{-1}\nabla _{\!\theta }\ln p(x\mid \theta ))\end{aligned}}} A Monte Carlo approximation of the latter expectation takes the average overλsamples fromp ∇~E^θ(f):=−∑i=1λwi⏞preference weightFθ−1∇θln⁡p(xi:λ∣θ)⏟candidate direction fromxi:λwithwi=−f(xi:λ)/λ{\displaystyle {\tilde {\nabla }}{\widehat {E}}_{\theta }(f):=-\sum _{i=1}^{\lambda }\overbrace {w_{i}} ^{\!\!\!\!{\text{preference weight}}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!}\underbrace {F_{\theta }^{-1}\nabla _{\!\theta }\ln p(x_{i:\lambda }\mid \theta )} _{\!\!\!\!\!{\text{candidate direction from }}x_{i:\lambda }\!\!\!\!\!}\quad {\text{with }}w_{i}=-f(x_{i:\lambda })/\lambda } where the notationi:λ{\displaystyle i:\lambda }from above is used and thereforewi{\displaystyle w_{i}}are monotonically decreasing ini{\displaystyle i}. Ollivieret al.[6]finally found a rigorous derivation for the weights,wi{\displaystyle w_{i}}, as they are defined in the CMA-ES. The weights are anasymptotically consistent estimatorof theCDFoff(X){\displaystyle f(X)}at the points of thei{\displaystyle i}thorder statisticf(xi:λ){\displaystyle f(x_{i:\lambda })}, as defined above, whereX∼p(.|θ){\displaystyle X\sim p(.|\theta )}, composed with a fixed monotonically decreasing transformationw{\displaystyle w}, that is, wi=w(rank(f(xi:λ))−1/2λ).{\displaystyle w_{i}=w\left({\frac {{\mathsf {rank}}(f(x_{i:\lambda }))-1/2}{\lambda }}\right).} These weights make the algorithm insensitive to the specificf{\displaystyle f}-values. More concisely, using theCDFestimator off{\displaystyle f}instead off{\displaystyle f}itself let the algorithm only depend on the ranking off{\displaystyle f}-values but not on their underlying distribution. 
This renders the algorithm invariant to strictly increasingf{\displaystyle f}-transformations. Now we define θ=[mkTvec⁡(Ck)Tσk]T∈Rn+n2+1{\displaystyle \theta =[m_{k}^{T}\operatorname {vec} (C_{k})^{T}\sigma _{k}]^{T}\in \mathbb {R} ^{n+n^{2}+1}} such thatp(⋅∣θ){\displaystyle p(\cdot \mid \theta )}is the density of themultivariate normal distributionN(mk,σk2Ck){\displaystyle {\mathcal {N}}(m_{k},\sigma _{k}^{2}C_{k})}. Then, we have an explicit expression for the inverse of the Fisher information matrix whereσk{\displaystyle \sigma _{k}}is fixed Fθ∣σk−1=[σk2Ck002Ck⊗Ck]{\displaystyle F_{\theta \mid \sigma _{k}}^{-1}=\left[{\begin{array}{cc}\sigma _{k}^{2}C_{k}&0\\0&2C_{k}\otimes C_{k}\end{array}}\right]} and for ln⁡p(x∣θ)=ln⁡p(x∣mk,σk2Ck)=−12(x−mk)Tσk−2Ck−1(x−mk)−12ln⁡det(2πσk2Ck){\displaystyle {\begin{aligned}\ln p(x\mid \theta )&=\ln p(x\mid m_{k},\sigma _{k}^{2}C_{k})\\[1ex]&=-{\tfrac {1}{2}}(x-m_{k})^{T}\sigma _{k}^{-2}C_{k}^{-1}(x-m_{k})-{\tfrac {1}{2}}\ln \det(2\pi \sigma _{k}^{2}C_{k})\end{aligned}}} and, after some calculations, the updates in the CMA-ES turn out as[4] mk+1=mk−[∇~E^θ(f)]1,…,n⏟natural gradient for mean=mk+∑i=1λwi(xi:λ−mk){\displaystyle {\begin{aligned}m_{k+1}&=m_{k}-\underbrace {[{\tilde {\nabla }}{\widehat {E}}_{\theta }(f)]_{1,\dots ,n}} _{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!{\text{natural gradient for mean}}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!}\\&=m_{k}+\sum _{i=1}^{\lambda }w_{i}(x_{i:\lambda }-m_{k})\end{aligned}}} and Ck+1=Ck+c1(pcpcT−Ck)−cμmat⁡([∇~E^θ(f)]n+1,…,n+n2⏞natural gradient for covariance matrix)=Ck+c1(pcpcT−Ck)+cμ∑i=1λwi(xi:λ−mkσk(xi:λ−mkσk)T−Ck){\displaystyle {\begin{aligned}C_{k+1}&=C_{k}+c_{1}(p_{c}p_{c}^{T}-C_{k})-c_{\mu }\operatorname {mat} (\overbrace {[{\tilde {\nabla }}{\widehat {E}}_{\theta }(f)]_{n+1,\dots ,n+n^{2}}} ^{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!{\text{natural gradient for covariance matrix}}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!})\\&=C_{k}+c_{1}(p_{c}p_{c}^{T}-C_{k})+c_{\mu }\sum _{i=1}^{\lambda }w_{i}\left({\frac {x_{i:\lambda }-m_{k}}{\sigma _{k}}}\left({\frac {x_{i:\lambda }-m_{k}}{\sigma _{k}}}\right)^{T}-C_{k}\right)\end{aligned}}} where mat forms the proper matrix from the respective natural gradient sub-vector. That means, settingc1=cσ=0{\displaystyle c_{1}=c_{\sigma }=0}, the CMA-ES updates descend in direction of the approximation∇~E^θ(f){\displaystyle {\tilde {\nabla }}{\widehat {E}}_{\theta }(f)}of the natural gradient while using different step-sizes (learning rates 1 andcμ{\displaystyle c_{\mu }}) for theorthogonal parametersm{\displaystyle m}andC{\displaystyle C}respectively. More recent versions allow a different learning rate for the meanm{\displaystyle m}as well.[7]The most recent version of CMA-ES also use a different functionw{\displaystyle w}form{\displaystyle m}andC{\displaystyle C}with negative values only for the latter (so-called active CMA). It is comparatively easy to see that the update equations of CMA-ES satisfy some stationarity conditions, in that they are essentially unbiased. 
Under neutral selection, wherexi:λ∼N(mk,σk2Ck){\displaystyle x_{i:\lambda }\sim {\mathcal {N}}(m_{k},\sigma _{k}^{2}C_{k})}, we find that E⁡(mk+1∣mk)=mk{\displaystyle \operatorname {E} (m_{k+1}\mid m_{k})=m_{k}} and under some mild additional assumptions on the initial conditions E⁡(log⁡σk+1∣σk)=log⁡σk{\displaystyle \operatorname {E} (\log \sigma _{k+1}\mid \sigma _{k})=\log \sigma _{k}} and with an additional minor correction in the covariance matrix update for the case where the indicator function evaluates to zero, we find E⁡(Ck+1∣Ck)=Ck{\displaystyle \operatorname {E} (C_{k+1}\mid C_{k})=C_{k}} Invariance propertiesimply uniform performance on a class of objective functions. They have been argued to be an advantage, because they allow to generalize and predict the behavior of the algorithm and therefore strengthen the meaning of empirical results obtained on single functions. The following invariance properties have been established for CMA-ES. Any serious parameter optimization method should be translation invariant, but most methods do not exhibit all the above described invariance properties. A prominent example with the same invariance properties is theNelder–Mead method, where the initial simplex must be chosen respectively. Conceptual considerations like the scale-invariance property of the algorithm, the analysis of simplerevolution strategies, and overwhelming empirical evidence suggest that the algorithm converges on a large class of functions fast to the global optimum, denoted asx∗{\displaystyle x^{*}}. On some functions, convergence occurs independently of the initial conditions with probability one. On some functions the probability is smaller than one and typically depends on the initialm0{\displaystyle m_{0}}andσ0{\displaystyle \sigma _{0}}. Empirically, the fastest possible convergence rate ink{\displaystyle k}for rank-based direct search methods can often be observed (depending on the context denoted aslinear convergenceorlog-linearorexponentialconvergence). Informally, we can write ‖mk−x∗‖≈‖m0−x∗‖×e−ck{\displaystyle \|m_{k}-x^{*}\|\;\approx \;\|m_{0}-x^{*}\|\times e^{-ck}} for somec>0{\displaystyle c>0}, and more rigorously 1k∑i=1klog⁡‖mi−x∗‖‖mi−1−x∗‖=1klog⁡‖mk−x∗‖‖m0−x∗‖→−c<0fork→∞,{\displaystyle {\frac {1}{k}}\sum _{i=1}^{k}\log {\frac {\|m_{i}-x^{*}\|}{\|m_{i-1}-x^{*}\|}}\;=\;{\frac {1}{k}}\log {\frac {\|m_{k}-x^{*}\|}{\|m_{0}-x^{*}\|}}\;\to \;-c<0\quad {\text{for }}k\to \infty \;,} or similarly, E⁡log⁡‖mk−x∗‖‖mk−1−x∗‖→−c<0fork→∞.{\displaystyle \operatorname {E} \log {\frac {\|m_{k}-x^{*}\|}{\|m_{k-1}-x^{*}\|}}\;\to \;-c<0\quad {\text{for }}k\to \infty \;.} This means that on average the distance to the optimum decreases in each iteration by a "constant" factor, namely byexp⁡(−c){\displaystyle \exp(-c)}. The convergence ratec{\displaystyle c}is roughly0.1λ/n{\displaystyle 0.1\lambda /n}, givenλ{\displaystyle \lambda }is not much larger than the dimensionn{\displaystyle n}. Even with optimalσ{\displaystyle \sigma }andC{\displaystyle C}, the convergence ratec{\displaystyle c}cannot largely exceed0.25λ/n{\displaystyle 0.25\lambda /n}, given the above recombination weightswi{\displaystyle w_{i}}are all non-negative. The actual linear dependencies inλ{\displaystyle \lambda }andn{\displaystyle n}are remarkable and they are in both cases the best one can hope for in this kind of algorithm. Yet, a rigorous proof of convergence is missing. 
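The stationarity ("unbiasedness") statements at the start of this passage are easy to probe numerically. The short Monte Carlo check below, with all concrete numbers being arbitrary choices, imitates neutral selection by ranking the samples at random and confirms that the recombined mean is then an unbiased estimate of the old mean, simply because the recombination weights sum to one.

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam, mu = 4, 10, 5
w = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
w /= w.sum()                                  # recombination weights, sum to one
m = np.array([1.0, -2.0, 0.5, 3.0])

recombined = []
for _ in range(20000):
    x = rng.normal(m, 1.0, size=(lam, n))     # neutral: sigma^2 C = I, f plays no role
    chosen = rng.permutation(lam)[:mu]        # "selection" that ignores the f-values
    recombined.append(w @ x[chosen])
print(np.mean(recombined, axis=0))            # approximately [1.0, -2.0, 0.5, 3.0]
```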
Using a non-identity covariance matrix for themultivariate normal distributioninevolution strategiesis equivalent to a coordinate system transformation of the solution vectors,[8]mainly because the sampling equation xi∼mk+σk×N(0,Ck)∼mk+σk×Ck1/2N(0,I){\displaystyle {\begin{aligned}x_{i}&\sim \ m_{k}+\sigma _{k}\times {\mathcal {N}}(0,C_{k})\\&\sim \ m_{k}+\sigma _{k}\times C_{k}^{1/2}{\mathcal {N}}(0,I)\end{aligned}}} can be equivalently expressed in an "encoded space" asCk−1/2xi⏟represented in the encode space∼Ck−1/2mk⏟+σk×N(0,I){\displaystyle \underbrace {C_{k}^{-1/2}x_{i}} _{{\text{represented in the encode space}}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!}\sim \ \underbrace {C_{k}^{-1/2}m_{k}} {}+\sigma _{k}\times {\mathcal {N}}(0,I)} The covariance matrix defines abijectivetransformation (encoding) for all solution vectors into a space, where the sampling takes place with identity covariance matrix. Because the update equations in the CMA-ES are invariant under linear coordinate system transformations, the CMA-ES can be re-written as an adaptive encoding procedure applied to a simpleevolution strategywith identity covariance matrix.[8]This adaptive encoding procedure is not confined to algorithms that sample from a multivariate normal distribution (like evolution strategies), but can in principle be applied to any iterative search method. In contrast to most otherevolutionary algorithms, the CMA-ES is, from the user's perspective, quasi-parameter-free. The user has to choose an initial solution point,m0∈Rn{\displaystyle m_{0}\in \mathbb {R} ^{n}}, and the initial step-size,σ0>0{\displaystyle \sigma _{0}>0}. Optionally, the number of candidate samples λ (population size) can be modified by the user in order to change the characteristic search behavior (see above) and termination conditions can or should be adjusted to the problem at hand. The CMA-ES has been empirically successful in hundreds of applications and is considered to be useful in particular on non-convex, non-separable, ill-conditioned, multi-modal or noisy objective functions.[9]One survey of Black-Box optimizations found it outranked 31 other optimization algorithms, performing especially strongly on "difficult functions" or larger-dimensional search spaces.[10] The search space dimension ranges typically between two and a few hundred. Assuming a black-box optimization scenario, where gradients are not available (or not useful) and function evaluations are the only considered cost of search, the CMA-ES method is likely to be outperformed by other methods in the following conditions: On separable functions, the performance disadvantage is likely to be most significant in that CMA-ES might not be able to find at all comparable solutions. On the other hand, on non-separable functions that are ill-conditioned or rugged or can only be solved with more than100n{\displaystyle 100n}function evaluations, the CMA-ES shows most often superior performance. The (1+1)-CMA-ES[11]generates only one candidate solution per iteration step which becomes the new distribution mean if it is better than the current mean. Forcc=1{\displaystyle c_{c}=1}the (1+1)-CMA-ES is a close variant ofGaussian adaptation. SomeNatural Evolution Strategiesare close variants of the CMA-ES with specific parameter settings. 
Natural Evolution Strategies do not utilize evolution paths (that means in CMA-ES setting cc=cσ=1{\displaystyle c_{c}=c_{\sigma }=1}) and they formalize the update of variances and covariances on a Cholesky factor instead of a covariance matrix. The CMA-ES has also been extended to multiobjective optimization as MO-CMA-ES.[12] Another remarkable extension has been the addition of a negative update of the covariance matrix with the so-called active CMA.[13] Using the additional active CMA update is considered the default variant nowadays.[7]
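In practice the quasi-parameter-free interface described above means that a user supplies little more than the initial point m0 and the initial step-size σ0. The sketch below assumes the widely used third-party Python package cma (installable via pip) and its ask-and-tell interface; the call names are the package's, not the article's, and may differ between versions.

```python
import numpy as np
import cma   # third-party package implementing CMA-ES

def rosenbrock(x):
    x = np.asarray(x)
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2))

m0 = np.zeros(10)    # initial solution point, chosen by the user
sigma0 = 0.5         # initial step-size, chosen by the user

es = cma.CMAEvolutionStrategy(m0, sigma0)
while not es.stop():
    candidates = es.ask()                               # sample lambda new solutions
    es.tell(candidates, [rosenbrock(c) for c in candidates])
print(es.result.xbest)                                  # best solution found so far
```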
https://en.wikipedia.org/wiki/CMA-ES
The approximation error in a given data value is the discrepancy that arises when an exact, true value is compared against some approximation derived for it. This error can be quantified and expressed in two principal ways: as an absolute error, which denotes the direct numerical magnitude of the discrepancy irrespective of the true value's scale, or as a relative error, which provides a scaled measure of the error by considering the absolute error in proportion to the exact data value, thus offering a context-dependent assessment of the error's significance. An approximation error can arise for many reasons. Prominent among these are limitations related to computing machine precision, where digital systems cannot represent all real numbers with perfect accuracy, leading to unavoidable truncation or rounding. Another common source is inherent measurement error, stemming from the practical limitations of instruments, environmental factors, or observational processes (for instance, if the actual length of a piece of paper is precisely 4.53 cm, but the measuring ruler only permits an estimation to the nearest 0.1 cm, this constraint could lead to a recorded measurement of 4.5 cm, thereby introducing an error). In the mathematical field of numerical analysis, the concept of numerical stability associated with an algorithm indicates the extent to which initial errors or perturbations present in the input data of the algorithm are likely to propagate and potentially amplify into substantial errors in the final output. Algorithms that are characterized as numerically stable are robust in the sense that they do not yield a significantly magnified error in their output even when the input is slightly malformed or contains minor inaccuracies; conversely, numerically unstable algorithms may exhibit dramatic error growth from small input changes, rendering their results unreliable.[1] Given some true or exact value v, we formally state that an approximation vapprox estimates or represents v with the magnitude of the absolute error bounded by a positive value ε (i.e., ε > 0), if the following inequality holds:[2][3] |v − vapprox| ≤ ε{\displaystyle |v-v_{\text{approx}}|\leq \varepsilon }, where the vertical bars, | |, denote the absolute value of the difference between the true value v and its approximation vapprox. This operation signifies the magnitude of the error, irrespective of whether the approximation is an overestimate or an underestimate. Similarly, we state that vapprox approximates the value v with the magnitude of the relative error bounded by a positive value η (i.e., η > 0), provided v is not zero (v ≠ 0), if the subsequent inequality is satisfied: |v−vapprox|≤η⋅|v|{\displaystyle |v-v_{\text{approx}}|\leq \eta \cdot |v|}. This definition ensures that η acts as an upper bound on the ratio of the absolute error to the magnitude of the true value. If v ≠ 0, then the actual relative error, often also denoted by η in context (representing the calculated value rather than a bound), is precisely calculated as η = |v − vapprox| / |v|{\displaystyle \eta ={\frac {|v-v_{\text{approx}}|}{|v|}}}; in other words, with the absolute error written as ε = |v − vapprox|, the two quantities are related by η = ε / |v|. The percent error, often denoted as δ, is a common and intuitive way of expressing the relative error, effectively scaling the relative error value to a percentage for easier interpretation and comparison across different contexts:[3] δ = 100% × η = 100% × |v − vapprox| / |v|{\displaystyle \delta =100\%\times \eta =100\%\times {\frac {|v-v_{\text{approx}}|}{|v|}}}. An error bound rigorously defines an established upper limit on either the relative or the absolute magnitude of an approximation error.
Such a bound thereby provides a formal guarantee on the maximum possible deviation of the approximation from the true value, which is critical in applications requiring known levels of precision.[4] To illustrate these concepts with a numerical example, consider an instance where the exact, accepted value is 50, and its corresponding approximation is determined to be 49.9. In this particular scenario, the absolute error is precisely 0.1 (calculated as |50 − 49.9|), and the relative error is calculated as the absolute error 0.1 divided by the true value 50, which equals 0.002. This relative error can also be expressed as 0.2%. In a more practical setting, such as when measuring the volume of liquid in a 6 mL beaker, if the instrument reading indicates 5 mL while the true volume is actually 6 mL, the percent error for this particular measurement situation is, when rounded to one decimal place, approximately 16.7% (calculated as |(6 mL − 5 mL) / 6 mL| × 100%). The utility of relative error becomes particularly evident when it is employed to compare the quality of approximations for numbers that possess widely differing magnitudes; for example, approximating the number 1,000 with an absolute error of 3 results in a relative error of 0.003 (or 0.3%). This is, within the context of most scientific or engineering applications, considered a significantly less accurate approximation than approximating the much larger number 1,000,000 with an identical absolute error of 3. In the latter case, the relative error is a mere 0.000003 (or 0.0003%). In the first case, the relative error is 0.003, whereas in the second, more favorable scenario, it is a substantially smaller value of only 0.000003. This comparison clearly highlights how relative error provides a more meaningful and contextually appropriate assessment of precision, especially when dealing with values across different orders of magnitude. There are two crucial features or caveats associated with the interpretation and application of relative error that should always be kept in mind. Firstly, relative error becomes mathematically undefined whenever the true value (v) is zero, because this true value appears in the denominator of its calculation (as detailed in the formal definition provided above), and division by zero is an undefined operation. Secondly, the concept of relative error is most truly meaningful and consistently interpretable only when the measurements under consideration are performed on aratio scale. This type of scale is characterized by possessing a true, non-arbitrary zero point, which signifies the complete absence of the quantity being measured. If this condition of a ratio scale is not met (e.g., when using interval scales like Celsius temperature), the calculated relative error can become highly sensitive to the choice of measurement units, potentially leading to misleading interpretations. For example, when an absolute error in atemperaturemeasurement given in theCelsius scaleis 1 °C, and the true value is 2 °C, the relative error is 0.5 (or 50%, calculated as |1°C / 2°C|). However, if this exact same approximation, representing the same physical temperature difference, is made using theKelvin scale(which is a ratio scale where 0 K represents absolute zero), a 1 K absolute error (equivalent in magnitude to a 1 °C error) with the same true value of 275.15 K (which is equivalent to 2 °C) gives a markedly different relative error of approximately 0.00363, or about 3.63×10−3(calculated as |1 K / 275.15 K|). 
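The quantities in these examples are easy to reproduce. Below is a small Python illustration (the helper names are ours, not standard terminology) that recomputes the worked numbers above: the 50 versus 49.9 case, the 6 mL beaker read as 5 mL, the comparison of an absolute error of 3 against 1,000 and against 1,000,000, and the Celsius versus Kelvin comparison.

```python
def absolute_error(v, v_approx):
    return abs(v - v_approx)

def relative_error(v, v_approx):
    if v == 0:
        raise ValueError("relative error is undefined when the true value is zero")
    return abs(v - v_approx) / abs(v)

def percent_error(v, v_approx):
    return 100.0 * relative_error(v, v_approx)

print(absolute_error(50, 49.9))            # 0.1 (up to floating-point rounding)
print(relative_error(50, 49.9))            # 0.002, i.e. 0.2%
print(round(percent_error(6, 5), 1))       # 16.7, the beaker example
print(relative_error(1_000, 997))          # 0.003
print(relative_error(1_000_000, 999_997))  # 3e-06: the same absolute error of 3 matters far less
print(relative_error(2.0, 1.0))            # 0.5: a 1 degree error at a true reading of 2 deg C
print(relative_error(275.15, 274.15))      # ~0.00363: the same physical error on the Kelvin scale
```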
This disparity underscores the importance of the underlying measurement scale. When comparing the behavior and intrinsic characteristics of these two fundamental error types, it is important to recognize their differing sensitivities to common arithmetic operations. Specifically, statements and conclusions made aboutrelative errorsare notably sensitive to the addition of a non-zero constant to the underlying true and approximated values, as such an addition alters the base value against which the error is relativized, thereby changing the ratio. However, relative errors remain unaffected by the multiplication of both the true and approximated values by the same non-zero constant, because this constant would appear in both the numerator (of the absolute error) and the denominator (the true value) of the relative error calculation, and would consequently cancel out, leaving the relative error unchanged. Conversely, forabsolute errors, the opposite relationship holds true: absolute errors are directly sensitive to the multiplication of the underlying values by a constant (as this scales the magnitude of the difference itself), but they are largely insensitive to the addition of a constant to these values (since adding the same constant to both the true value and its approximation does not change the difference between them: (v+c) − (vapprox+c) =v−vapprox).[5]: 34 In the realm of computational complexity theory, we define that a real valuevispolynomially computable with absolute errorfrom a given input if, for any specified rational numberε> 0 representing the desired maximum permissible absolute error, it is algorithmically possible to compute a rational numbervapproxsuch thatvapproxapproximatesvwith an absolute error no greater thanε(formally, |v−vapprox| ≤ε). Crucially, this computation must be achievable within a time duration that is polynomial in terms of the size of the input data and the encoding size ofε(the latter typically being of the order O(log(1/ε)) bits, reflecting the number of bits needed to represent the precision). Analogously, the valuevis consideredpolynomially computable with relative errorif, for any specified rational numberη> 0 representing the desired maximum permissible relative error, it is possible to compute a rational numbervapproxthat approximatesvwith a relative error no greater thanη(formally, |(v−vapprox)/v| ≤η, assumingv≠ 0). This computation, similar to the absolute error case, must likewise be achievable in an amount of time that is polynomial in the size of the input data and the encoding size ofη(which is typically O(log(1/η)) bits). It can be demonstrated that if a valuevis polynomially computable with relative error (utilizing an algorithm that we can designate as REL), then it is consequently also polynomially computable with absolute error.Proof sketch: Letε> 0 be the target maximum absolute error that we wish to achieve. The procedure commences by invoking the REL algorithm with a chosen relative error bound of, for example,η= 1/2. This initial step aims to find a rational number approximationr1such that the inequality |v−r1| ≤ |v|/2 holds true. From this relationship, by applying the reverse triangle inequality (|v| − |r1| ≤ |v−r1|), we can deduce that |v| ≤ 2|r1| (this holds assumingr1≠ 0; ifr1= 0, then the relative error condition impliesvmust also be 0, in which case the problem of achieving any absolute errorε> 0 is trivial, asvapprox= 0 works, and we are done). 
Given that the REL algorithm operates in polynomial time, the encoding length of the computedr1will necessarily be polynomial with respect to the input size. Subsequently, the REL algorithm is invoked a second time, now with a new, typically much smaller, relative error target set toη'=ε/ (2|r1|) (this step also assumesr1is non-zero, which we can ensure or handle as a special case). This second application of REL yields another rational number approximation,r2, that satisfies the condition |v−r2| ≤η'|v|. Substituting the expression forη'gives |v−r2| ≤ (ε/ (2|r1|)) |v|. Now, using the previously derived inequality |v| ≤ 2|r1|, we can bound the term: |v−r2| ≤ (ε/ (2|r1|)) × (2|r1|) =ε. Thus, the approximationr2successfully approximatesvwith the desired absolute errorε, demonstrating that polynomial computability with relative error implies polynomial computability with absolute error.[5]: 34 The reverse implication, namely that polynomial computability with absolute error implies polynomial computability with relative error, is generally not true without imposing additional conditions or assumptions. However, a significant special case exists: if one can assume that some positive lower boundbon the magnitude ofv(i.e., |v| >b> 0) can itself be computed in polynomial time, and ifvis also known to be polynomially computable with absolute error (perhaps via an algorithm designated as ABS), thenvalso becomes polynomially computable with relative error. This is because one can simply invoke the ABS algorithm with a carefully chosen target absolute error, specificallyεtarget=ηb, whereηis the desired relative error. The resulting approximationvapproxwould satisfy |v−vapprox| ≤ηb. To see the implication for relative error, we divide by |v| (which is non-zero): |(v−vapprox)/v| ≤ (ηb)/|v|. Since we have the condition |v| >b, it follows thatb/|v| < 1. Therefore, the relative error is bounded byη× (b/|v|) <η× 1 =η, which is the desired outcome for polynomial computability with relative error. An algorithm that, for every given rational numberη> 0, successfully computes a rational numbervapproxthat approximatesvwith a relative error no greater thanη, and critically, does so in a time complexity that is polynomial in both the size of the input and in the reciprocal of the relative error, 1/η(rather than being polynomial merely in log(1/η), which typically allows for faster computation whenηis extremely small), is known as aFully Polynomial-Time Approximation Scheme (FPTAS). The dependence on 1/ηrather than log(1/η) is a defining characteristic of FPTAS and distinguishes it from weaker approximation schemes. In the context of most indicating measurement instruments, such as analog or digital voltmeters, pressure gauges, and thermometers, the specified accuracy is frequently guaranteed by their manufacturers as a certain percentage of the instrument's full-scale reading capability, rather than as a percentage of the actual reading. The defined boundaries or limits of these permissible deviations from the true or specified values under operational conditions are commonly referred to as limiting errors or, alternatively, guarantee errors. This method of specifying accuracy implies that the maximum possible absolute error can be larger when measuring values towards the higher end of the instrument's scale, while the relative error with respect to the full-scale value itself remains constant across the range. 
Consequently, the relative error with respect to the actual measured value can become quite large for readings at the lower end of the instrument's scale.[6] The fundamental definitions of absolute and relative error, as presented primarily for scalar (one-dimensional) values, can be naturally and rigorously extended to more complex scenarios where the quantity of interestv{\displaystyle v}and its corresponding approximationvapprox{\displaystyle v_{\text{approx}}}aren-dimensional vectors, matrices, or, more generally, elements of anormed vector space. This important generalization is typically achieved by systematically replacing theabsolute valuefunction (which effectively measures magnitude or "size" for scalar numbers) with an appropriatevectorn-normor matrix norm. Common examples of such norms include the L1norm (sum of absolute component values), the L2norm (Euclidean norm, or square root of the sum of squared components), and the L∞norm (maximum absolute component value). These norms provide a way to quantify the "distance" or "difference" between the true vector (or matrix) and its approximation in a multi-dimensional space, thereby allowing for analogous definitions of absolute and relative error in these higher-dimensional contexts.[7]
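As a small illustration of this norm-based generalization (the vectors here are arbitrary examples), the absolute value is simply replaced by a chosen vector norm, for instance via numpy.linalg.norm:

```python
import numpy as np

v = np.array([3.0, 4.0, 0.0])             # "true" vector (example)
v_approx = np.array([3.1, 3.9, 0.05])     # its approximation (example)

for name, order in (("L1", 1), ("L2 (Euclidean)", 2), ("L-infinity", np.inf)):
    abs_err = np.linalg.norm(v - v_approx, ord=order)
    rel_err = abs_err / np.linalg.norm(v, ord=order)
    print(f"{name:15s}  absolute error = {abs_err:.4f}   relative error = {rel_err:.4f}")
```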
https://en.wikipedia.org/wiki/Percentage_error
reCAPTCHAInc.[1]is aCAPTCHAsystem owned byGoogle. It enables web hosts to distinguish between human and automated access to websites. The original version asked users to decipher hard-to-read text or match images. Version 2 also asked users to decipher text or match images if the analysis of cookies and canvas rendering suggested the page was being downloaded automatically.[2]Since version 3, reCAPTCHA will never interrupt users and is intended to run automatically when users load pages or click buttons.[3] The original iteration of the service was amass collaborationplatform designed for the digitization of books, particularly those that were too illegible to bescanned by computers. The verification prompts utilized pairs of words from scanned pages, with one known word used as a control for verification, and the second used tocrowdsourcethe reading of an uncertain word.[4]reCAPTCHA was originally developed byLuis von Ahn, David Abraham,Manuel Blum, Michael Crawford, Ben Maurer, Colin McMillen, and Edison Tan atCarnegie Mellon University'smainPittsburghcampus.[5]It was acquired byGooglein September 2009.[6]The system helped to digitize the archives ofThe New York Times, and was subsequently used byGoogle Booksfor similar purposes.[7] The system was reported as displaying over 100 million CAPTCHAs every day,[8]on sites such asFacebook, TicketMaster, Twitter,4chan,CNN.com,StumbleUpon,[9]Craigslist(since June 2008),[10]and the U.S. National Telecommunications and Information Administration'sdigital TV converter boxcoupon program website (as part of theUS DTV transition).[11] In 2014, Google pivoted the service away from its original concept, with a focus on reducing the amount of user interaction needed to verify a user, and only presenting human recognition challenges (such as identifying images in a set that satisfy a specific prompt) if behavioral analysis suspects that the user may be a bot. In October 2023, it was found that OpenAI'sGPT-4chatbot could solve CAPTCHAs.[12]The service has been criticized for lack of security and accessibility while collecting user data, with a 2023 study estimating the collective cost of human time spent solving CAPTCHAs as $6.1 billion in wages.[13] Distributed Proofreaderswas the first project to volunteer its time to decipher scanned text that could not be read byoptical character recognition(OCR) programs. It works withProject Gutenbergto digitizepublic domainmaterial and uses methods quite different from reCAPTCHA. The reCAPTCHA program originated with Guatemalancomputer scientistLuis von Ahn,[14]and was aided by aMacArthur Fellowship. An early CAPTCHA developer, he realized "he had unwittingly created a system that was frittering away, in ten-second increments, millions of hours of a most precious resource: human brain cycles".[15] Scanned text is subjected to analysis by two different OCRs. Any word that is deciphered differently by the two OCR programs or that is not in an English dictionary is marked as "suspicious" and converted into a CAPTCHA. The suspicious word is displayed, out of context, sometimes along with a control word already known. If the human types the control word correctly, then the response to the questionable word is accepted as probably valid. If enough users were to correctly type the control word, but incorrectly type the second word which OCR had failed to recognize, then the digital version of documents could end up containing the incorrect word. 
The identification performed by each OCR program is given a value of 0.5 points, and each interpretation by a human is given a full point. Once a given identification hits 2.5 points, the word is considered valid. Those words that are consistently given a single identity by human judges are later recycled as control words.[16]If the first three guesses match each other but do not match either of the OCRs, they are considered a correct answer, and the word becomes a control word.[17]When six users reject a word before any correct spelling is chosen, the word is discarded as unreadable.[17] The original reCAPTCHA method was designed to show the questionable words separately, as out-of-context correction, rather than in use, such as within a phrase of five words from the original document.[18]Also, the control word might mislead the context for the second word, such as a request of "/metal/ /fife/" being entered as "metalfile" due to the logical connection of filing with a metal tool being considered more common than the musical instrument "fife".[citation needed] In 2012, reCAPTCHA began using photographs taken fromGoogle Street Viewproject, in addition to scanned words.[19]It will ask the user to identify images of crosswalks, street lights, and other objects. It has been hypothesized that the data is used byWaymo(a Google subsidiary) to train autonomous vehicles, though an unnamed representative has denied this, claiming the data was only being used to improve Google Maps as of mid-2021.[20] Google charges for the use of reCAPTCHA on websites that make over a million reCAPTCHA queries a month.[21] reCAPTCHA v1 was declaredend-of-lifeand shut down on March 31, 2018.[22] In 2013, reCAPTCHA began implementingbehavioral analysisof the browser's interactions to predict whether the user was a human or a bot. The following year, Google began to deploy a new reCAPTCHA API, featuring the "no CAPTCHA reCAPTCHA"—where users deemed to be of low risk only need to click a singlecheckboxto verify their identity. A CAPTCHA may still be presented if the system is uncertain of the user's risk; Google also introduced a new type of CAPTCHA challenge designed to be more accessible to mobile users, where the user must select images matching a specific prompt from a grid.[2][23] In 2017, Google introduced a new "invisible" reCAPTCHA, where verification occurs in the background, and no challenges are displayed at all if the user is deemed to be of low risk.[24][25][26]According to former Google "click fraudczar"Shuman Ghosemajumder, this capability "creates a new sort of challenge that very advanced bots can still get around, but introduces a lot less friction to the legitimate human."[26] The reCAPTCHA tests are displayed from the central site of the reCAPTCHA project, which supplies the words to be deciphered. This is done through aJavaScriptAPIwith the server making a callback to reCAPTCHA after the request has been submitted. The reCAPTCHA project provides libraries for various programming languages and applications to make this process easier. reCAPTCHA is a free-of-charge service provided to websites for assistance with the decipherment,[27]but the reCAPTCHA software is notopen-source.[28] Also, reCAPTCHA offers plugins for several web-application platforms includingASP.NET,Ruby, andPHP, to ease the implementation of the service.[29] The main purpose of aCAPTCHAsystem is to block spambots while allowing human users. 
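The scoring scheme described earlier in this section (0.5 points per OCR interpretation, 1 point per agreeing human answer, acceptance at 2.5 points, and discarding after six rejections) can be written out as a short toy model. The function below is purely illustrative and is not Google's implementation; rejected readings are represented by None.

```python
def classify_word(ocr_readings, human_answers):
    """ocr_readings: strings from the two OCR programs.
    human_answers: strings typed by users, or None when a user rejects the word."""
    scores = {}
    for reading in ocr_readings:                      # each OCR interpretation: 0.5 points
        scores[reading] = scores.get(reading, 0.0) + 0.5
    rejections = 0
    for answer in human_answers:
        if answer is None:                            # user marked the word unreadable
            rejections += 1
            if rejections >= 6:
                return None                           # discarded as unreadable
            continue
        scores[answer] = scores.get(answer, 0.0) + 1.0   # each human answer: 1 point
        if scores[answer] >= 2.5:
            return answer                             # accepted as the valid reading
    return None                                       # not yet decided

# Two OCR programs disagree; three users then agree on a third reading.
print(classify_word(["morn", "mom"], ["morning", "morning", "morning"]))   # -> morning
```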
On December 14, 2009, Jonathan Wilkins released a paper describing weaknesses in reCAPTCHA that allowed bots to achieve a solve rate of 18%.[31][32][33] On August 1, 2010, Chad Houck gave a presentation to theDEF CON18 Hacking Conference detailing a method to reverse the distortion added to images which allowed a computer program to determine a valid response 10% of the time.[34][35]The reCAPTCHA system was modified on July 21, 2010, before Houck was to speak on his method. Houck modified his method to what he described as an "easier" CAPTCHA to determine a valid response 31.8% of the time. Houck also mentioned security defenses in the system, including a high-security lockout if an invalid response is given 32 times in a row.[36] On May 26, 2012, Adam, C-P, and Jeffball of DC949 gave a presentation at the LayerOne hacker conference detailing how they were able to achieve an automated solution with an accuracy rate of 99.1%.[37]Their tactic was to use techniques from machine learning, a subfield of artificial intelligence, to analyze the audio version of reCAPTCHA which is available for the visually impaired. Google released a new version of reCAPTCHA just hours before their talk, making major changes to both the audio and visual versions of their service. In this release, the audio version was increased in length from 8 seconds to 30 seconds and is much more difficult to understand, both for humans as well as bots. In response to this update and the following one, the members of DC949 released two more versions of Stiltwalker which beat reCAPTCHA with an accuracy of 60.95% and 59.4% respectively. After each successive break, Google updated reCAPTCHA within a few days. According to DC949, they often reverted to features that had been previously hacked. On June 27, 2012, Claudia Cruz, Fernando Uceda, and Leobardo Reyes published a paper showing a system running on reCAPTCHA images with an accuracy of 82%.[38]The authors have not said if their system can solve recent reCAPTCHA images, although they claim their work to beintelligent OCRand robust to some, if not all changes in the image database. In an August 2012 presentation given at BsidesLV 2012, DC949 called the latest version "unfathomably impossible for humans"—they were not able to solve them manually either.[37]The web accessibility organization WebAIM reported in May 2012, "Over 90% of respondents [screen reader users] find CAPTCHA to be very or somewhat difficult".[39] The original iteration of reCAPTCHA was criticized as being a source ofunpaid workto assist in transcribing efforts.[40] Google profits from reCAPTCHA users as free workers to improve its AI research.[41] A 13-month study published in 2023, "Dazed & Confused: A Large-Scale Real-World User Study of reCAPTCHAv2," found that reCAPTCHA provides little security against bots and is primarily a tool to track user data, and has cost society an estimated 819 million hours of unpaid human labor.[42][13] The current iteration of the system has been criticized for its reliance ontracking cookiesand promotion ofvendor lock-inwith Google services; administrators are encouraged to include reCAPTCHA tracking code on all pages of their website to analyze the behavior and "risk" of users, which determines the level of friction presented when a reCAPTCHA prompt is used.[43]Google stated in itsprivacy policythat user data collected in this manner is not used for personalized advertising. 
It was also discovered that the system favors those who have an active Google account login, and assigns a higher risk to those using anonymizing proxies and VPN services.[24] Concerns were raised regarding privacy when Google announced reCAPTCHA v3.0, as it allows Google to track users on non-Google websites.[24] In April 2020, Cloudflare switched from reCAPTCHA to hCaptcha, citing privacy concerns over Google's potential use of the data it collects through reCAPTCHA for targeted advertising[44] and a desire to cut operating costs, since a considerable portion of Cloudflare's customers are non-paying. In response, Google told PC Magazine that the data from reCAPTCHA is never used for personalized advertising purposes.[21] Google's help center states that reCAPTCHA is not supported for the deafblind community,[45] effectively locking such users out of all pages that use the service. However, reCAPTCHA does currently have the longest list of accessibility considerations of any CAPTCHA service.[46] In one variant of the CAPTCHA challenge, images are not incrementally highlighted but fade out when clicked and are replaced with a new image fading in, resembling whack-a-mole. Criticism has been aimed at the long time the images take to fade out and in.[47] reCAPTCHA also created the Mailhide project, which protects email addresses on web pages from being harvested by spammers.[48] By default, the email address was converted into a format that did not allow a crawler to see the full email address; for example, "mailme@example.com" would have been converted to "mai...@example.com". The visitor would then click on the "..." and solve the CAPTCHA to obtain the full email address. One could also edit the pop-up code so that none of the addresses were visible. Mailhide was discontinued in 2018 because it relied on reCAPTCHA v1.[49]
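As an illustration of the address-hiding format described above, the short sketch below reproduces the truncated form a crawler would see before the CAPTCHA is solved. It is a toy reconstruction of the general idea, not Mailhide's actual code; keeping exactly three characters of the local part is an assumption based on the "mai...@example.com" example.

```python
def mailhide_style_display(address: str, visible_chars: int = 3) -> str:
    """Return the truncated, crawler-visible form of an email address.

    Only the first few characters of the local part and the domain are shown;
    the hidden middle portion is only revealed after the CAPTCHA is solved.
    """
    local, _, domain = address.partition("@")
    return f"{local[:visible_chars]}...@{domain}"

print(mailhide_style_display("mailme@example.com"))  # mai...@example.com
```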
https://en.wikipedia.org/wiki/ReCAPTCHA
TheInformation Age[a]is ahistorical periodthat began in the mid-20th century. It is characterized by a rapid shift from traditional industries, as established during theIndustrial Revolution, to an economy centered oninformation technology.[2]The onset of the Information Age has been linked to the development of thetransistorin 1947.[2]This technological advance has had a significant impact on the way information is processed and transmitted. According to theUnited Nations Public Administration Network, the Information Age was formed by capitalizing oncomputer miniaturizationadvances,[3]which led tomodernizedinformation systems and internet communications as the driving force ofsocial evolution.[4] There is ongoing debate concerning whether the Third Industrial Revolution has already ended, and if theFourth Industrial Revolutionhas already begun due to the recent breakthroughs in areas such asartificial intelligenceandbiotechnology.[5]This next transition has been theorized to harken the advent of theImagination Age, theInternet of things(IoT), and rapid advancements inmachine learning. The digital revolution converted technology from analog format to digital format. By doing this, it became possible to make copies that were identical to the original. In digital communications, for example, repeating hardware was able to amplify thedigital signaland pass it on with no loss of information in the signal. Of equal importance to the revolution was the ability to easily move the digital information between media, and to access or distribute it remotely. One turning point of the revolution was the change from analog to digitally recorded music.[6]During the 1980s the digital format of optical compact discs gradually replacedanalogformats, such asvinyl recordsandcassette tapes, as the popular medium of choice.[7] Humans have manufactured tools for counting and calculating since ancient times, such as theabacus,astrolabe,equatorium, and mechanical timekeeping devices. More complicated devices started appearing in the 1600s, including theslide ruleandmechanical calculators. By the early 1800s, theIndustrial Revolutionhad produced mass-market calculators like thearithmometerand the enabling technology of thepunch card.Charles Babbageproposed a mechanical general-purpose computer called theAnalytical Engine, but it was never successfully built, and was largely forgotten by the 20th century and unknown to most of the inventors of modern computers. TheSecond Industrial Revolutionin the last quarter of the 19th century developed useful electrical circuits and thetelegraph. In the 1880s,Herman Hollerithdeveloped electromechanical tabulating and calculating devices using punch cards andunit record equipment, which became widespread in business and government. Meanwhile, variousanalog computersystems used electrical, mechanical, or hydraulic systems to model problems and calculate answers. These included an 1872tide-predicting machine,differential analysers,perpetual calendarmachines, theDeltarfor water management in the Netherlands,network analyzersfor electrical systems, and various machines for aiming military guns and bombs. The construction of problem-specific analog computers continued in the late 1940s and beyond, withFERMIACfor neutron transport,Project Cyclonefor various military applications, and thePhillips Machinefor economic modeling. 
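The lossless-regeneration property mentioned near the start of this section can be illustrated with a toy model: because a digital repeater only has to decide between two levels, modest noise picked up on each hop is removed at the repeater rather than amplified along with the signal. The sketch below is purely illustrative and assumes the noise on any single hop is small enough that no bit is flipped.

```python
import random

def add_noise(levels, spread=0.3):
    """Simulate attenuation and noise on one link (spread is an assumed, illustrative value)."""
    return [v + random.uniform(-spread, spread) for v in levels]

def regenerate(levels):
    """A digital repeater: threshold each sample back to a clean 0/1 level."""
    return [1.0 if v > 0.5 else 0.0 for v in levels]

bits = [1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0]
signal = bits
for _ in range(10):                 # ten noisy hops, each followed by regeneration
    signal = regenerate(add_noise(signal))

print(signal == bits)               # True: the bit pattern arrives unchanged
```

An analog repeater, by contrast, amplifies the accumulated noise along with the signal at every hop, which is why identical copies are much harder to preserve in an analog chain.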
Building on the complexity of the Z1 and Z2, German inventor Konrad Zuse used electromechanical systems to complete in 1941 the Z3, the world's first working programmable, fully automatic digital computer. Also during World War II, Allied engineers constructed electromechanical bombes to break German Enigma machine encoding. The base-10 electromechanical Harvard Mark I was completed in 1944, and was to some degree improved with inspiration from Charles Babbage's designs. In 1947, the first working transistor, the germanium-based point-contact transistor, was invented by John Bardeen and Walter Houser Brattain while working under William Shockley at Bell Labs.[8] This led the way to more advanced digital computers. From the late 1940s, universities, the military, and businesses developed computer systems to digitally replicate and automate previously manual mathematical calculations, with the LEO being the first commercially available general-purpose computer. Digital communication became economical for widespread adoption after the invention of the personal computer in the 1970s. Claude Shannon, a Bell Labs mathematician, is credited with laying the foundations of digitalization in his pioneering 1948 article, A Mathematical Theory of Communication.[9] In 1948, Bardeen and Brattain patented an insulated-gate transistor (IGFET) with an inversion layer; their concept forms the basis of CMOS and DRAM technology today.[10] In 1957 at Bell Labs, Frosch and Derick were able to manufacture planar silicon dioxide transistors;[11] later, a team at Bell Labs demonstrated a working MOSFET.[12] The first integrated circuit milestone was achieved by Jack Kilby in 1958.[13] Other important technological developments included the invention of the monolithic integrated circuit chip by Robert Noyce at Fairchild Semiconductor in 1959,[14] made possible by the planar process developed by Jean Hoerni.[15] In 1963, complementary MOS (CMOS) was developed by Chih-Tang Sah and Frank Wanlass at Fairchild Semiconductor.[16] The self-aligned gate transistor, which further facilitated mass production, was invented in 1966 by Robert Bower at Hughes Aircraft[17][18] and independently by Robert Kerwin, Donald Klein and John Sarace at Bell Labs.[19] In 1962 AT&T deployed the T-carrier for long-haul pulse-code modulation (PCM) digital voice transmission. The T1 format carried 24 pulse-code modulated, time-division multiplexed speech signals, each encoded as a 64 kbit/s stream, plus 8 kbit/s of framing information that facilitated synchronization and demultiplexing at the receiver (a quick arithmetic check of these rates appears below). Over the subsequent decades the digitization of voice became the norm for all but the last mile, where analog transmission continued to be the norm right into the late 1990s. Following the development of MOS integrated circuit chips in the early 1960s, MOS chips reached higher transistor density and lower manufacturing costs than bipolar integrated circuits by 1964. MOS chips further increased in complexity at a rate predicted by Moore's law, leading to large-scale integration (LSI) with hundreds of transistors on a single MOS chip by the late 1960s.
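A quick arithmetic check of the T-carrier figures quoted above: 24 voice channels at 64 kbit/s each, plus 8 kbit/s of framing, gives the familiar 1.544 Mbit/s T1 line rate.

```python
channels = 24           # PCM voice channels multiplexed onto one T1
rate_per_channel = 64   # kbit/s per voice channel
framing = 8             # kbit/s of framing/synchronization overhead

payload = channels * rate_per_channel   # 1536 kbit/s of voice
line_rate = payload + framing           # 1544 kbit/s = 1.544 Mbit/s
print(payload, line_rate)               # 1536 1544
```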
The application of MOS LSI chips tocomputingwas the basis for the firstmicroprocessors, as engineers began recognizing that a completecomputer processorcould be contained on a single MOS LSI chip.[20]In 1968, Fairchild engineerFederico Fagginimproved MOS technology with his development of thesilicon-gateMOS chip, which he later used to develop theIntel 4004, the first single-chip microprocessor.[21]It was released byIntelin 1971, and laid the foundations for themicrocomputer revolutionthat began in the 1970s. MOS technology also led to the development of semiconductorimage sensorssuitable fordigital cameras.[22]The first such image sensor was thecharge-coupled device, developed byWillard S. BoyleandGeorge E. Smithat Bell Labs in 1969,[23]based onMOS capacitortechnology.[22] The public was first introduced to the concepts that led to the Internet when a message was sent over theARPANETin 1969.Packet switchednetworks such as ARPANET,Mark I,CYCLADES,Merit Network,Tymnet, andTelenet, were developed in the late 1960s and early 1970s using a variety ofprotocols. The ARPANET in particular led to the development of protocols forinternetworking, in which multiple separate networks could be joined into a network of networks. TheWhole Earthmovement of the 1960s advocated the use of new technology.[24] In the 1970s, thehome computerwas introduced,[25]time-sharing computers,[26]thevideo game console, the first coin-op video games,[27][28]and thegolden age of arcade video gamesbegan withSpace Invaders. As digital technology proliferated, and the switch from analog to digital record keeping became the new standard in business, a relatively new job description was popularized, thedata entry clerk. Culled from the ranks of secretaries and typists from earlier decades, the data entry clerk's job was to convert analog data (customer records, invoices, etc.) into digital data. In developed nations, computers achieved semi-ubiquity during the 1980s as they made their way into schools, homes, business, and industry.Automated teller machines,industrial robots,CGIin film and television,electronic music,bulletin board systems, and video games all fueled what became the zeitgeist of the 1980s. Millions of people purchased home computers, making household names of early personal computer manufacturers such asApple, Commodore, and Tandy. To this day the Commodore 64 is often cited as the best selling computer of all time, having sold 17 million units (by some accounts)[29]between 1982 and 1994. In 1984, the U.S. Census Bureau began collecting data on computer and Internet use in the United States; their first survey showed that 8.2% of all U.S. households owned a personal computer in 1984, and that households with children under the age of 18 were nearly twice as likely to own one at 15.3% (middle and upper middle class households were the most likely to own one, at 22.9%).[30]By 1989, 15% of all U.S. households owned a computer, and nearly 30% of households with children under the age of 18 owned one.[31]By the late 1980s, many businesses were dependent on computers and digital technology. Motorola created the first mobile phone,Motorola DynaTac, in 1983. However, this device used analog communication – digital cell phones were not sold commercially until 1991 when the2Gnetwork started to be opened in Finland to accommodate the unexpected demand for cell phones that was becoming apparent in the late 1980s. 
Compute!magazine predicted thatCD-ROMwould be the centerpiece of the revolution, with multiple household devices reading the discs.[32] The first truedigital camerawas created in 1988, and the first were marketed in December 1989 in Japan and in 1990 in the United States.[33]By the early 2000s, digital cameras had eclipsed traditional film in popularity. Digital ink and paintwas also invented in the late 1980s. Disney's CAPS system (created 1988) was used for a scene in 1989'sThe Little Mermaidand for all their animation films between 1990'sThe Rescuers Down Underand 2004'sHome on the Range. Tim Berners-Leeinvented theWorld Wide Webin 1989.[34]The "Web 1.0 era" ended in 2005, coinciding with the development of further advanced technologies during the start of the 21st century.[35] The first public digitalHDTVbroadcast was of the1990 World Cupthat June; it was played in 10 theaters in Spain and Italy. However, HDTV did not become a standard until the mid-2000s outside Japan. TheWorld Wide Webbecame publicly accessible in 1991, which had been available only to government and universities.[36]In 1993Marc AndreessenandEric BinaintroducedMosaic, the first web browser capable of displaying inline images[37]and the basis for later browsers such as Netscape Navigator and Internet Explorer.Stanford Federal Credit Unionwas the firstfinancial institutionto offer online internet banking services to all of its members in October 1994.[38]In 1996OP Financial Group, also acooperative bank, became the second online bank in the world and the first in Europe.[39]The Internet expanded quickly, and by 1996, it was part ofmass cultureand many businesses listed websites in their ads.[citation needed]By 1999, almost every country had a connection, and nearly half ofAmericansand people in several other countries used the Internet on a regular basis.[citation needed]However throughout the 1990s, "getting online" entailed complicated configuration, anddial-upwas the only connection type affordable by individual users; the present day massInternet culturewas not possible. In 1989, about 15% of all households in the United States owned a personal computer.[40]For households with children, nearly 30% owned a computer in 1989, and in 2000, 65% owned one. Cell phonesbecame as ubiquitous as computers by the early 2000s, with movie theaters beginning to show ads telling people to silence their phones. They also becamemuch more advancedthan phones of the 1990s, most of which only took calls or at most allowed for the playing of simple games. Text messaging became widely used in the late 1990s worldwide, except for in the United States of America where text messaging didn't become commonplace till the early 2000s.[citation needed] The digital revolution became truly global in this time as well – after revolutionizing society in thedeveloped worldin the 1990s, the digital revolution spread to the masses in thedeveloping worldin the 2000s. By 2000, a majority of U.S. households had at least one personal computer andinternet accessthe following year.[41]In 2002, a majority of U.S. survey respondents reported having a mobile phone.[42] In late 2005 the population of the Internet reached 1 billion,[43]and 3 billion people worldwide used cell phones by the end of the decade.HDTVbecame the standard television broadcasting format in many countries by the end of the decade. In September and December 2006 respectively,Luxembourgand theNetherlandsbecame the first countries to completelytransition from analog to digital television. 
In September 2007, a majority of U.S. survey respondents reported having broadband internet at home.[44] According to estimates from Nielsen Media Research, approximately 45.7 million U.S. households in 2006 (about 40 percent of roughly 114.4 million) owned a dedicated home video game console,[45][46] and by 2015, 51 percent of U.S. households owned a dedicated home video game console according to an Entertainment Software Association annual industry report.[47][48] By 2012, over 2 billion people used the Internet, twice the number using it in 2007. Cloud computing had entered the mainstream by the early 2010s. In January 2013, a majority of U.S. survey respondents reported owning a smartphone.[49] By 2016, half of the world's population was connected,[50] and as of 2020, that number had risen to 67%.[51] In the late 1980s, less than 1% of the world's technologically stored information was in digital format; the share was 94% in 2007 and more than 99% by 2014.[52] It is estimated that the world's capacity to store information increased from 2.6 (optimally compressed) exabytes in 1986 to some 5,000 exabytes in 2014 (5 zettabytes).[52][53] Library expansion was calculated in 1945 by Fremont Rider to double in capacity every 16 years if sufficient space were made available.[61] He advocated replacing bulky, decaying printed works with miniaturized microform analog photographs, which could be duplicated on demand for library patrons and other institutions. Rider did not foresee, however, the digital technology that would follow decades later to replace analog microform with digital imaging, storage, and transmission media, whereby vast increases in the rapidity of information growth would be made possible through automated, potentially lossless digital technologies. Accordingly, Moore's law, formulated around 1965, projected that the number of transistors in a dense integrated circuit doubles approximately every two years.[62][63] By the early 1980s, along with improvements in computing power, the proliferation of smaller and less expensive personal computers allowed for immediate access to information and the ability to share and store it. Connectivity between computers within organizations enabled access to greater amounts of information.[citation needed] The world's technological capacity to store information grew from 2.6 (optimally compressed) exabytes (EB) in 1986 to 15.8 EB in 1993; over 54.5 EB in 2000; and to 295 (optimally compressed) EB in 2007.[52][65] This is the informational equivalent of less than one 730-megabyte (MB) CD-ROM per person in 1986 (539 MB per person); roughly four CD-ROMs per person in 1993; twelve CD-ROMs per person in the year 2000; and almost sixty-one CD-ROMs per person in 2007.[52] It is estimated that the world's capacity to store information reached 5 zettabytes in 2014,[53] the informational equivalent of 4,500 stacks of printed books from the earth to the sun.[citation needed] The amount of digital data stored appears to be growing approximately exponentially, reminiscent of Moore's law.
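The per-person CD-ROM equivalents quoted above follow from simple division: total (optimally compressed) storage divided by world population, then by the 730 MB capacity of a CD-ROM. The sketch below reproduces the 1986 and 2007 endpoints; the population figures are assumptions chosen for illustration (roughly 4.9 billion in 1986 and 6.6 billion in 2007), so the results only approximate the cited per-person numbers.

```python
EB = 10**18          # bytes in an exabyte
MB = 10**6           # bytes in a megabyte
CDROM = 730 * MB     # capacity of one CD-ROM, as used in the comparison above

# (year, optimally compressed storage in EB, assumed world population)
for year, storage_eb, population in [(1986, 2.6, 4.9e9), (2007, 295, 6.6e9)]:
    per_person = storage_eb * EB / population
    print(year, round(per_person / MB), "MB per person,",
          round(per_person / CDROM, 1), "CD-ROMs per person")
# 1986: ~530 MB per person (under one CD-ROM); 2007: ~45,000 MB per person (~61 CD-ROMs)
```

Between the two endpoints the per-person figure grows by roughly two orders of magnitude, consistent with the approximately exponential trend noted above.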
As such, Kryder's law holds that the amount of storage space available appears to be growing approximately exponentially.[66][67][68][63] The world's technological capacity to receive information through one-way broadcast networks was 432 exabytes of (optimally compressed) information in 1986; 715 (optimally compressed) exabytes in 1993; 1.2 (optimally compressed) zettabytes in 2000; and 1.9 zettabytes in 2007, the information equivalent of 174 newspapers per person per day.[52] The world's effective capacity to exchange information through two-way telecommunications networks was 281 petabytes of (optimally compressed) information in 1986; 471 petabytes in 1993; 2.2 (optimally compressed) exabytes in 2000; and 65 (optimally compressed) exabytes in 2007, the information equivalent of six newspapers per person per day.[52] In the 1990s, the spread of the Internet caused a sudden leap in access to and ability to share information in businesses and homes globally. A computer that cost $3000 in 1997 would cost $2000 two years later and $1000 the following year, due to the rapid advancement of technology.[citation needed] The world's technological capacity to compute information with human-guided general-purpose computers grew from 3.0 × 10^8 MIPS in 1986, to 4.4 × 10^9 MIPS in 1993, to 2.9 × 10^11 MIPS in 2000, to 6.4 × 10^12 MIPS in 2007.[52] An article featured in the journal Trends in Ecology and Evolution in 2016 reported that:[53] Digital technology has vastly exceeded the cognitive capacity of any single human being and has done so a decade earlier than predicted. In terms of capacity, there are two measures of importance: the number of operations a system can perform and the amount of information that can be stored. The number of synaptic operations per second in a human brain has been estimated to lie between 10^15 and 10^17. While this number is impressive, even in 2007 humanity's general-purpose computers were capable of performing well over 10^18 instructions per second. Estimates suggest that the storage capacity of an individual human brain is about 10^12 bytes. On a per capita basis, this is matched by current digital storage (5 × 10^21 bytes per 7.2 × 10^9 people). Genetic code may also be considered part of the information revolution. Now that sequencing has been computerized, genomes can be rendered and manipulated as data. This started with DNA sequencing, invented by Walter Gilbert and Allan Maxam[69] in 1976–1977 and by Frederick Sanger in 1977, grew steadily with the Human Genome Project (initially conceived by Gilbert), and finally reached practical applications of sequencing, such as gene testing, after the discovery by Myriad Genetics of the BRCA1 breast cancer gene mutation. Sequence data in GenBank has grown from 606 genome sequences registered in December 1982 to 231 million genomes in August 2021. An additional 13 trillion incomplete sequences are registered in the Whole Genome Shotgun submission database as of August 2021. The information contained in these registered sequences has doubled every 18 months.[70][original research?] During rare times in human history, there have been periods of innovation that have transformed human life. The Neolithic Age, the Scientific Age and the Industrial Age all, ultimately, induced discontinuous and irreversible changes in the economic, social and cultural elements of the daily life of most people.
Traditionally, these epochs have taken place over hundreds, or in the case of the Neolithic Revolution, thousands of years, whereas the Information Age swept to all parts of the globe in just a few years, as a result of the rapidly advancing speed of information exchange. Between 7,000 and 10,000 years ago, during the Neolithic period, humans began to domesticate animals, farm grains, and replace stone tools with ones made of metal. These innovations allowed nomadic hunter-gatherers to settle down. Villages formed along the Yangtze River in China in 6,500 B.C., in the Nile River region of Africa, and in Mesopotamia (Iraq) in 6,000 B.C. Cities emerged between 6,000 B.C. and 3,500 B.C. The development of written communication (cuneiform in Sumeria and hieroglyphs in Egypt in 3,500 B.C., writing in Egypt in 2,560 B.C., and in Minoa and China around 1,450 B.C.) enabled ideas to be preserved for extended periods and to spread extensively. In all, Neolithic developments, augmented by writing as an information tool, laid the groundwork for the advent of civilization. The Scientific Age began in the period between Copernicus's 1543 publication arguing that the planets orbit the Sun and Newton's publication of the laws of motion and gravity in Principia in 1687. This age of discovery continued through the 18th century, accelerated by the widespread use of the movable-type printing press introduced by Johannes Gutenberg. The Industrial Age began in Great Britain in 1760 and continued into the mid-19th century. The invention of machines such as the mechanical textile weaver by Edmund Cartwright, the rotating-shaft steam engine by James Watt and the cotton gin by Eli Whitney, along with processes for mass manufacturing, came to serve the needs of a growing global population. The Industrial Age harnessed steam and waterpower to reduce the dependence on animal and human physical labor as the primary means of production. Thus, the core of the Industrial Revolution was the generation and distribution of energy from coal and water to produce steam and, later in the 20th century, electricity. The Information Age also requires electricity to power the global networks of computers that process and store data. However, what dramatically accelerated the Information Age's adoption, compared with previous ages, was the speed with which knowledge could be transferred and could pervade the entire human family in a few short decades. This acceleration came about with the adoption of a new form of power. Beginning in 1972, engineers devised ways to harness light to convey data through fiber optic cable. Today, light-based optical networking systems at the heart of telecom networks and the Internet span the globe and carry most of the information traffic to and from users and data storage systems. There are different conceptualizations of the Information Age. Some focus on the evolution of information over the ages, distinguishing between the Primary Information Age and the Secondary Information Age. Information in the Primary Information Age was handled by newspapers, radio and television. The Secondary Information Age was developed by the Internet, satellite television and mobile phones. The Tertiary Information Age emerged as media of the Primary Information Age became interconnected with media of the Secondary Information Age, as presently experienced.[71][72][73][74][75][76] Others classify it in terms of the well-established Schumpeterian long waves or Kondratiev waves. Here authors distinguish three different long-term metaparadigms, each with different long waves.
The first focused on the transformation of material, including stone, bronze, and iron. The second, often referred to as the Industrial Revolution, was dedicated to the transformation of energy, including water, steam, electric, and combustion power. Finally, the most recent metaparadigm aims at transforming information. It started out with the proliferation of communication and stored data and has now entered the age of algorithms, which aims at creating automated processes to convert existing information into actionable knowledge.[77] The main feature of the information revolution is the growing economic, social and technological role of information.[78] Information-related activities did not originate with the Information Revolution. They existed, in one form or another, in all human societies, and eventually developed into institutions, such as the Platonic Academy, Aristotle's Peripatetic school in the Lyceum, the Musaeum and the Library of Alexandria, or the schools of Babylonian astronomy. The Agricultural Revolution and the Industrial Revolution came about when new informational inputs were produced by individual innovators, or by scientific and technical institutions. During the Information Revolution all these activities are experiencing continuous growth, while other information-oriented activities are emerging. Information is the central theme of several new sciences, which emerged in the 1940s, including Shannon's (1949) Information Theory[79] and Wiener's (1948) Cybernetics. Wiener stated: "information is information, not matter or energy". This aphorism suggests that information should be considered, along with matter and energy, as the third constituent part of the Universe; information is carried by matter or by energy.[80] By the 1990s some writers believed that changes implied by the Information Revolution would lead not only to a fiscal crisis for governments but also to the disintegration of all "large structures".[81] The term information revolution may relate to, or contrast with, such widely used terms as Industrial Revolution and Agricultural Revolution. Note, however, that some authors prefer a mentalist rather than a materialist paradigm. Several fundamental aspects of the theory of the information revolution have been identified.[82][83] From a different perspective, Irving E. Fang (1997) identified six 'Information Revolutions': writing, printing, mass media, entertainment, the 'tool shed' (which we call 'home' now), and the information highway. In this work the term 'information revolution' is used in a narrow sense, to describe trends in communication media.[87] Porat (1976) measured the information sector in the US using input-output analysis; the OECD has included statistics on the information sector in the economic reports of its member countries.[88] Veneris (1984, 1990) explored the theoretical, economic and regional aspects of the informational revolution and developed a systems dynamics simulation computer model.[82][83] These works can be seen as following the path that originated with the work of Fritz Machlup, who in his 1962 book "The Production and Distribution of Knowledge in the United States" claimed that the "knowledge industry represented 29% of the US gross national product", which he saw as evidence that the Information Age had begun. He defined knowledge as a commodity and attempted to measure the magnitude of the production and distribution of this commodity within a modern economy. Machlup divided information use into three classes: instrumental, intellectual, and pastime knowledge.
He also identified five types of knowledge: practical knowledge; intellectual knowledge, that is, general culture and the satisfying of intellectual curiosity; pastime knowledge, that is, knowledge satisfying non-intellectual curiosity or the desire for light entertainment and emotional stimulation; spiritual or religious knowledge; and unwanted knowledge, accidentally acquired and aimlessly retained.[89] More recent estimates have since updated these figures.[52] Eventually, information and communication technology (ICT)—i.e. computers, computerized machinery, fiber optics, communication satellites, the Internet, and other ICT tools—became a significant part of the world economy, as the development of optical networking and microcomputers greatly changed many businesses and industries.[91][92] Nicholas Negroponte captured the essence of these changes in his 1995 book, Being Digital, in which he discusses the similarities and differences between products made of atoms and products made of bits.[93] The Information Age has affected the workforce in several ways, such as compelling workers to compete in a global job market. One of the most evident concerns is the replacement of human labor by computers that can do the same jobs faster and more effectively, creating a situation in which individuals who perform tasks that can easily be automated are forced to find employment where their labor is not as disposable.[94] This especially creates issues for those in industrial cities, where solutions typically involve lowering working time, which is often highly resisted. Thus, individuals who lose their jobs may be pressed to move up into more indispensable professions (e.g. engineers, doctors, lawyers, teachers, professors, scientists, executives, journalists, consultants) whose members are able to compete successfully in the world market and receive (relatively) high wages.[citation needed] Along with automation, jobs traditionally associated with the middle class (e.g. assembly line, data processing, management, and supervision) have also begun to disappear as a result of outsourcing.[95] Unable to compete with those in developing countries, production and service workers in post-industrial (i.e. developed) societies either lose their jobs through outsourcing, accept wage cuts, or settle for low-skill, low-wage service jobs.[95] In the past, the economic fate of individuals was tied to that of their nation. For example, workers in the United States were once well paid in comparison to those in other countries. With the advent of the Information Age and improvements in communication, this is no longer the case, as workers must now compete in a global job market, in which wages are less dependent on the success or failure of individual economies.[95] In effectuating a globalized workforce, the internet has likewise allowed for increased opportunity in developing countries, making it possible for workers in such places to provide in-person services, therefore competing directly with their counterparts in other nations. This competitive advantage translates into increased opportunities and higher wages.[96] The Information Age has affected the workforce in that automation and computerization have resulted in higher productivity coupled with net job loss in manufacturing.
In the United States, for example, from January 1972 to August 2010, the number of people employed in manufacturing jobs fell from 17,500,000 to 11,500,000 while manufacturing value rose 270%.[97] Although it initially appeared that job loss in the industrial sector might be partially offset by the rapid growth of jobs in information technology, the recession of March 2001 foreshadowed a sharp drop in the number of jobs in the sector. This pattern of decrease in jobs continued until 2003,[98] and data has shown that, overall, technology creates more jobs than it destroys even in the short run.[99] Industry has become more information-intensive and less labor- and capital-intensive. This has important implications for the workforce: workers are becoming increasingly productive even as the value of their labor decreases. For the system of capitalism itself, as the value of labor decreases, the value of capital increases. In the classical model, investments in human and financial capital are important predictors of the performance of a new venture.[100] However, as demonstrated by Mark Zuckerberg and Facebook, it now seems possible for a group of relatively inexperienced people with limited capital to succeed on a large scale.[101] The Information Age was enabled by technology developed in the Digital Revolution, which was itself enabled by building on the developments of the Technological Revolution. The onset of the Information Age can be associated with the development of transistor technology.[2] The concept of a field-effect transistor was first theorized by Julius Edgar Lilienfeld in 1925.[102] The first practical transistor was the point-contact transistor, invented by the engineers Walter Houser Brattain and John Bardeen while working for William Shockley at Bell Labs in 1947. This was a breakthrough that laid the foundations for modern technology.[2] Shockley's research team also invented the bipolar junction transistor in 1952.[103][102] The most widely used type of transistor is the metal–oxide–semiconductor field-effect transistor (MOSFET), invented by Mohamed M. Atalla and Dawon Kahng at Bell Labs in 1960.[104] The complementary MOS (CMOS) fabrication process was developed by Frank Wanlass and Chih-Tang Sah in 1963.[105] Before the advent of electronics, mechanical computers, like the Analytical Engine in 1837, were designed to provide routine mathematical calculation and simple decision-making capabilities. Military needs during World War II drove development of the first electronic computers, based on vacuum tubes, such as the Atanasoff–Berry Computer, the Colossus computer, and ENIAC, as well as the electromechanical Z3. The invention of the transistor enabled the era of mainframe computers (1950s–1970s), typified by the IBM 360. These large, room-sized computers provided data calculation and manipulation that was much faster than humanly possible, but were expensive to buy and maintain, so were initially limited to a few scientific institutions, large corporations, and government agencies.
The germanium integrated circuit (IC) was invented by Jack Kilby at Texas Instruments in 1958.[106] The silicon integrated circuit was then invented in 1959 by Robert Noyce at Fairchild Semiconductor, using the planar process developed by Jean Hoerni, who was in turn building on Mohamed Atalla's silicon surface passivation method developed at Bell Labs in 1957.[107][108] Following the invention of the MOS transistor by Mohamed Atalla and Dawon Kahng at Bell Labs in 1959,[104] the MOS integrated circuit was developed by Fred Heiman and Steven Hofstein at RCA in 1962.[109] The silicon-gate MOS IC was later developed by Federico Faggin at Fairchild Semiconductor in 1968.[110] With the advent of the MOS transistor and the MOS IC, transistor technology rapidly improved, and the ratio of computing power to size increased dramatically, giving direct access to computers to ever smaller groups of people. The first commercial single-chip microprocessor launched in 1971, the Intel 4004, which was developed by Federico Faggin using his silicon-gate MOS IC technology, along with Marcian Hoff, Masatoshi Shima and Stan Mazor.[111][112] Along with electronic arcade machines and home video game consoles pioneered by Nolan Bushnell in the 1970s, the development of personal computers like the Commodore PET and Apple II (both in 1977) gave individuals access to computers. However, data sharing between individual computers was either non-existent or largely manual, at first using punched cards and magnetic tape, and later floppy disks. The first developments for storing data were initially based on photographs, starting with microphotography in 1851 and then microform in the 1920s, with the ability to store documents on film, making them much more compact. Early information theory and Hamming codes were developed about 1950, but awaited technical innovations in data transmission and storage to be put to full use. Magnetic-core memory was developed from the research of Frederick W. Viehe in 1947 and An Wang at Harvard University in 1949.[113][114] With the advent of the MOS transistor, MOS semiconductor memory was developed by John Schmidt at Fairchild Semiconductor in 1964.[115][116] In 1967, Dawon Kahng and Simon Sze at Bell Labs described how the floating gate of an MOS semiconductor device could be used as the cell of a reprogrammable ROM.[117] Following the invention of flash memory by Fujio Masuoka at Toshiba in 1980,[118][119] Toshiba commercialized NAND flash memory in 1987.[120][117] Copper wire cables transmitting digital data connected computer terminals and peripherals to mainframes, and special message-sharing systems leading to email were first developed in the 1960s. Independent computer-to-computer networking began with ARPANET in 1969. This expanded to become the Internet (a term coined in 1974). Access to the Internet improved with the invention of the World Wide Web in 1991. The capacity expansion from dense wave division multiplexing, optical amplification and optical networking in the mid-1990s led to record data transfer rates. By 2018, optical networks routinely delivered 30.4 terabits/s over a fiber optic pair, the data equivalent of 1.2 million simultaneous 4K HD video streams.[121] MOSFET scaling, the rapid miniaturization of MOSFETs at a rate predicted by Moore's law,[122] led to computers becoming smaller and more powerful, to the point where they could be carried.
During the 1980s–1990s, laptops were developed as a form of portable computer, andpersonal digital assistants(PDAs) could be used while standing or walking.Pagers, widely used by the 1980s, were largely replaced by mobile phones beginning in the late 1990s, providingmobile networkingfeatures to some computers. Now commonplace, this technology is extended todigital camerasand other wearable devices. Starting in the late 1990s,tabletsand thensmartphonescombined and extended these abilities of computing, mobility, and information sharing.Metal–oxide–semiconductor(MOS)image sensors, which first began appearing in the late 1960s, led to the transition from analog todigital imaging, and from analog to digital cameras, during the 1980s–1990s. The most common image sensors are thecharge-coupled device(CCD) sensor and theCMOS(complementary MOS)active-pixel sensor(CMOS sensor). Electronic paper, which has origins in the 1970s, allows digital information to appear as paper documents. By 1976, there were several firms racing to introduce the first truly successful commercial personal computers. Three machines, theApple II,Commodore PET 2001andTRS-80were all released in 1977,[123]becoming the most popular by late 1978.[124]Bytemagazine later referred to Commodore, Apple, and Tandy as the "1977 Trinity".[125]Also in 1977,Sord Computer Corporationreleased the Sord M200 Smart Home Computer in Japan.[126] Steve Wozniak(known as "Woz"), a regular visitor toHomebrew Computer Clubmeetings, designed the single-boardApple Icomputer and first demonstrated it there. With specifications in hand and an order for 100 machines at US$500 each fromthe Byte Shop, Woz and his friendSteve JobsfoundedApple Computer. About 200 of the machines sold before the company announced the Apple II as a complete computer. It had colorgraphics, a full QWERTY keyboard, and internal slots for expansion, which were mounted in a high quality streamlined plastic case. The monitor and I/O devices were sold separately. The original Apple IIoperating systemwas only the built-in BASIC interpreter contained in ROM.Apple DOSwas added to support the diskette drive; the last version was "Apple DOS 3.3". Its higher price and lack offloating pointBASIC, along with a lack of retail distribution sites, caused it to lag in sales behind the other Trinity machines until 1979, when it surpassed the PET. It was again pushed into 4th place whenAtari, Inc.introduced itsAtari 8-bit computers.[127] Despite slow initial sales, the lifetime of theApple IIwas about eight years longer than other machines, and so accumulated the highest total sales. By 1985, 2.1 million had sold and more than 4 million Apple II's were shipped by the end of its production in 1993.[128] Optical communicationplays a crucial role incommunication networks. Optical communication provides the transmission backbone for thetelecommunicationsandcomputer networksthat underlie the Internet, the foundation for theDigital Revolutionand Information Age. The two core technologies are the optical fiber and light amplification (theoptical amplifier). In 1953, Bram van Heel demonstrated image transmission through bundles ofoptical fiberswith a transparent cladding. The same year,Harold HopkinsandNarinder Singh KapanyatImperial Collegesucceeded in making image-transmitting bundles with over 10,000 optical fibers, and subsequently achieved image transmission through a 75 cm long bundle which combined several thousand fibers. 
Gordon Gould invented the optical amplifier and the laser, and also established the first optical telecommunications company, Optelecom, to design communication systems. The firm was a co-founder of Ciena Corp., the venture that popularized the optical amplifier with the introduction of the first dense wave division multiplexing system.[129] This massive-scale communication technology has emerged as the common basis of all telecommunications networks[130][failed verification] and, thus, a foundation of the Information Age.[131][132] Manuel Castells authored The Information Age: Economy, Society and Culture. He writes of our global interdependence and the new relationships between economy, state and society, what he calls "a new society-in-the-making." He writes: "It is in fact, quite the opposite: history is just beginning, if by history we understand the moment when, after millennia of a prehistoric battle with Nature, first to survive, then to conquer it, our species has reached the level of knowledge and social organization that will allow us to live in a predominantly social world. It is the beginning of a new existence, and indeed the beginning of a new age, The Information Age, marked by the autonomy of culture vis-à-vis the material basis of our existence."[133] Thomas Chatterton Williams wrote about the dangers of anti-intellectualism in the Information Age in a piece for The Atlantic. He argues that although access to information has never been greater, most information is irrelevant or insubstantial, and that the Information Age's emphasis on speed over expertise contributes to a "superficial culture in which even the elite will openly disparage as pointless our main repositories for the very best that has been thought."[134]
https://en.wikipedia.org/wiki/Information_Age
Athought disorder(TD) is a disturbance incognitionwhich affectslanguage, thought andcommunication.[1][2]Psychiatric and psychological glossaries in 2015 and 2017 identified thought disorders as encompassingpoverty of ideas,paralogia(a reasoning disorder characterized by expression of illogical or delusional thoughts),word salad, anddelusions—all disturbances of thought content and form. Two specific terms have been suggested—content thought disorder (CTD) and formal thought disorder (FTD). CTD has been defined as a thought disturbance characterized by multiple fragmented delusions, and the termthought disorderis often used to refer to an FTD:[3]a disruption of the form (or structure) of thought.[4]Also known as disorganized thinking, FTD results in disorganized speech and is recognized as a major feature ofschizophreniaand otherpsychoses[5][6](includingmood disorders,dementia,mania, andneurological diseases).[7][5][8]Disorganized speech leads to an inference of disorganized thought.[9]Thought disorders includederailment,[10]pressured speech,poverty of speech,tangentiality,verbigeration, andthought blocking.[8]One of the first known public presentations of thought disorders, or specifically OCD as it is known today, was in 1691. Bishop John Moore gave a speech before Queen Mary II, about "religious melancholy."[11] Formal thought disorder affects the form (rather than the content) of thought.[12]Unlike hallucinations and delusions, it is an observable, objective sign of psychosis.[12]FTD is a common core symptom of a psychotic disorder, and may be seen as a marker of severity and as an indicator of prognosis.[8][13]It reflects a cluster of cognitive, linguistic, and affective disturbances that have generated research interest in the fields of cognitive neuroscience, neurolinguistics, and psychiatry.[8] Eugen Bleuler, who namedschizophrenia, said that TD was its defining characteristic.[14]Disturbances of thinking and speech, such asclangingorecholalia, may also be present inTourette syndrome;[15]other symptoms may be found indelirium.[16]A clinical difference exists between these two groups. Patients with psychoses are less likely to show awareness or concern about disordered thinking, and those with other disorders are aware and concerned about not being able to think clearly.[17] Thought content is the subject of an individual's thoughts, or the types of ideas expressed by the individual.[18]Mental health professionals define normal thought content as the absence of significant abnormalities, distortions, or harmful thoughts.[19]Normal thought content aligns with reality, is appropriate to the situation, and does not cause significant distress or impair functioning.[19] A person's cultural background must be considered when assessing thought content. Abnormalities in thought content differ across cultures.[20]Specific types of abnormal thought content can be features of different psychiatric illnesses.[21] Examples of disordered thought content include: Thought process is a person's form, flow, and coherence of thinking.[23]This is how they use language and put ideas together. 
A normal thought process is logical, linear, meaningful, and goal-directed.[18] A logical, linear thought process is one that demonstrates rational connections between thoughts in a sequential way that allows others to follow.[18][23] Thought process is not what a person thinks; rather, it is how a person expresses their thoughts.[25] Formal thought disorder (FTD), also known as disorganized speech or disorganized thinking, is a disorder of a person's thought process in which they are unable to express their thoughts in a logical and linear fashion.[26] To be considered FTD, disorganized thinking must be severe enough that it impairs effective communication.[27] Disorganized speech is a core symptom of psychosis, and therefore can be a feature of any condition that has the potential to cause psychosis, including schizophrenia, mania, major depressive disorder, delirium, postpartum psychosis, major neurocognitive disorder, and substance-induced psychosis.[18] FTD reflects a cluster of cognitive, linguistic, and affective disturbances, and has generated research interest from the fields of cognitive neuroscience, neurolinguistics, and psychiatry.[8] It can be subdivided into clusters of positive and negative symptoms and objective (rather than subjective) symptoms.[13] On the scale of positive and negative symptoms, these symptoms have been grouped into positive formal thought disorder (posFTD) and negative formal thought disorder (negFTD).[13][12] Positive subtypes were pressure of speech, tangentiality, derailment, incoherence, and illogicality;[13] negative subtypes were poverty of speech and poverty of content.[12][13] The two groups were posited to be at either end of a spectrum of normal speech, but later studies showed them to be poorly correlated.[12] A comprehensive measure of FTD is the Thought and Language Disorder (TALD) Scale.[28] The Kiddie Formal Thought Disorder Rating Scale (K-FTDS) can be used to assess the presence of formal thought disorder in children.[29] Although it is very extensive and time-consuming to administer, its results are detailed and reliable.[30] Nancy Andreasen preferred to identify TDs as thought-language-communication disorders (TLC disorders).[31][32] Up to seven domains of FTD have been described on the Thought, Language, Communication (TLC) Scale, with most of the variance accounted for by two or three domains.[12] Some TLC disorders are more suggestive of severe disorder, and are listed with the first 11 items.[32] The DSM-5 categorizes FTD as "a psychotic symptom, manifested as bizarre speech and communication." FTD may include incoherence, peculiar words, disconnected ideas, or a lack of unprompted content expected from normal speech.[33] Clinical psychologists typically assess FTD by initiating an exploratory conversation with patients and observing the patient's verbal responses.[34] FTD is often used to establish a diagnosis of schizophrenia; in cross-sectional studies, 27 to 80 percent of patients with schizophrenia present with FTD. A hallmark feature of schizophrenia, it is also widespread amongst other psychiatric disorders; up to 60 percent of those with schizoaffective disorder and 53 percent of those with clinical depression demonstrate FTD, suggesting that it is not exclusive to schizophrenia. About six percent of healthy subjects exhibit a mild form of FTD.[35] The DSM-5-TR mentions that less severe FTD may occur during the initial (prodromal) and residual periods of schizophrenia.[27] The characteristics of FTD vary amongst disorders.
A number of studies indicate that FTD inmaniais marked by irrelevant intrusions and pronounced combinatory thinking, usually with a playfulness and flippancy absent from patients with schizophrenia.[36][37][38]The FTD present in patients with schizophrenia was characterized by disorganization,neologism, and fluid thinking, and confusion with word-finding difficulty.[38] There is limited data on thelongitudinalcourse of FTD.[39]The most comprehensive longitudinal study of FTD by 2023 found a distinction in the longitudinal course of thought-disorder symptoms between schizophrenia and other psychotic disorders. The study also found an association between pre-index assessments[clarification needed]of social, work and educational functioning and the longitudinal course of FTD.[40] Several theories have been developed to explain the causes of formal thought disorder. It has been proposed that FTD relates toneurocognitionviasemantic memory.[41]Semantic networkimpairment in people with schizophrenia—measured by the difference between fluency (e.g. the number of animals' names produced in 60 seconds) and phonological fluency (e.g. the number of words beginning with "F" produced in 60 seconds)—predicts the severity of formal thought disorder, suggesting that verbal information (throughsemantic priming) is unavailable.[41]Otherhypothesesincludeworking memorydeficit (being confused about what has already been said in a conversation) and attentional focus.[41] FTD in schizophrenia has been found to be associated with structural and functional abnormalities in the language network, where structural studies have found bilateralgrey matterdeficits; deficits in the bilateralinferior frontal gyrus, bilateralinferior parietal lobuleand bilateralsuperior temporal gyrusare FTDcorrelates.[35]Other studies did not find an association between FTD and structural aberrations of the language network, however, and regions not included in the language network have been associated with FTD.[35]Future research is needed to clarify whether there is an association with FTD in schizophrenia and neural abnormalities in the language network.[35] Transmitter systemswhich might cause FTD have also been investigated. Studies have found thatglutamatedysfunction, due to ararefactionof glutamatergicsynapsesin the superior temporal gyrus in patients with schizophrenia, is a major cause of positive FTD.[35] The heritability of FTD has been demonstrated in a number of family and twin studies.Imaging geneticsstudies, using a semantic verbal-fluency task performed by the participants duringfunctional MRIscanning, revealed thatalleleslinked to glutamatergic transmission contribute to functional aberrations in typical language-related brain areas.[35]FTD is not solelygenetically determined, however; environmental influences, such as allusive thinking in parents during childhood, and environmental risk factors for schizophrenia (including childhood abuse, migration, social isolation, andcannabisuse) also contribute to the pathophysiology of FTD.[42] The origins of FTD have been theorised from asocial-learningperspective. Singer and Wynne said that familial communication patterns play a key role in shaping the development of FTD; dysfunctional social interactions undermine a child's development of cohesive, stable mental representations of the world, increasing their risk of developing FTD.[43] Antipsychoticmedication is often used to treat FTD. 
Although the vast majority of studies of the efficacy of antipsychotic treatment do not report effects on syndromes or symptoms, six older studies report the effects of antipsychotic treatment on FTD.[44][45][46][47][48][49]These studies and clinical experience indicate that antipsychotics are often an effective treatment for patients with positive or negative FTD, but not all patients respond to them. Cognitive behavioural therapy(CBT) is another treatment for FTD, but its effectiveness has not been well-studied.[35]Largerandomised controlled trialsevaluating the effectiveness of CBT for treating psychosis often exclude individuals with severe FTD because it reduces thetherapeutic alliancerequired by the therapy.[50]However, provisional evidence suggests that FTD may not preclude the effectiveness of CBT.[50]Kircher and colleagues have suggested that the following methods should be used in CBT for patients with FTD:[35] Language abnormalities exist in the general population, and do not necessarily indicate a condition.[51]They can occur in schizophrenia and other disorders (such as mania or depression), or in anyone who may be tired or stressed.[1][52]To distinguish thought disorder, patterns of speech, severity of symptoms, their frequency, and any resulting functional impairment can be considered.[32] Symptoms of FTD includederailment,[10]pressured speech,poverty of speech,tangentiality, andthought blocking.[8]The most common forms of FTD observed are tangentiality and circumstantiality.[53]FTD is a hallmark feature ofschizophrenia, but is also associated with other conditions that can cause psychosis (includingmood disorders,dementia,mania, andneurological diseases).[4][7][52]Impairedattention, poormemory, and difficulty formulating abstractconceptsmay also reflect TD, and can be observed and assessed with mental-status tests such asserial sevensor memory tests.[54] Thirty symptoms (or features) of TD have been described, including:[55][12] Psychiatric and psychologicalglossariesin 2015 and 2017 definedthought disorder'as disturbed thinking orcognitionwhich affectscommunication,language, or thought content including poverty of ideas,neologisms, paralogia,word salad, anddelusions[7][87](disturbances of thought content and form), and suggested the more-specific terms content thought disorder (CTD) and formal thought disorder (FTD).[2]CTD was defined as a TD characterized by multiple fragmented delusions,[88][87]and FTD was defined as a disturbance in the form or structure of thinking.[89][90]The 2013DSM-5only used the term FTD, primarily as asynonymfor disorganized thinking and speech.[91]This contrasts with the 1992ICD-10(which only used the word "thought disorder", always accompanied with "delusion" and "hallucination")[92]and a 2002medical dictionarywhich generally defined thought disorders similarly to the psychiatric glossaries[93]and used the word in other entries as the ICD-10 did.[94] A 2017 psychiatric text describing thought disorder as a "disorganization syndrome" in the context of schizophrenia: "Thought disorder" here refers to disorganization of the form of thought and not content. An older use of the term "thought disorder" included the phenomena of delusions and sometimes hallucinations, but this is confusing and ignores the clear differences in the relationships between symptoms that have become apparent over the past 30 years. 
Delusions and hallucinations should be identified as psychotic symptoms, and thought disorder should be taken to mean formal thought disorders or a disorder of verbal cognition. The text said that somecliniciansuse the term "formal thought disorder" broadly, referring to abnormalities in thought form with psychotic cognitive signs or symptoms,[95]and studies of cognition and subsyndromes in schizophrenia may refer to FTD asconceptual disorganizationordisorganization factor.[82] Some disagree: Unfortunately, "thought disorder" is often involved rather loosely to refer to both FTD and delusional content. For the sake of clarity, the unqualified use of the phrase "thought disorder" should be discarded from psychiatric communication. Even the designation "formal thought disorder" covers too wide a territory. It should always be made clear whether one is referring to derailment or loose associations, flight of ideas, or circumstantiality. It was believed that TD occurred only in schizophrenia, but later findings indicate that it may occur in other psychiatric conditions (including mania) and in people without mental illness.[97]Not all people with schizophrenia have a TD; the condition is notspecificto the disease.[98] When defining thought-disorder subtypes and classifying them as positive or negative symptoms,Nancy Andreasenfound[98]that different subtypes of TD occur at different frequencies in those with mania, depression, and schizophrenia. People with mania have pressured speech as the most prominent symptom, and have rates of derailment, tangentiality, and incoherence as prominent as in those with schizophrenia. They are likelier to have pressured speech, distractibility, and circumstantiality.[98][99] People with schizophrenia have more negative TD, including poverty of speech and poverty of content of speech, but also have relatively high rates of some positive TD.[98]Derailment, loss of goal, poverty of content of speech, tangentiality and illogicality are particularly characteristic of schizophrenia.[100]People with depression have relatively-fewer TDs; the most prominent are poverty of speech, poverty of content of speech, and circumstantiality. Andreasen noted the diagnostic usefulness of dividing the symptoms into subtypes; negative TDs without full affective symptoms suggest schizophrenia.[98][99] She also cited the prognostic value of negative-positive-symptom divisions. In manic patients, most TDs resolve six months after evaluation; this suggests that TDs in mania, although as severe as in schizophrenia, tend to improve.[98]In people with schizophrenia, however, negative TDs remain after six months and sometimes worsen; positive TDs somewhat improve. A negative TD is a good predictor of some outcomes; patients with prominent negative TDs are worse in social functioning six months later.[98]More prominent negative symptoms generally suggest a worse outcome; however, some people may do well, respond to medication, and have normal brain function. Positive symptoms vary similarly.[101] A prominent TD at illness onset suggests a worse prognosis, including:[82] TD which is unresponsive to treatment predicts a worse illness course.[82]In schizophrenia, TD severity tends to be more stable than hallucinations and delusions. 
Prominent TDs are more unlikely to diminish in middle age, compared with positive symptoms.[82]Less-severe TD may occur during theprodromaland residual periods of schizophrenia.[102]Treatment for thought disorder may include psychotherapy, such as cognitive behavior therapy (CBT), and psychotropic medications.[103] TheDSM-5includes delusions, hallucinations, disorganized thought process (formal thought disorder), and disorganized or abnormal motor behavior (includingcatatonia) as key symptoms of psychosis.[6]Schizophrenia-spectrum disorders such as schizoaffective disorder and schizophreniform disorder typically consist of prominent hallucinations, delusions and FTD; the latter presents as severely disorganized, bizarre, and catatonic behavior.[4][6]Psychotic disorders due to medical conditions and substance use typically consist of delusions and hallucinations.[6][104]The rarer delusional disorder and shared psychotic disorder typically present with persistent delusions.[104]FTDs are commonly found in schizophrenia and mood disorders, with poverty of speech content more common in schizophrenia.[105] Psychoses such as schizophrenia andbipolar maniaare distinguishable frommalingering, when an individual fakes illness for other gains, by clinical presentations; malingerers feign thought content with no irregularities in form such as derailment or looseness of association.[106]Negative symptoms, including alogia, may be absent, and chronic thought disorder is typically distressing.[106] Autism spectrum disorders (ASD) whose diagnosis requires the onset of symptoms before three years of age can be distinguished from early-onset schizophrenia; schizophrenia under age 10 is extremely rare, and ASD patients do not display FTDs.[107]However, it has been suggested that individuals with ASD display language disturbances like those found in schizophrenia; a 2008 study found that children and adolescents with ASD showed significantly more illogical thinking and loose associations than control subjects.[108]The illogical thinking was related to cognitive functioning and executive control; the loose associations were related to communication symptoms and parent reports of stress and anxiety.[108] Rorschach testshave been useful for assessing TD in disturbed patients.[109][1]A series of inkblots are shown, and patient responses are analyzed to determine disturbances of thought.[1]The nature of the assessment offers insight into the cognitive processes of another, and how they respond to equivocal stimuli.[110]Hermann Rorschachdeveloped this test to diagnose schizophrenia after realizing that people with schizophrenia gave drastically different interpretations ofKlecksographieinkblots from others whose thought processes were considered normal,[111]and it has become one of the most widely used assessment tools for diagnosing TDs.[1] The Thought Disorder Index (TDI), also known as the Delta Index, was developed to help further determine the severity of TD in verbal responses.[1]TDI scores are primarily derived from verbally-expressed interpretations of the Rorschach test, but TDI can also be used with other verbal samples (including theWechsler Adult Intelligence Scale).[1]TDI has a twenty-three-category scoring index; each category scores the level of severity on a scale from 0 to 1, with .25 being mild and 1.00 being most severe (0.25, 0.50, 0.75, 1.00).[1] TD has been criticized as being based on circular or incoherent definitions.[112][need quotation to verify]Symptoms of TD are inferred from disordered 
speech, based on the assumption that disordered speech arises from disordered thought. Although TD is typically associated with psychosis, similar phenomena can appear in different disorders, sometimes leading to misdiagnosis.[113] A criticism related to the separation of symptoms of schizophrenia into negative or positive symptoms, including TD, is that it oversimplifies the complexity of TD and its relationship to other positive symptoms.[114] Factor analysis has found that negative symptoms tend to correlate with one another, but positive symptoms tend to separate into two groups.[114] The resulting three clusters became known as negative symptoms, psychotic symptoms, and disorganization symptoms.[101] Alogia, a TD traditionally classified as a negative symptom, can be separated into two types: poverty of speech content (a disorganization symptom) and poverty of speech, response latency, and thought blocking (negative symptoms).[115] Positive-negative-symptom diametrics, however, may enable a more accurate characterization of schizophrenia.[116]
https://en.wikipedia.org/wiki/Formal_thought_disorder
Incomputational complexity theory, aninteractive proof systemis anabstract machinethat modelscomputationas the exchange of messages between two parties: aproverand averifier. The parties interact by exchanging messages in order to ascertain whether a givenstringbelongs to alanguageor not. The prover is assumed to possess unlimited computational resources but cannot be trusted, while the verifier has bounded computation power but is assumed to be always honest. Messages are sent between the verifier and prover until the verifier has an answer to the problem and has "convinced" itself that it is correct. All interactive proof systems have two requirements: The specific nature of the system, and so thecomplexity classof languages it can recognize, depends on what sort of bounds are put on the verifier, as well as what abilities it is given—for example, most interactive proof systems depend critically on the verifier's ability to make random choices. It also depends on the nature of the messages exchanged—how many and what they can contain. Interactive proof systems have been found to have some important implications for traditional complexity classes defined using only one machine. The main complexity classes describing interactive proof systems areAMandIP. Every interactive proof system defines aformal languageof stringsL{\displaystyle L}.Soundnessof the proof system refers to the property that no prover can make the verifier accept for the wrong statementy∉L{\displaystyle y\not \in L}except with some small probability. The upper bound of this probability is referred to as thesoundness errorof a proof system. More formally, for every prover(P~){\displaystyle ({\tilde {\mathcal {P}}})}, and everyy∉L{\displaystyle y\not \in L}: for someϵ≪1{\displaystyle \epsilon \ll 1}. As long as the soundness error is bounded by a polynomial fraction of the potential running time of the verifier (i.e.ϵ≤1/poly(|y|){\displaystyle \epsilon \leq 1/\mathrm {poly} (|y|)}), it is always possible to amplify soundness until the soundness error becomesnegligible functionrelative to the running time of the verifier. This is achieved by repeating the proof and accepting only if all proofs verify. Afterℓ{\displaystyle \ell }repetitions, a soundness errorϵ{\displaystyle \epsilon }will be reduced toϵℓ{\displaystyle \epsilon ^{\ell }}.[1] The complexity classNPmay be viewed as a very simple proof system. In this system, the verifier is a deterministic, polynomial-time machine (aPmachine). The protocol is: In the case where a valid proof certificate exists, the prover is always able to make the verifier accept by giving it that certificate. In the case where there is no valid proof certificate, however, the input is not in the language, and no prover, however malicious it is, can convince the verifier otherwise, because any proof certificate will be rejected. Although NP may be viewed as using interaction, it wasn't until 1985 that the concept of computation through interaction was conceived (in the context of complexity theory) by two independent groups of researchers. One approach, byLászló Babai, who published "Trading group theory for randomness",[2]defined theArthur–Merlin(AM) class hierarchy. In this presentation, Arthur (the verifier) is aprobabilistic, polynomial-time machine, while Merlin (the prover) has unbounded resources. The classMAin particular is a simple generalization of the NP interaction above in which the verifier is probabilistic instead of deterministic. 
Also, instead of requiring that the verifier always accept valid certificates and reject invalid certificates, it is more lenient: This machine is potentially more powerful than an ordinary NPinteraction protocol, and the certificates are no less practical to verify, sinceBPPalgorithms are considered as abstracting practical computation (seeBPP). In apublic coinprotocol, the random choices made by the verifier are made public. They remain private in a private coin protocol. In the same conference where Babai defined his proof system forMA,Shafi Goldwasser,Silvio MicaliandCharles Rackoff[3]published a paper defining the interactive proof systemIP[f(n)]. This has the same machines as theMAprotocol, except thatf(n)roundsare allowed for an input of sizen. In each round, the verifier performs computation and passes a message to the prover, and the prover performs computation and passes information back to the verifier. At the end the verifier must make its decision. For example, in anIP[3] protocol, the sequence would be VPVPVPV, where V is a verifier turn and P is a prover turn. In Arthur–Merlin protocols, Babai defined a similar classAM[f(n)] which allowedf(n) rounds, but he put one extra condition on the machine: the verifier must show the prover all the random bits it uses in its computation. The result is that the verifier cannot "hide" anything from the prover, because the prover is powerful enough to simulate everything the verifier does if it knows what random bits it used. This is called apublic coinprotocol, because the random bits ("coin flips") are visible to both machines. TheIPapproach is called aprivate coinprotocol by contrast. The essential problem with public coins is that if the prover wishes to maliciously convince the verifier to accept a string which is not in the language, it seems like the verifier might be able to thwart its plans if it can hide its internal state from it. This was a primary motivation in defining theIPproof systems. In 1986, Goldwasser andSipser[4]showed, perhaps surprisingly, that the verifier's ability to hide coin flips from the prover does it little good after all, in that an Arthur–Merlin public coin protocol with only two more rounds can recognize all the same languages. The result is that public-coin and private-coin protocols are roughly equivalent. In fact, as Babai shows in 1988,AM[k]=AMfor all constantk, so theIP[k] have no advantage overAM.[5] To demonstrate the power of these classes, consider thegraph isomorphism problem, the problem of determining whether it is possible to permute the vertices of one graph so that it is identical to another graph. This problem is inNP, since the proof certificate is the permutation which makes the graphs equal. It turns out that thecomplementof the graph isomorphism problem, a co-NPproblem not known to be inNP, has anAMalgorithm and the best way to see it is via a private coins algorithm.[6] Private coins may not be helpful, but more rounds of interaction are helpful. If we allow the probabilistic verifier machine and the all-powerful prover to interact for a polynomial number of rounds, we get the class of problems calledIP. 
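The role of private coins in the graph non-isomorphism protocol alluded to above can be made concrete with a toy simulation. The following Python sketch is illustrative only (the example graphs, function names, and round counts are invented): the verifier secretly flips a coin, sends a randomly relabelled copy of the chosen graph, and the computationally unbounded prover, modelled here by brute force over all vertex permutations, must say which input graph the copy came from. If the two graphs are not isomorphic, the honest prover answers correctly in every round; if they are isomorphic, no prover can do better than guessing, so cheating is caught with probability about 1/2 per round, and repetition drives the soundness error down.

```python
import itertools
import random

def apply_perm(edges, perm):
    # Relabel every edge {u, v} as {perm[u], perm[v]}.
    return frozenset(frozenset(perm[v] for v in e) for e in edges)

def prover_answer(edges0, challenge, n):
    # The computationally unbounded prover: brute force over all n! relabelings.
    # Returns 0 if the challenge graph is isomorphic to edges0, otherwise 1.
    for perm in itertools.permutations(range(n)):
        if apply_perm(edges0, perm) == challenge:
            return 0
    return 1

def one_round(edges0, edges1, n):
    # Private coins: the verifier secretly picks a graph and a random relabeling,
    # then asks the prover which input graph the scrambled copy came from.
    secret_bit = random.randrange(2)
    perm = list(range(n))
    random.shuffle(perm)
    challenge = apply_perm(edges0 if secret_bit == 0 else edges1, perm)
    return prover_answer(edges0, challenge, n) == secret_bit

# Two small graphs on 4 vertices: a path, and a triangle plus an isolated vertex.
G0 = frozenset(frozenset(e) for e in [(0, 1), (1, 2), (2, 3)])
G1 = frozenset(frozenset(e) for e in [(0, 1), (1, 2), (2, 0)])

# Non-isomorphic inputs: the honest prover convinces the verifier in every round.
print(all(one_round(G0, G1, 4) for _ in range(20)))        # True

# Isomorphic inputs (G0 versus a relabeled copy of itself): the best any prover
# can do is guess, so it is caught in roughly half of the rounds.
G0_copy = apply_perm(G0, [3, 2, 1, 0])
print(sum(one_round(G0, G0_copy, 4) for _ in range(200)))  # about 100 of 200
```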
In 1992,Adi Shamirrevealed in one of the central results of complexity theory thatIPequalsPSPACE, the class of problems solvable by an ordinarydeterministic Turing machinein polynomial space.[7] If we allow the elements of the system to usequantum computation, the system is called aquantum interactive proof system, and the corresponding complexity class is calledQIP.[8]A series of results culminated in a 2010 breakthrough thatQIP=PSPACE.[9][10] Not only can interactive proof systems solve problems not believed to be inNP, but under assumptions about the existence ofone-way functions, a prover can convince the verifier of the solution without ever giving the verifier information about the solution. This is important when the verifier cannot be trusted with the full solution. At first it seems impossible that the verifier could be convinced that there is a solution when the verifier has not seen a certificate, but such proofs, known aszero-knowledge proofsare in fact believed to exist for all problems inNPand are valuable incryptography. Zero-knowledge proofs were first mentioned in the original 1985 paper onIPby Goldwasser, Micali and Rackoff for specific number theoretic languages. The extent of their power was however shown byOded Goldreich,Silvio MicaliandAvi Wigderson.[6]for all ofNP, and this was first extended byRussell ImpagliazzoandMoti Yungto allIP.[11] One goal ofIP's designers was to create the most powerful possible interactive proof system, and at first it seems like it cannot be made more powerful without making the verifier more powerful and so impractical. Goldwasser et al. overcame this in their 1988 "Multi prover interactive proofs: How to remove intractability assumptions", which defines a variant ofIPcalledMIPin which there aretwoindependent provers.[12]The two provers cannot communicate once the verifier has begun sending messages to them. Just as it's easier to tell if a criminal is lying if he and his partner are interrogated in separate rooms, it's considerably easier to detect a malicious prover trying to trick the verifier into accepting a string not in the language if there is another prover it can double-check with. In fact, this is so helpful that Babai, Fortnow, and Lund were able to show thatMIP=NEXPTIME, the class of all problems solvable by anondeterministicmachine inexponential time, a very large class.[13]NEXPTIME contains PSPACE, and is believed to strictly contain PSPACE. Adding a constant number of additional provers beyond two does not enable recognition of any more languages. This result paved the way for the celebratedPCP theorem, which can be considered to be a "scaled-down" version of this theorem. MIPalso has the helpful property that zero-knowledge proofs for every language inNPcan be described without the assumption of one-way functions thatIPmust make. This has bearing on the design of provably unbreakable cryptographic algorithms.[12]Moreover, aMIPprotocol can recognize all languages inIPin only a constant number of rounds, and if a third prover is added, it can recognize all languages inNEXPTIMEin a constant number of rounds, showing again its power overIP. It is known that for any constantk, a MIP system withkprovers and polynomially many rounds can be turned into an equivalent system with only 2 provers, and a constant number of rounds.[14] While the designers ofIPconsidered generalizations of Babai's interactive proof systems, others considered restrictions. 
A very useful interactive proof system isPCP(f(n),g(n)), which is a restriction ofMAwhere Arthur can only usef(n) random bits and can only examineg(n) bits of the proof certificate sent by Merlin (essentially usingrandom access). There are a number of easy-to-prove results about variousPCPclasses.⁠PCP(0,poly){\displaystyle {\mathsf {PCP}}(0,{\mathsf {poly}})}⁠, the class of polynomial-time machines with no randomness but access to a certificate, is justNP.⁠PCP(poly,0){\displaystyle {\mathsf {PCP}}({\mathsf {poly}},0)}⁠, the class of polynomial-time machines with access to polynomially many random bits isco-RP. Arora and Safra's first major result was that⁠PCP(log,log)=NP{\displaystyle {\mathsf {PCP}}(\log ,\log )={\mathsf {NP}}}⁠; put another way, if the verifier in theNPprotocol is constrained to choose only⁠O(log⁡n){\displaystyle O(\log n)}⁠bits of the proof certificate to look at, this won't make any difference as long as it has⁠O(log⁡n){\displaystyle O(\log n)}⁠random bits to use.[15] Furthermore, thePCP theoremasserts that the number of proof accesses can be brought all the way down to a constant. That is,⁠NP=PCP(log,O(1)){\displaystyle {\mathsf {NP}}={\mathsf {PCP}}(\log ,O(1))}⁠.[16]They used this valuable characterization ofNPto prove thatapproximation algorithmsdo not exist for the optimization versions of certainNP-completeproblems unlessP = NP. Such problems are now studied in the field known ashardness of approximation.
https://en.wikipedia.org/wiki/Interactive_proof_system
Harmonic grammar is a linguistic model proposed by Geraldine Legendre, Yoshiro Miyata, and Paul Smolensky in 1990. It is a connectionist approach to modeling linguistic well-formedness. During the late 2000s and early 2010s, the term 'harmonic grammar' came to be used to refer more generally to models of language that use weighted constraints, including ones that are not explicitly connectionist – see e.g. Pater (2009) and Potts et al. (2010).
https://en.wikipedia.org/wiki/Harmonic_grammar
Windows 10is a major release of theWindows NToperating systemdeveloped byMicrosoft. Microsoft described Windows 10 as an "operating system as a service" that would receive ongoing updates to its features and functionality, augmented with the ability for enterprise environments to receive non-critical updates at a slower pace or use long-term support milestones that will only receive critical updates, such as security patches, over their five-year lifespan of mainstream support. It was released in July 2015. Windows 10 InsiderPreview builds are delivered to Insiders in three different channels (previously "rings").[1]Insiders in the Dev Channel (previouslyFast ring) receive updates prior to those in the Beta Channel (previouslySlow ring), but might experience more bugs and other issues.[2][3]Insiders in the Release Preview Channel (previouslyRelease Preview ring) do not receive updates until the version is almost available to the public, but are comparatively more stable.[4] Mainstream builds of Windows 10 are labeled "YYMM", with YY representing the two-digit year and MM representing the month of planned release (for example, version 1507 refers to builds which initially released in July 2015). Starting with version 20H2, Windows 10 release nomenclature changed from the year and month pattern to a year and half-year pattern (YYH1, YYH2).[5] The second stable build of Windows10 isversion 1511(build number 10586), known as theNovember Update. It was codenamed "Threshold 2" (TH2) during development. This version was distributed via Windows Update on November 12, 2015. It contains various improvements to the operating system, its user interface, bundled services, as well as the introduction of Skype-based universal messaging apps, and the Windows Store for Business and Windows Update for Business features.[6][7][8][9] On November 21, 2015, the November Update was temporarily pulled from public distribution.[10][11]The upgrade was re-instated on November 24, 2015, with Microsoft stating that the removal was due to a bug that caused privacy and data collection settings to be reset to defaults when installing the upgrade.[12] The third stable build of Windows 10 is calledversion 1607, known as theAnniversary Update. It was codenamed "Redstone 1" (RS1) during development. This version was released on August 2, 2016, a little over one year after the first stable release of Windows 10.[13][14][15][16]The Anniversary Update was originally thought to have been set aside for two feature updates. 
While both were originally to be released in 2016, the second was moved into 2017 so that it would be released in concert with that year's wave of Microsoft first-party devices.[17][18][14] The Anniversary Update introduces new features such as the Windows Ink platform, which eases the ability to add stylus input support to Universal Windows Platform apps and provides a new "Ink Workspace" area with links to pen-oriented apps and features,[19][14]enhancements to Cortana's proactive functionality,[20]a dark user interface theme mode, a new version ofSkypedesigned to work with the Universal Windows Platform, improvements to Universal Windows Platform intended for video games,[13]and offline scanning usingWindows Defender.[21]The Anniversary Update also supportsWindows Subsystem for Linux, a new component that provides an environment for runningLinux-compatible binary software in anUbuntu-based user mode environment.[22] On new installations of Windows 10 on systems withSecure Bootenabled, all kernel-mode drivers issued after July 29, 2015, must be digitally signed with anExtended Validation Certificateissued by Microsoft.[23] This version is the basis for "LTSB 2016", the first upgrade to the LTSB since Windows 10's release. The first LTSB release, based on RTM (version 1507), has been retroactively named "LTSB 2015". The fourth stable build of Windows 10 is calledversion 1703, known as theCreators Update. It was codenamed "Redstone 2" (RS2) during development. This version was announced on October 26, 2016,[24][25]and was released forgeneral availabilityon April 11, 2017,[26][27]and for manual installation via Windows 10 Upgrade Assistant and Media Creation Tool tools on April 5, 2017.[28]This update primarily focuses on content creation, productivity, and gaming features—with a particular focus onvirtualandaugmented reality(includingHoloLensandvirtual reality headsets) and on aiding the generation of three-dimensional content. It supports a new virtual reality workspace designed for use with headsets; Microsoft announced that several OEMs planned to release VR headsets designed for use with the Creators Update.[27][26][29] Controls for the Game Bar and Game DVR feature have moved to the Settings app, while a new "Game Mode" option allows resources to be prioritized towards games.[30]Integration with Microsoft acquisitionMixer(formerly Beam)[31]was added for live streaming.[30]The themes manager moved to Settings app, and custom accent colors are now possible.[30]The new appPaint 3Dallows users to produce artwork using 3D models; the app is designed to make 3D creation more accessible to mainstream users.[32] Windows 10's privacy settings have more detailed explanations of data that the operating system may collect. 
Additionally, the "enhanced" level of telemetry collection was removed.[30]Windows Update notifications may now be "snoozed" for a period of time, the "active hours" during which Windows will not try to install updates may now extend up to 18 hours in length, and updates may be paused for up to seven days.[30]Windows Defender has been replaced by the universal appWindows Defender Security Center.[30]Devices may optionally be configured to prevent use of software from outside of Microsoft Store, or warn before installation of apps from outside of Microsoft Store.[33]"Dynamic Lock" allows a device to automatically lock if it is outside of the proximity of a designatedBluetoothdevice, such as a smartphone.[34]A "Night Light" feature was added, which allows the user to change thecolor temperatureof the display to the red part of the spectrum at specific times of day (similarly to the third-party softwaref.lux).[35] The fifth stable build of Windows 10 is calledversion 1709, known as theFall Creators Update. It was codenamed "Redstone 3" (RS3) during development. This version was released on October 17, 2017.[36][37][38]Version 1709 introduces a new feature known as "My People", where shortcuts to "important" contacts can be displayed on the taskbar. Notifications involving these contacts appear above their respective pictures, and users can communicate with the contact via eitherSkype, e-mail, or text messaging (integrating withAndroidandWindows 10 Mobiledevices). Support for additional services, including Xbox,Skype for Business, and third-party integration, are to be added in the future. Files can also be dragged directly to the contact's picture to share them.[39]My People was originally announced for Creators Update, but was ultimately held over to the next release,[40][41]and made its first public appearance in Build 16184 in late April 2017.[37]A new "Files-on-Demand" feature for OneDrive serves as a partial replacement for the previous "placeholders" function.[42] It also introduces a new security feature known as "controlled folder access", which can restrict the applications allowed to access specific folders. This feature is designed mainly to defend against file-encryptingransomware.[43]This is also the first release that introduces DCH drivers.[citation needed] The sixth stable build of Windows 10 is calledversion 1803, known as theApril 2018 Update. It was codenamed "Redstone 4" (RS4) during development. This version was released as a manual download on April 30, 2018, with a broad rollout on May 8, 2018.[44][45]This update was originally meant to be released on April 10, but was delayed because of a bug which could increase chances of a "Blue Screen of Death" (Stop error).[46] The most significant feature of this build is Timeline, which is displayed within Task View. It allows users to view a list of recently used documents and websites from supported applications ("activities"). When users consent to Microsoft data collection viaMicrosoft Graph, activities can also be synchronized from supportedAndroidandiOSdevices.[47][48][49][42] The seventh stable build of Windows 10 is calledversion 1809, known as theOctober 2018 Update. It was codenamed "Redstone 5" (RS5) during development. 
This version was released on October 2, 2018.[50]Highlighted features on this build include updates to the clipboard function (including support for clipboard history and syncing with other devices),SwiftKeyvirtual keyboard, Snip & Sketch, and File Explorer supporting the dark color scheme mode.[51] On October 6, 2018, the build was pulled by Microsoft following isolated reports of the update process deleting files from user directories.[52]It was re-released to Windows Insider channel on October 9, with Microsoft citing a bug in OneDrive's Known Folder Redirection function as the culprit.[53][54] On November 13, 2018, Microsoft resumed the rollout of 1809 for a small percentage of users.[55][56] The long term servicing release, Windows 10 Enterprise 2019 LTSC, is based on this version and is equivalent in terms of features.[57] The eighth stable build of Windows 10,version 1903, codenamed "19H1", was released for general availability on May 21, 2019, after being on the Insider Release Preview branch since April 8, 2019.[58]Because of new practices introduced after the problems affecting the 1809 update, Microsoft used an intentionally slower Windows Update rollout process.[59][60][61] New features in the update include a redesigned search tool—separated from Cortana and oriented towards textual queries, a new "Light" theme (set as default on Windows 10Home) using a white-colored taskbar with dark icons, the addition of symbols andkaomojito the emoji input menu, the ability to "pause" system updates, automated "Recommended troubleshooting", integration withGoogle Chromeon Timeline via an extension, support for SMS-based authentication on accounts linked to Microsoft accounts, and the ability to run Windows desktop applications within the Windows Mixed Reality environment (previously restricted to universal apps andSteamVRonly). 
A new feature onPro,Education, andEnterpriseknown as Windows Sandbox allows users to run applications within a securedHyper-Venvironment.[62][63] A revamped version of Game Bar was released alongside 1903, which redesigns it into a larger overlay with a performance display, Xbox friends list and social functionality, and audio and streaming settings.[64] The ninth stable build of Windows 10,version 1909, codenamed "19H2", was released to the public on November 12, 2019, after being on the Insider Release Preview branch since August 26, 2019.[65]Unlike previous updates, this one was released as a minor service update without major new features.[66] The tenth stable build of Windows 10,version 2004, codenamed "20H1", was released to the public on May 27, 2020, after being on the Insider Release Preview branch since April 16, 2020.[67]New features included faster and easier access to Bluetooth settings and pairing, improvedKaomojis, renamable virtual desktops,DirectX 12 Ultimate, a chat-based UI for Cortana, greater integration with Android phones on the Your Phone app,Windows Subsystem for Linux 2(WSL 2; WSL 2 includes a customLinux kernel, unlike its predecessor), the ability to use Windows Hello without the need for a password, improved Windows Search with integration with File Explorer, a cloud download option to reset Windows, accessibility improvements, and the ability to view disk drive type and discrete graphics card temperatures in Task Manager.[68][69] The eleventh stable build of Windows 10,version 20H2, was released to the public on October 20, 2020, after being on the Beta Channel since June 16, 2020.[70]New features include new theme-aware tiles in the Start Menu, new features and improvements toMicrosoft Edge(such as a price comparison tool,Alt+Tab ↹integration for tab switching, and easy access to pinned tabs), a new out-of-box experience with more personalization for the taskbar, notifications improvements, improvements to tablet mode, improvements to Modern Device Management, and the move of the System tab in Control Panel to the About page in Settings. This is the first version of Windows 10 to include the new Chromium-based Edge browser by default.[71][72][73] The twelfth stable build of Windows 10,version 21H1, was released to the public on May 18, 2021, after being on the Beta Channel since February 17, 2021.[74]This update included multi-camera support for Windows Hello, a "News and Interests" feature on the taskbar, and performance improvements toWindows Defender Application GuardandWMIGroup Policy Service.[75] The thirteenth stable build of Windows 10,version 21H2, was released to the public on November 16, 2021, after being on the Beta Channel since July 15, 2021.[76][77]This update includedGPU computesupport in theWindows Subsystem for Linux(WSL) and Azure IoT Edge for Linux on Windows (EFLOW) deployments, a new simplifiedpasswordlessdeployment models for Windows Hello for Business, support forWPA3Hash-to-Element (H2E) standards and a new highlights feature for Search on the taskbar. 
The fourteenth and final stable build of Windows 10,version 22H2, was released to the public on October 18, 2022, after being on the Release Preview Channel since July 28, 2022.[78][79][80]This update re-introduced the search box on the taskbar and includedCopilotin Windows, richer weather experience on the lock screen, additional quick status (such as sports, traffic and finance) on lock screen and a newWindows Spotlightdesktop theme and new account manager experience on the Start menu. On December 16, 2019, Microsoft announced that Windows Insiders in the Fast ring will receive builds directly from thers_prereleasebranch, which are not matched to a specific Windows 10 release. The first build released under the new strategy, build 19536, was made available to Insiders on the same day.[81] Themn_releasebranch was available from May 13, 2020, to June 17, 2020.[82][83]The branch was mandatory for Insiders in the Fast ring.[83] As of June 15, 2020, Microsoft has introduced the "channels" model to its Windows Insider Program, succeeding its "ring" model.[106]All future builds starting from build 10.0.20150, therefore, would be released to Windows Insiders in the Dev Channel.[82] Thefe_releasebranch was available from October 29, 2020, to January 6, 2021.[107][108]The branch was mandatory for Insiders until December 10. Afterward, Insiders could choose to move back to thers_prereleasebranch.[109] Theco_releasebranch was available from April 5 to June 14, 2021.[110]The branch was mandatory for Insiders. As of June 28, 2021, the Dev Channel has transitioned toWindows 11.[111]
https://en.wikipedia.org/wiki/Windows_10_version_history
Text mining,text data mining(TDM) ortext analyticsis the process of deriving high-qualityinformationfromtext. It involves "the discovery by computer of new, previously unknown information, by automatically extracting information from different written resources."[1]Written resources may includewebsites,books,emails,reviews, and articles.[2]High-quality information is typically obtained by devising patterns and trends by means such asstatistical pattern learning. According to Hotho et al. (2005), there are three perspectives of text mining:information extraction,data mining, andknowledge discovery in databases(KDD).[3]Text mining usually involves the process of structuring the input text (usuallyparsing, along with the addition of some derived linguistic features and the removal of others, and subsequent insertion into adatabase), deriving patterns within thestructured data, and finally evaluation and interpretation of the output. 'High quality' in text mining usually refers to some combination ofrelevance,novelty, and interest. Typical text mining tasks includetext categorization,text clustering, concept/entity extraction, production of granular taxonomies,sentiment analysis,document summarization, andentity relation modeling(i.e., learning relations betweennamed entities). Text analysis involvesinformation retrieval,lexical analysisto study word frequency distributions,pattern recognition,tagging/annotation,information extraction,data miningtechniques including link and association analysis,visualization, andpredictive analytics. The overarching goal is, essentially, to turn text into data for analysis, via the application ofnatural language processing(NLP), different types ofalgorithmsand analytical methods. An important phase of this process is the interpretation of the gathered information. A typical application is to scan a set of documents written in anatural languageand either model thedocumentset forpredictive classificationpurposes or populate a database or search index with the information extracted. Thedocumentis the basic element when starting with text mining. Here, we define a document as a unit of textual data, which normally exists in many types of collections.[4] Text analyticsdescribes a set oflinguistic,statistical, andmachine learningtechniques that model and structure the information content of textual sources forbusiness intelligence,exploratory data analysis,research, or investigation.[5]The term is roughly synonymous with text mining; indeed,Ronen Feldmanmodified a 2000 description of "text mining"[6]in 2004 to describe "text analytics".[7]The latter term is now used more frequently in business settings while "text mining" is used in some of the earliest application areas, dating to the 1980s,[8]notably life-sciences research and government intelligence. The term text analytics also describes that application of text analytics to respond to business problems, whether independently or in conjunction with query and analysis of fielded, numerical data. It is a truism that 80% of business-relevant information originates inunstructuredform, primarily text.[9]These techniques and processes discover and present knowledge – facts,business rules, and relationships – that is otherwise locked in textual form, impenetrable to automated processing. Subtasks—components of a larger text-analytics effort—typically include: Text mining technology is now broadly applied to a wide variety of government, research, and business needs. 
All these groups may use text mining for records management and searching documents relevant to their daily activities. Legal professionals may use text mining fore-discovery, for example. Governments and military groups use text mining fornational securityand intelligence purposes. Scientific researchers incorporate text mining approaches into efforts to organize large sets of text data (i.e., addressing the problem ofunstructured data), to determine ideas communicated through text (e.g.,sentiment analysisinsocial media[15][16][17]) and to supportscientific discoveryin fields such as thelife sciencesandbioinformatics. In business, applications are used to supportcompetitive intelligenceand automatedad placement, among numerous other activities. Many text mining software packages are marketed forsecurity applications, especially monitoring and analysis of online plain text sources such asInternet news,blogs, etc. fornational securitypurposes.[18]It is also involved in the study of textencryption/decryption. A range of text mining applications in the biomedical literature has been described,[20]including computational approaches to assist with studies inprotein docking,[21]protein interactions,[22][23]and protein-disease associations.[24]In addition, with large patient textual datasets in the clinical field, datasets of demographic information in population studies and adverse event reports, text mining can facilitate clinical studies and precision medicine. Text mining algorithms can facilitate the stratification and indexing of specific clinical events in large patient textual datasets of symptoms, side effects, and comorbidities from electronic health records, event reports, and reports from specific diagnostic tests.[25]One online text mining application in the biomedical literature isPubGene, a publicly accessiblesearch enginethat combines biomedical text mining with network visualization.[26][27]GoPubMedis a knowledge-based search engine for biomedical texts. Text mining techniques also enable us to extract unknown knowledge from unstructured documents in the clinical domain[28] Text mining methods and software is also being researched and developed by major firms, includingIBMandMicrosoft, to further automate the mining and analysis processes, and by different firms working in the area of search and indexing in general as a way to improve their results. Within the public sector, much effort has been concentrated on creating software for tracking and monitoringterrorist activities.[29]For study purposes,Weka softwareis one of the most popular options in the scientific world, acting as an excellent entry point for beginners. For Python programmers, there is an excellent toolkit calledNLTKfor more general purposes. For more advanced programmers, there's also theGensimlibrary, which focuses on word embedding-based text representations. Text mining is being used by large media companies, such as theTribune Company, to clarify information and to provide readers with greater search experiences, which in turn increases site "stickiness" and revenue. Additionally, on the back end, editors are benefiting by being able to share, associate and package news across properties, significantly increasing opportunities to monetize content. 
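As a concrete illustration of the lexical-analysis step described earlier (studying word-frequency distributions), the following self-contained Python sketch tokenizes a few invented documents and counts terms. Toolkits such as NLTK, Gensim, or Weka mentioned above provide far more complete pipelines (tokenizers, stemmers, embeddings), but the basic "turn text into data" step looks like this:

```python
import re
from collections import Counter

# Tiny made-up corpus standing in for "different written resources".
docs = [
    "The new phone ships with a faster chip and a brighter screen.",
    "Reviewers praise the screen but criticise the battery of the new phone.",
    "Battery life and screen quality dominate most phone reviews.",
]

stopwords = {"the", "a", "and", "of", "with", "but", "most"}

def tokenize(text):
    # Lexical analysis: lowercase and split on non-letter characters.
    return [t for t in re.split(r"[^a-z]+", text.lower()) if t and t not in stopwords]

# Word-frequency distribution over the whole corpus.
corpus_counts = Counter(t for d in docs for t in tokenize(d))
print(corpus_counts.most_common(5))
# e.g. [('phone', 3), ('screen', 3), ('new', 2), ('battery', 2), ...]
```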
Text analytics is being used in business, particularly, in marketing, such as incustomer relationship management.[30]Coussement and Van den Poel (2008)[31][32]apply it to improvepredictive analyticsmodels for customer churn (customer attrition).[31]Text mining is also being applied in stock returns prediction.[33] Sentiment analysismay involve analysis of products such as movies, books, or hotel reviews for estimating how favorable a review is for the product.[34]Such an analysis may need a labeled data set or labeling of theaffectivityof words. Resources for affectivity of words and concepts have been made forWordNet[35]andConceptNet,[36]respectively. Text has been used to detect emotions in the related area of affective computing.[37]Text based approaches to affective computing have been used on multiple corpora such as students evaluations, children stories and news stories. The issue of text mining is of importance to publishers who hold largedatabasesof information needingindexingfor retrieval. This is especially true in scientific disciplines, in which highly specific information is often contained within the written text. Therefore, initiatives have been taken such asNature'sproposal for an Open Text Mining Interface (OTMI) and theNational Institutes of Health's common Journal PublishingDocument Type Definition(DTD) that would provide semantic cues to machines to answer specific queries contained within the text without removing publisher barriers to public access. Academic institutions have also become involved in the text mining initiative: Computational methods have been developed to assist with information retrieval from scientific literature. Published approaches include methods for searching,[41]determining novelty,[42]and clarifyinghomonyms[43]among technical reports. The automatic analysis of vast textual corpora has created the possibility for scholars to analyze millions of documents in multiple languages with very limited manual intervention. Key enabling technologies have been parsing,machine translation, topiccategorization, and machine learning. The automatic parsing of textual corpora has enabled the extraction of actors and their relational networks on a vast scale, turning textual data into network data. The resulting networks, which can contain thousands of nodes, are then analyzed by using tools from network theory to identify the key actors, the key communities or parties, and general properties such as robustness or structural stability of the overall network, or centrality of certain nodes.[45]This automates the approach introduced by quantitative narrative analysis,[46]wherebysubject-verb-objecttriplets are identified with pairs of actors linked by an action, or pairs formed by actor-object.[44] Content analysishas been a traditional part of social sciences and media studies for a long time. 
The automation of content analysis has allowed a "big data" revolution to take place in that field, with studies in social media and newspaper content that include millions of news items.Gender bias,readability, content similarity, reader preferences, and even mood have been analyzed based on text mining methods over millions of documents.[47][48][49][50][51]The analysis of readability, gender bias and topic bias was demonstrated in Flaounas et al.[52]showing how different topics have different gender biases and levels of readability; the possibility to detect mood patterns in a vast population by analyzing Twitter content was demonstrated as well.[53][54] Text mining computer programs are available from manycommercialandopen sourcecompanies and sources. UnderEuropean copyrightanddatabase laws, the mining of in-copyright works (such as byweb mining) without the permission of the copyright owner is illegal. In the UK in 2014, on the recommendation of theHargreaves review, the government amended copyright law[55]to allow text mining as alimitation and exception. It was the second country in the world to do so, followingJapan, which introduced a mining-specific exception in 2009. However, owing to the restriction of theInformation Society Directive(2001), the UK exception only allows content mining for non-commercial purposes. UK copyright law does not allow this provision to be overridden by contractual terms and conditions. TheEuropean Commissionfacilitated stakeholder discussion on text anddata miningin 2013, under the title of Licenses for Europe.[56]The fact that the focus on the solution to this legal issue was licenses, and not limitations and exceptions to copyright law, led representatives of universities, researchers, libraries, civil society groups andopen accesspublishers to leave the stakeholder dialogue in May 2013.[57] US copyright law, and in particular itsfair useprovisions, means that text mining in America, as well as other fair use countries such as Israel, Taiwan and South Korea, is viewed as being legal. As text mining is transformative, meaning that it does not supplant the original work, it is viewed as being lawful under fair use. For example, as part of theGoogle Book settlementthe presiding judge on the case ruled that Google's digitization project of in-copyright books was lawful, in part because of the transformative uses that the digitization project displayed—one such use being text and data mining.[58] There is no exception incopyright law of Australiafor text or data mining within theCopyright Act 1968. TheAustralian Law Reform Commissionhas noted that it is unlikely that the "research and study"fair dealingexception would extend to cover such a topic either, given it would be beyond the "reasonable portion" requirement.[59] Until recently, websites most often used text-based searches, which only found documents containing specific user-defined words or phrases. Now, through use of asemantic web, text mining can find content based on meaning and context (rather than just by a specific word). Additionally, text mining software can be used to build large dossiers of information about specific people and events. For example, large datasets based on data extracted from news reports can be built to facilitate social networks analysis orcounter-intelligence. In effect, the text mining software may act in a capacity similar to anintelligence analystor research librarian, albeit with a more limited scope of analysis. 
Text mining is also used in some emailspam filtersas a way of determining the characteristics of messages that are likely to be advertisements or other unwanted material. Text mining plays an important role in determining financialmarket sentiment.
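The spam-filtering application just mentioned is typically framed as text categorization: word statistics are learned from labelled messages and used to score new ones. Below is a minimal naive Bayes sketch in Python with invented training messages; real filters train on much larger corpora and use richer features, but the shape of the computation is the same.

```python
import math
import re
from collections import Counter

def tokens(text):
    return re.findall(r"[a-z']+", text.lower())

# Invented training data: (message, label).
training = [
    ("win a free prize now, click here", "spam"),
    ("cheap meds, best prize, free offer", "spam"),
    ("meeting moved to friday, agenda attached", "ham"),
    ("please review the attached report before the meeting", "ham"),
]

# Per-class word counts and message counts.
word_counts = {"spam": Counter(), "ham": Counter()}
class_counts = Counter()
for text, label in training:
    word_counts[label].update(tokens(text))
    class_counts[label] += 1

vocab = set(word_counts["spam"]) | set(word_counts["ham"])

def log_posterior(text, label):
    # log P(label) + sum of log P(word | label), with add-one (Laplace) smoothing.
    total = sum(word_counts[label].values())
    score = math.log(class_counts[label] / sum(class_counts.values()))
    for w in tokens(text):
        score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
    return score

def classify(text):
    return max(("spam", "ham"), key=lambda label: log_posterior(text, label))

print(classify("free prize offer, click now"))           # spam
print(classify("report attached ahead of the meeting"))  # ham
```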
https://en.wikipedia.org/wiki/Text_mining
Morisita's overlap index, named after Masaaki Morisita, is a statistical measure of dispersion of individuals in a population. It is used to compare overlap among samples (Morisita 1959). This formula is based on the assumption that increasing the size of the samples will increase the diversity because it will include different habitats (i.e. different faunas). Formula:

C_D = \frac{2 \sum_{i=1}^{S} x_i y_i}{(D_x + D_y)\, X\, Y}

where x_i is the number of times species i is represented in the total X from one sample, y_i is the number of times species i is represented in the total Y from another sample, S is the number of unique species, and D_x and D_y are the Simpson's index values for the x and y samples respectively:

D_x = \frac{\sum_{i=1}^{S} x_i (x_i - 1)}{X (X - 1)}, \qquad D_y = \frac{\sum_{i=1}^{S} y_i (y_i - 1)}{Y (Y - 1)}.

C_D = 0 if the two samples do not overlap in terms of species, and C_D = 1 if the species occur in the same proportions in both samples.

Horn's modification of the index is (Horn 1966):

C_H = \frac{2 \sum_{i=1}^{S} x_i y_i}{\left( \frac{\sum_{i=1}^{S} x_i^{2}}{X^{2}} + \frac{\sum_{i=1}^{S} y_i^{2}}{Y^{2}} \right) X\, Y}.

Note: not to be confused with Morisita's index of dispersion.
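Given two tables of species counts, both indices are straightforward to compute from the formulas above. A minimal Python sketch (the function names and the example counts are illustrative, not taken from Morisita's or Horn's papers):

```python
from collections import Counter

def morisita_overlap(sample_x, sample_y):
    """Morisita's overlap index C_D between two samples of species counts.

    sample_x, sample_y: mappings from species name to number of individuals.
    Returns 0 when the samples share no species and is near 1 when species
    occur in the same proportions in both samples.
    """
    species = set(sample_x) | set(sample_y)
    X = sum(sample_x.values())
    Y = sum(sample_y.values())
    # Simpson's index values for each sample (Morisita's unbiased form).
    d_x = sum(x * (x - 1) for x in sample_x.values()) / (X * (X - 1))
    d_y = sum(y * (y - 1) for y in sample_y.values()) / (Y * (Y - 1))
    cross = sum(sample_x.get(s, 0) * sample_y.get(s, 0) for s in species)
    return 2 * cross / ((d_x + d_y) * X * Y)

def horn_overlap(sample_x, sample_y):
    """Horn's (1966) modification, using squared proportions instead."""
    species = set(sample_x) | set(sample_y)
    X = sum(sample_x.values())
    Y = sum(sample_y.values())
    d_x = sum(x * x for x in sample_x.values()) / (X * X)
    d_y = sum(y * y for y in sample_y.values()) / (Y * Y)
    cross = sum(sample_x.get(s, 0) * sample_y.get(s, 0) for s in species)
    return 2 * cross / ((d_x + d_y) * X * Y)

# Hypothetical counts of three species caught at two sites.
site_a = Counter({"ant": 40, "beetle": 10, "cricket": 50})
site_b = Counter({"ant": 42, "beetle": 12, "cricket": 46})
print(round(morisita_overlap(site_a, site_b), 3))  # close to 1: similar proportions
print(round(horn_overlap(site_a, site_b), 3))
```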
https://en.wikipedia.org/wiki/Morisita%27s_overlap_index
Inlinguistics, aform-meaning mismatchis a natural mismatch between thegrammatical formand its expectedmeaning. Such form-meaning mismatches happen everywhere in language.[1]Nevertheless, there is often an expectation of a one-to-one relationship between meaning and form, and indeed, manytraditionaldefinitions are based on such an assumption. For example, Verbscome in threetenses:past,present, andfuture. The past is used to describe things that have already happened (e.g.,earlier in the day, yesterday, last week, three years ago). The present tense is used to describe things that are happening right now, or things that are continuous. The future tense describes things that have yet to happen (e.g.,later, tomorrow, next week, next year, three years from now).[2] While this accurately captures the typical behaviour of these three tenses, it's not unusual for a futurate meaning to have a present tense form (I'll see you before Igo) or a past tense form (If youcouldhelp, that would be great). There are three types of mismatch.[3] Syncretism is "the relation between words which have different morphosyntactic features but are identical in form."[4]For example, the English first persongenitivepronounsare distinct for dependentmyand independentmine, but forhe, there is syncretism: the dependent and independent pronouns share the formhis(e.g.,that'shisbook;it'shis). As a result, there is no consistent match between the form and function of the word. Similarly,Slovaknouns typically markcaseas in the word for "dog", which ispesinnominativecase butpsainaccusative. Butslovo"word" the nominative and accusative have come to share the same form, which means that it does not reliably indicate whether it is a subject or an object.[5] Thesubjectof a sentence is often defined as a noun phrase that denotes the semanticagentor "the doer of the action".[6][p. 69] a noun, noun phrase, or pronoun that usually comes before a main verb and represents the person or thing that performs the action of the verb, or about which something is stated.[7] But in many cases, the subject does not express the expected meaning of doer.[6][p. 69] Dummythereinthere's a book on the table, is the grammatical subject, butthereisn't the doer of the action or the thing about which something is stated. In fact it has no semantic role at all. The same is true ofitinit's cold today.[6][p. 252] In the case of object raising, theobjectof one verb can be the agent of another verb. For example, inwe expectJJto arrive at 2:00,JJis the object ofexpect, butJJis also the person who will be doing the arriving.[6][p. 221]Similarly, in Japanese, the potential form of verbs can raise the object of the main verb to the subject position. For example, in the sentence 私は寿司が食べられる (Watashi wa sushi ga taberareru, "I can eat sushi"), 寿司 ("sushi") is the object of the verb 食べる ("eat") but functions as the subject of the potential form verb 食べられる ("be able to eat").[8] From a semantic point of view, a definitenoun phraseis one that is identifiable and activated in the minds of thefirst personand the addressee. From a grammatical point of view in English, definiteness is typically marked by definitedeterminers, such asthis. “The theoretical distinction between grammatical definiteness and cognitive identifiability has the advantage of enabling us to distinguish between a discrete (grammatical) and a non-discrete (cognitive) category”[9][p. 
84] So, in a case such as I met this guy from Heidelberg on the train, the noun phrase this guy from Heidelberg is grammatically definite but semantically indefinite;[9][p. 82] there is a form-meaning mismatch. Grammatical number is typically marked on nouns in English, and present-tense verbs show agreement with the subject. But there are cases of mismatch, such as with a singular collective noun as the subject and plural agreement on the verb (e.g., The team are working hard).[6][p. 89] The pronoun you also triggers plural agreement regardless of whether it refers to one person or more (e.g., You are the only one who can do this).[10] This is similar to the use of honorific constructions in the Toda language, where subject-verb agreement for number is generally marked by different verb conjugations, but there are exceptions with certain honorific forms. For example, among the verb forms of the Toda verb "to give", the honorific form kwēśt- shows a form-meaning mismatch regarding number, as the same form is used to show respect to a single person or to multiple people.[11] In some cases, the mismatch may be apparent rather than real due to a poorly chosen term. For example, "plural" in English suggests more than one, but "non-singular" may be a better term. We use plural marking for things less than one (e.g., 0.5 calories) or even for nothing at all (e.g., zero degrees).[12] In some cases, the grammatical gender of a word appears to be a mismatch with its meaning. For example, in German, das Fräulein means the unmarried woman. A woman is naturally feminine in terms of social gender, but the word here is neuter gender.[13] Also, in Chichewa, a Bantu language, the word for "child" is mwaná (class 1) in the singular and aná (class 2) in the plural. When referring to a group of mixed-gender children, the plural form, aná, is used even though it belongs to a different noun class from that of the singular form, mwaná.[14] German and English compounds are quite different syntactically, but not semantically.[15] Form-meaning mismatches can lead to language change. An example of this is the split of the nominal gerund construction in English and a new "non-nominal" reference type becoming the most dominant function of the verbal gerund construction.[16] The syntax-semantics interface is one of the most vulnerable aspects in L2 acquisition. Therefore, L2 speakers are often found either to have incomplete grammar or to have highly variable syntactic-semantic awareness and performance.[17] In morphology, a morpheme can get trapped and eliminated. Consider this example: the Old Norwegian for "horse's" was hest-s, and the way to mark that as definite and genitive ("the" + GEN) was -in-s. When those went together, the genitive of hest-s was lost, and the result is hest-en-s ("the horse" + GEN) in modern Norwegian.[18][p. 90] The result is a form-meaning mismatch.
https://en.wikipedia.org/wiki/Form-meaning_mismatch
This list compiles notable works that explore the history and development of number systems across various civilizations and time periods. These works cover topics ranging from ancient numeral systems and arithmetic methods to the evolution of mathematical notations and the impact of numerals on science, trade, and culture. Number systems have been central to the development of human civilization, enabling record-keeping, commerce, astronomy, and scientific advancement. Early systems such as tally marks and Roman numerals gradually gave way to more abstract and efficient representations like the Babylonian base-60 system and the Hindu–Arabic numerals, now standard worldwide. The invention of zero, positional notation, and symbolic mathematics has had profound philosophical and technological implications.
https://en.wikipedia.org/wiki/List_of_books_on_history_of_number_systems
Signed zero is zero with an associated sign. In ordinary arithmetic, the number 0 does not have a sign, so that −0, +0 and 0 are equivalent. However, in computing, some number representations allow for the existence of two zeros, often denoted by −0 (negative zero) and +0 (positive zero), regarded as equal by the numerical comparison operations but with possibly different behaviors in particular operations. This occurs in the sign-magnitude and ones' complement signed number representations for integers, and in most floating-point number representations. The number 0 is usually encoded as +0, but can still be represented by +0, −0, or 0.

The IEEE 754 standard for floating-point arithmetic (presently used by most computers and programming languages that support floating-point numbers) requires both +0 and −0. Real arithmetic with signed zeros can be considered a variant of the extended real number line such that 1/−0 = −∞ and 1/+0 = +∞; division is undefined only for ±0/±0 and ±∞/±∞. Negatively signed zero echoes the mathematical analysis concept of approaching 0 from below as a one-sided limit, which may be denoted by x → 0⁻, x → 0−, or x → ↑0. The notation "−0" may be used informally to denote a negative number that has been rounded to zero. The concept of negative zero also has some theoretical applications in statistical mechanics and other disciplines.

It is claimed that the inclusion of signed zero in IEEE 754 makes it much easier to achieve numerical accuracy in some critical problems,[1] in particular when computing with complex elementary functions.[2] On the other hand, the concept of signed zero runs contrary to the usual assumption made in mathematics that negative zero is the same value as zero. Representations that allow negative zero can be a source of errors in programs, if software developers do not take into account that while the two zero representations behave as equal under numeric comparisons, they yield different results in some operations.

Binary integer formats can use various encodings. In the widely used two's complement encoding, zero is unsigned. In a 1+7-bit sign-and-magnitude representation for integers, negative zero is represented by the bit string 1000 0000. In an 8-bit ones' complement representation, negative zero is represented by the bit string 1111 1111. In all three of these encodings, positive or unsigned zero is represented by 0000 0000. However, the latter two encodings (with a signed zero) are uncommon for integer formats. The most common formats with a signed zero are floating-point formats (IEEE 754 formats or similar), described below.

In IEEE 754 binary floating-point formats, zero values are represented by the biased exponent and significand both being zero. Negative zero has the sign bit set to one. One may obtain negative zero as the result of certain computations, for instance as the result of arithmetic underflow on a negative number (other results may also be possible), or −1.0 × 0.0, or simply as −0.0. In IEEE 754 decimal floating-point formats, a negative zero is represented by an exponent being any valid exponent in the range for the format, the true significand being zero, and the sign bit being one.

The IEEE 754 floating-point standard specifies the behavior of positive zero and negative zero under various operations. The outcome may depend on the current IEEE rounding mode settings. In systems that include both signed and unsigned zeros, the notation 0⁺ and 0⁻ is sometimes used for signed zeros.
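The encodings described above can be inspected directly. The following is a minimal Python sketch (standard library only) that prints the bit patterns involved; the variable names and the 8-bit integer examples are illustrative assumptions rather than anything prescribed by the formats themselves.

```python
import struct

# 8-bit sign-and-magnitude: negative zero is 1000 0000; positive zero is 0000 0000.
SIGN_MAGNITUDE_NEG_ZERO = 0b1000_0000
# 8-bit ones' complement: negative zero is 1111 1111 (bitwise NOT of 0000 0000).
ONES_COMPLEMENT_NEG_ZERO = 0b1111_1111

print(f"sign-and-magnitude -0: {SIGN_MAGNITUDE_NEG_ZERO:08b}")
print(f"ones' complement   -0: {ONES_COMPLEMENT_NEG_ZERO:08b}")

# IEEE 754 binary64 (Python's float): -0.0 has only the sign bit set;
# the biased exponent and the significand are both zero.
for value in (0.0, -0.0):
    bits = struct.unpack(">Q", struct.pack(">d", value))[0]
    print(f"{value!r:>5}: {bits:064b}")
```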
Addition and multiplication are commutative, but there are some special rules that have to be followed, which mean the usual mathematical rules for algebraic simplification may not apply. The = sign below shows the obtained floating-point results (it is not the usual equality operator).

The usual rule for signs is always followed when multiplying or dividing: for example, (−0) × (+0) = −0 and (+0) / (−1) = −0. There are special rules for adding or subtracting signed zero: for example, (−0) + (−0) = −0, whereas (−0) + (+0) = +0 under round-to-nearest. Because of negative zero (and also when the rounding mode is upward or downward), the expressions −(x − y) and (−x) − (−y), for floating-point variables x and y, cannot be replaced by y − x. However, (−0) + x can be replaced by x with rounding to nearest (except when x can be a signaling NaN). Some other special rules apply as well: for example, |−0| = +0, and the square root of −0 is −0.

Division of a non-zero number by zero sets the divide-by-zero flag, and an operation producing a NaN sets the invalid-operation flag. An exception handler is called if enabled for the corresponding flag.

According to the IEEE 754 standard, negative zero and positive zero should compare as equal with the usual (numerical) comparison operators, like the == operators of C and Java. In those languages, special programming tricks may be needed to distinguish the two values. Note: casting to an integral type will not always work, especially on two's complement systems. However, some programming languages may provide alternative comparison operators that do distinguish the two zeros. This is the case, for example, of the equals method in Java's Double wrapper class.[4]

Informally, one may use the notation "−0" for a negative value that was rounded to zero. This notation may be useful when a negative sign is significant; for example, when tabulating Celsius temperatures, where a negative sign means below freezing.

In statistical mechanics, one sometimes uses negative temperatures to describe systems with population inversion, which can be considered to have a temperature greater than positive infinity, because the coefficient of energy in the population distribution function is −1/Temperature. In this context, a temperature of −0 is a (theoretical) temperature larger than any other negative temperature, corresponding to the (theoretical) maximum conceivable extent of population inversion, the opposite extreme to +0.[5]
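As a concrete illustration of the comparison behaviour described above, here is a small Python sketch (standard library only). It plays the same role as the Java Double.equals trick mentioned earlier, using math.copysign to recover the sign; the particular expressions chosen are illustrative assumptions, not part of the cited sources.

```python
import math

neg_zero = -1.0 * 0.0                  # one way to obtain a negative zero
print(neg_zero == 0.0)                 # True: the two zeros compare equal numerically
print(repr(neg_zero), repr(0.0))       # -0.0 0.0: but they are distinct values
print(math.copysign(1.0, neg_zero))    # -1.0: the sign can be recovered explicitly
print((-0.0) + 0.0)                    # 0.0: adding zeros of opposite sign gives +0
                                       #      under round-to-nearest
```

A bit-level comparison, as in the struct example shown earlier, is another reliable way to tell the two zeros apart when a language's numeric comparison treats them as equal.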
https://en.wikipedia.org/wiki/Signed_zero
Thescientific methodis anempiricalmethod for acquiringknowledgethat has been referred to while doingsciencesince at least the 17th century. Historically, it was developed through the centuries from the ancient and medieval world. The scientific method involves carefulobservationcoupled with rigorousskepticism, becausecognitive assumptionscan distort the interpretation of theobservation. Scientific inquiry includes creating a testablehypothesisthroughinductive reasoning, testing it through experiments and statistical analysis, and adjusting or discarding the hypothesis based on the results.[1][2][3] Although procedures vary acrossfields, the underlyingprocessis often similar. In more detail: the scientific method involves makingconjectures(hypothetical explanations), predicting the logical consequences of hypothesis, then carrying out experiments or empirical observations based on those predictions.[4]A hypothesis is a conjecture based on knowledge obtained while seeking answers to the question. Hypotheses can be very specific or broad but must befalsifiable, implying that it is possible to identify a possible outcome of an experiment or observation that conflicts with predictions deduced from the hypothesis; otherwise, the hypothesis cannot be meaningfully tested.[5] While the scientific method is often presented as a fixed sequence of steps, it actually represents a set of general principles. Not all steps take place in everyscientific inquiry(nor to the same degree), and they are not always in the same order.[6][7]Numerous discoveries have not followed the textbook model of the scientific method and chance has played a role, for instance.[8][9][10] The history of the scientific method considers changes in the methodology of scientific inquiry, not thehistory of scienceitself. The development of rules forscientific reasoninghas not been straightforward; the scientific method has been the subject of intense and recurring debate throughout the history of science, and eminent natural philosophers and scientists have argued for the primacy of various approaches to establishing scientific knowledge. Different early expressions ofempiricismand the scientific method can be found throughout history, for instance with the ancientStoics,Aristotle,[11]Epicurus,[12]Alhazen,[A][a][B][i]Avicenna,Al-Biruni,[17][18]Roger Bacon[α], andWilliam of Ockham.[21] In theScientific Revolutionof the 16th and 17th centuries, some of the most important developments were the furthering ofempiricismbyFrancis BaconandRobert Hooke,[22][23]therationalistapproach described byRené Descartes, andinductivism, brought to particular prominence byIsaac Newtonand those who followed him. Experiments were advocated byFrancis Baconand performed byGiambattista della Porta,[24]Johannes Kepler,[25][d]andGalileo Galilei.[β]There was particular development aided by theoretical works by the skepticFrancisco Sanches,[27]by idealists as well as empiricistsJohn Locke,George Berkeley, andDavid Hume.[e]C. S. 
Peirceformulated thehypothetico-deductive modelin the 20th century, and the model has undergone significant revision since.[30] The term "scientific method" emerged in the 19th century, as a result of significant institutional development of science, and terminologies establishing clearboundariesbetween science and non-science, such as "scientist" and "pseudoscience".[31]Throughout the 1830s and 1850s, when Baconianism was popular, naturalists like William Whewell, John Herschel, and John Stuart Mill engaged in debates over "induction" and "facts," and were focused on how to generate knowledge.[31]In the late 19th and early 20th centuries, a debate overrealismvs.antirealismwas conducted as powerful scientific theories extended beyond the realm of the observable.[32] The term "scientific method" came into popular use in the twentieth century;Dewey's 1910 book,How We Think, inspiredpopular guidelines.[33]It appeared in dictionaries and science textbooks, although there was little consensus on its meaning.[31]Although there was growth through the middle of the twentieth century,[f]by the 1960s and 1970s numerous influential philosophers of science such asThomas KuhnandPaul Feyerabendhad questioned the universality of the "scientific method," and largely replaced the notion of science as a homogeneous and universal method with that of it being a heterogeneous and local practice.[31]In particular,Paul Feyerabend, in the 1975 first edition of his bookAgainst Method, argued against there being any universal rules ofscience;[32]Karl Popper,[γ]and Gauch 2003,[6]disagreed with Feyerabend's claim. Later stances include physicistLee Smolin's 2013 essay "There Is No Scientific Method",[35]in which he espouses twoethical principles,[δ]andhistorian of scienceDaniel Thurs' chapter in the 2015 bookNewton's Apple and Other Myths about Science, which concluded that the scientific method is a myth or, at best, an idealization.[36]Asmythsare beliefs,[37]they are subject to thenarrative fallacy, as pointed out by Taleb.[38]PhilosophersRobert Nolaand Howard Sankey, in their 2007 bookTheories of Scientific Method, said that debates over the scientific method continue, and argued that Feyerabend, despite the title ofAgainst Method, accepted certain rules of method and attempted to justify those rules with a meta methodology.[39]Staddon (2017) argues it is a mistake to try following rules in the absence of an algorithmic scientific method; in that case, "science is best understood through examples".[40][41]But algorithmic methods, such asdisproof of existing theory by experimenthave been used sinceAlhacen(1027) and hisBook of Optics,[a]and Galileo (1638) and hisTwo New Sciences,[26]andThe Assayer,[42]which still stand as scientific method. The scientific method is the process by whichscienceis carried out.[43]As in other areas of inquiry, science (through the scientific method) can build on previous knowledge, and unify understanding of its studied topics over time.[g]Historically, the development of the scientific method was critical to theScientific Revolution.[45] The overall process involves making conjectures (hypotheses), predicting their logical consequences, then carrying out experiments based on those predictions to determine whether the original conjecture was correct.[4]However, there are difficulties in a formulaic statement of method. 
Though the scientific method is often presented as a fixed sequence of steps, these actions are more accurately general principles.[46]Not all steps take place in every scientific inquiry (nor to the same degree), and they are not always done in the same order. There are different ways of outlining the basic method used for scientific inquiry. Thescientific communityandphilosophers of sciencegenerally agree on the following classification of method components. These methodological elements and organization of procedures tend to be more characteristic ofexperimental sciencesthansocial sciences. Nonetheless, the cycle of formulating hypotheses, testing and analyzing the results, and formulating new hypotheses, will resemble the cycle described below.The scientific method is an iterative, cyclical process through which information is continually revised.[47][48]It is generally recognized to develop advances in knowledge through the following elements, in varying combinations or contributions:[49][50] Each element of the scientific method is subject topeer reviewfor possible mistakes. These activities do not describe all that scientists do butapply mostly to experimental sciences(e.g., physics, chemistry, biology, and psychology). The elements above are often taught inthe educational systemas "the scientific method".[C] The scientific method is not a single recipe: it requires intelligence, imagination, and creativity.[51]In this sense, it is not a mindless set of standards and procedures to follow but is rather anongoing cycle, constantly developing more useful, accurate, and comprehensive models and methods. For example, when Einstein developed the Special and General Theories of Relativity, he did not in any way refute or discount Newton'sPrincipia. On the contrary, if the astronomically massive, the feather-light, and the extremely fast are removed from Einstein's theories – all phenomena Newton could not have observed – Newton's equations are what remain. Einstein's theories are expansions and refinements of Newton's theories and, thus, increase confidence in Newton's work. An iterative,[48]pragmatic[16]scheme of the four points above is sometimes offered as a guideline for proceeding:[52] The iterative cycle inherent in this step-by-step method goes from point 3 to 6 and back to 3 again. While this schema outlines a typical hypothesis/testing method,[53]many philosophers, historians, and sociologists of science, includingPaul Feyerabend,[h]claim that such descriptions of scientific method have little relation to the ways that science is actually practiced. The basic elements of the scientific method are illustrated by the following example (which occurred from 1944 to 1953) from the discovery of the structure of DNA (marked withand indented). In 1950, it was known thatgenetic inheritancehad a mathematical description, starting with the studies ofGregor Mendel, and that DNA contained genetic information (Oswald Avery'stransforming principle).[55]But the mechanism of storing genetic information (i.e., genes) in DNA was unclear. Researchers inBragg'slaboratory atCambridge UniversitymadeX-raydiffractionpictures of variousmolecules, starting withcrystalsofsalt, and proceeding to more complicated substances. 
Using clues painstakingly assembled over decades, beginning with its chemical composition, it was determined that it should be possible to characterize the physical structure of DNA, and the X-ray images would be the vehicle.[56] The scientific method depends upon increasingly sophisticated characterizations of the subjects of investigation. (Thesubjectscan also be calledunsolved problemsor theunknowns.)[C]For example,Benjamin Franklinconjectured, correctly, thatSt. Elmo's firewaselectricalinnature, but it has taken a long series of experiments and theoretical changes to establish this. While seeking the pertinent properties of the subjects, careful thought may alsoentailsome definitions andobservations; these observations often demand carefulmeasurementsand/or counting can take the form of expansiveempirical research. Ascientific questioncan refer to the explanation of a specificobservation,[C]as in "Why is the sky blue?" but can also be open-ended, as in "How can Idesign a drugto cure this particular disease?" This stage frequently involves finding and evaluating evidence from previous experiments, personal scientific observations or assertions, as well as the work of other scientists. If the answer is already known, a different question that builds on the evidence can be posed. When applying the scientific method to research, determining a good question can be very difficult and it will affect the outcome of the investigation.[57] The systematic, careful collection of measurements or counts of relevant quantities is often the critical difference betweenpseudo-sciences, such as alchemy, and science, such as chemistry or biology. Scientific measurements are usually tabulated, graphed, or mapped, and statistical manipulations, such ascorrelationandregression, performed on them. The measurements might be made in a controlled setting, such as a laboratory, or made on more or less inaccessible or unmanipulatable objects such as stars or human populations. The measurements often require specializedscientific instrumentssuch asthermometers,spectroscopes,particle accelerators, orvoltmeters, and the progress of a scientific field is usually intimately tied to their invention and improvement. I am not accustomed to saying anything with certainty after only one or two observations. The scientific definition of a term sometimes differs substantially from itsnatural languageusage. For example,massandweightoverlap in meaning in common discourse, but have distinct meanings inmechanics. Scientific quantities are often characterized by theirunits of measurewhich can later be described in terms of conventional physical units when communicating the work. New theories are sometimes developed after realizing certain terms have not previously been sufficiently clearly defined. For example,Albert Einstein's first paper onrelativitybegins by definingsimultaneityand the means for determininglength. These ideas were skipped over byIsaac Newtonwith, "I do not definetime, space, place andmotion, as being well known to all." Einstein's paper then demonstrates that they (viz., absolute time and length independent of motion) were approximations.Francis Crickcautions us that when characterizing a subject, however, it can be premature to define something when it remains ill-understood.[59]In Crick's study ofconsciousness, he actually found it easier to studyawarenessin thevisual system, rather than to studyfree will, for example. 
His cautionary example was the gene; the gene was much more poorly understood before Watson and Crick's pioneering discovery of the structure of DNA; it would have been counterproductive to spend much time on the definition of the gene, before them. Linus Paulingproposed that DNA might be atriple helix.[60][61]This hypothesis was also considered byFrancis CrickandJames D. Watsonbut discarded. When Watson and Crick learned of Pauling's hypothesis, they understood from existing data that Pauling was wrong.[62]and that Pauling would soon admit his difficulties with that structure. Ahypothesisis a suggested explanation of a phenomenon, or alternately a reasoned proposal suggesting a possible correlation between or among a set of phenomena. Normally, hypotheses have the form of amathematical model. Sometimes, but not always, they can also be formulated asexistential statements, stating that some particular instance of the phenomenon being studied has some characteristic and causal explanations, which have the general form ofuniversal statements, stating that every instance of the phenomenon has a particular characteristic. Scientists are free to use whatever resources they have – their own creativity, ideas from other fields,inductive reasoning,Bayesian inference, and so on – to imagine possible explanations for a phenomenon under study.Albert Einstein once observed that "there is no logical bridge between phenomena and their theoretical principles."[63][i]Charles Sanders Peirce, borrowing a page fromAristotle(Prior Analytics,2.25)[65]described the incipient stages ofinquiry, instigated by the "irritation of doubt" to venture a plausible guess, asabductive reasoning.[66]: II, p.290The history of science is filled with stories of scientists claiming a "flash of inspiration", or a hunch, which then motivated them to look for evidence to support or refute their idea.Michael Polanyimade such creativity the centerpiece of his discussion of methodology. William Glenobserves that[67] the success of a hypothesis, or its service to science, lies not simply in its perceived "truth", or power to displace, subsume or reduce a predecessor idea, but perhaps more in its ability to stimulate the research that will illuminate ... bald suppositions and areas of vagueness. In general, scientists tend to look for theories that are "elegant" or "beautiful". Scientists often use these terms to refer to a theory that is following the known facts but is nevertheless relatively simple and easy to handle.Occam's Razorserves as a rule of thumb for choosing the most desirable amongst a group of equally explanatory hypotheses. To minimize theconfirmation biasthat results from entertaining a single hypothesis,strong inferenceemphasizes the need for entertaining multiple alternative hypotheses,[68]and avoiding artifacts.[69] James D. Watson,Francis Crick, and others hypothesized that DNA had a helical structure. This implied that DNA's X-ray diffraction pattern would be 'x shaped'.[70][71]This prediction followed from the work of Cochran, Crick and Vand[72](and independently by Stokes). The Cochran-Crick-Vand-Stokes theorem provided a mathematical explanation for the empirical observation that diffraction from helical structures produces x-shaped patterns. 
In their first paper, Watson and Crick also noted that thedouble helixstructure they proposed provided a simple mechanism forDNA replication, writing, "It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material".[73] Any useful hypothesis will enablepredictions, byreasoningincludingdeductive reasoning.[j]It might predict the outcome of an experiment in a laboratory setting or the observation of a phenomenon in nature. The prediction can also be statistical and deal only with probabilities. It is essential that the outcome of testing such a prediction be currently unknown. Only in this case does a successful outcome increase the probability that the hypothesis is true. If the outcome is already known, it is called a consequence and should have already been considered whileformulating the hypothesis. If the predictions are not accessible by observation or experience, the hypothesis is not yettestableand so will remain to that extent unscientific in a strict sense. A new technology or theory might make the necessary experiments feasible. For example, while a hypothesis on the existence of other intelligent species may be convincing with scientifically based speculation, no known experiment can test this hypothesis. Therefore, science itself can have little to say about the possibility. In the future, a new technique may allow for an experimental test and the speculation would then become part of accepted science. For example, Einstein's theory ofgeneral relativitymakes several specific predictions about the observable structure ofspacetime, such as thatlightbends in agravitational field, and that the amount of bending depends in a precise way on the strength of that gravitational field.Arthur Eddington'sobservations made during a 1919 solar eclipsesupported General Relativity rather than Newtoniangravitation.[74] Watson and Crick showed an initial (and incorrect) proposal for the structure of DNA to a team fromKing's College London–Rosalind Franklin,Maurice Wilkins, andRaymond Gosling. Franklin immediately spotted the flaws which concerned the water content. Later Watson saw Franklin'sphoto 51, a detailed X-ray diffraction image, which showed an X-shape[75][76]and was able to confirm the structure was helical.[77][78][k] Once predictions are made, they can be sought by experiments. If the test results contradict the predictions, the hypotheses which entailed them are called into question and become less tenable. Sometimes the experiments are conducted incorrectly or are not very well designed when compared to acrucial experiment. If the experimental results confirm the predictions, then the hypotheses are considered more likely to be correct, but might still be wrong and continue to be subject tofurther testing.Theexperimental controlis a technique for dealing with observational error. This technique uses the contrast between multiple samples, or observations, or populations, under differing conditions, to see what varies or what remains the same. We vary the conditions for the acts of measurement, to help isolate what has changed.Mill's canonscan then help us figure out what the important factor is.[82]Factor analysisis one technique for discovering the important factor in an effect. Depending on the predictions, the experiments can have different shapes. It could be a classical experiment in a laboratory setting, adouble-blindstudy or an archaeologicalexcavation. 
Even taking a plane fromNew YorktoParisis an experiment that tests theaerodynamicalhypotheses used for constructing the plane. These institutions thereby reduce the research function to a cost/benefit,[83]which is expressed as money, and the time and attention of the researchers to be expended,[83]in exchange for a report to their constituents.[84]Current large instruments, such as CERN'sLarge Hadron Collider(LHC),[85]orLIGO,[86]or theNational Ignition Facility(NIF),[87]or theInternational Space Station(ISS),[88]or theJames Webb Space Telescope(JWST),[89][90]entail expected costs of billions of dollars, and timeframes extending over decades. These kinds of institutions affect public policy, on a national or even international basis, and the researchers would require shared access to such machines and theiradjunct infrastructure.[ε][91] Scientists assume an attitude of openness and accountability on the part of those experimenting. Detailed record-keeping is essential, to aid in recording and reporting on the experimental results, and supports the effectiveness and integrity of the procedure. They will also assist in reproducing the experimental results, likely by others. Traces of this approach can be seen in the work ofHipparchus(190–120 BCE), when determining a value for the precession of the Earth, whilecontrolled experimentscan be seen in the works ofal-Battani(853–929 CE)[92]andAlhazen(965–1039 CE).[93][l][b] Watson and Crick then produced their model, using this information along with the previously known information about DNA's composition, especially Chargaff's rules of base pairing.[81]After considerable fruitless experimentation, being discouraged by their superior from continuing, and numerous false starts,[95][96][97]Watson and Crick were able to infer the essential structure ofDNAby concretemodelingof the physical shapesof thenucleotideswhich comprise it.[81][98][99]They were guided by the bond lengths which had been deduced byLinus Paulingand byRosalind Franklin's X-ray diffraction images. The scientific method is iterative. At any stage, it is possible to refine itsaccuracy and precision, so that some consideration will lead the scientist to repeat an earlier part of the process. Failure to develop an interesting hypothesis may lead a scientist to re-define the subject under consideration. Failure of a hypothesis to produce interesting and testable predictions may lead to reconsideration of the hypothesis or of the definition of the subject. Failure of an experiment to produce interesting results may lead a scientist to reconsider the experimental method, the hypothesis, or the definition of the subject. This manner of iteration can span decades and sometimes centuries.Published paperscan be built upon. For example: By 1027,Alhazen, based on his measurements of therefractionof light, was able to deduce thatouter spacewas less dense thanair, that is: "the body of the heavens is rarer than the body of air".[14]In 1079Ibn Mu'adh'sTreatise On Twilightwas able to infer that Earth's atmosphere was 50 miles thick, based onatmospheric refractionof the sun's rays.[m] This is why the scientific method is often represented as circular – new information leads to new characterisations, and the cycle of science continues. Measurements collectedcan be archived, passed onwards and used by others.Other scientists may start their own research andenter the processat any stage. 
They might adopt the characterization and formulate their own hypothesis, or they might adopt the hypothesis and deduce their own predictions. Often the experiment is not done by the person who made the prediction, and the characterization is based on experiments done by someone else. Published results of experiments can also serve as a hypothesis predicting their own reproducibility. Science is a social enterprise, and scientific work tends to be accepted by the scientific community when it has been confirmed. Crucially, experimental and theoretical results must be reproduced by others within the scientific community. Researchers have given their lives for this vision;Georg Wilhelm Richmannwas killed byball lightning(1753) when attempting to replicate the 1752 kite-flying experiment ofBenjamin Franklin.[101] If an experiment cannot berepeatedto produce the same results, this implies that the original results might have been in error. As a result, it is common for a single experiment to be performed multiple times, especially when there are uncontrolled variables or other indications ofexperimental error. For significant or surprising results, other scientists may also attempt to replicate the results for themselves, especially if those results would be important to their own work.[102]Replication has become a contentious issue in social and biomedical science where treatments are administered to groups of individuals. Typically anexperimental groupgets the treatment, such as a drug, and thecontrol groupgets a placebo.John Ioannidisin 2005 pointed out that the method being used has led to many findings that cannot be replicated.[103] The process ofpeer reviewinvolves the evaluation of the experiment by experts, who typically give their opinions anonymously. Some journals request that the experimenter provide lists of possible peer reviewers, especially if the field is highly specialized. Peer review does not certify the correctness of the results, only that, in the opinion of the reviewer, the experiments themselves were sound (based on the description supplied by the experimenter). If the work passes peer review, which occasionally may require new experiments requested by the reviewers, it will be published in a peer-reviewedscientific journal. The specific journal that publishes the results indicates the perceived quality of the work.[n] Scientists typically are careful in recording their data, a requirement promoted byLudwik Fleck(1896–1961) and others.[104]Though not typically required, they might be requested tosupply this datato other scientists who wish to replicate their original results (or parts of their original results), extending to the sharing of any experimental samples that may be difficult to obtain.[105]To protect against bad science and fraudulent data, government research-granting agencies such as theNational Science Foundation, and science journals, includingNatureandScience, have a policy that researchers must archive their data and methods so that other researchers can test the data and methods and build on the research that has gone before.Scientific data archivingcan be done at several national archives in the U.S. or theWorld Data Center. The unfettered principles of science are to strive for accuracy and the creed of honesty; openness already being a matter of degrees. Openness is restricted by the general rigour of scepticism. And of course the matter of non-science. 
Smolin, in 2013, espoused ethical principles rather than giving any potentially limited definition of the rules of inquiry.[δ]His ideas stand in the context of the scale of data–driven andbig science, which has seen increased importance of honesty and consequentlyreproducibility. His thought is that science is a community effort by those who have accreditation and are working within thecommunity. He also warns against overzealous parsimony. Popper previously took ethical principles even further, going as far as to ascribe value to theories only if they were falsifiable. Popper used the falsifiability criterion to demarcate a scientific theory from a theory like astrology: both "explain" observations, but the scientific theory takes the risk of making predictions that decide whether it is right or wrong:[106][107] "Those among us who are unwilling to expose their ideas to the hazard of refutation do not take part in the game of science." Science has limits. Those limits are usually deemed to be answers to questions that aren't in science's domain, such as faith. Science has other limits as well, as it seeks to make true statements about reality.[108]The nature oftruthand the discussion on how scientific statements relate to reality is best left to the article on thephilosophy of sciencehere. More immediately topical limitations show themselves in the observation of reality. It is the natural limitations of scientific inquiry that there is no pure observation as theory is required to interpret empirical data, and observation is therefore influenced by the observer's conceptual framework.[110]As science is an unfinished project, this does lead to difficulties. Namely, that false conclusions are drawn, because of limited information. An example here are the experiments of Kepler and Brahe, used by Hanson to illustrate the concept. Despite observing the same sunrise the two scientists came to different conclusions—theirintersubjectivityleading to differing conclusions.Johannes KeplerusedTycho Brahe's method of observation, which was to project the image of the Sun on a piece of paper through a pinhole aperture, instead of looking directly at the Sun. He disagreed with Brahe's conclusion that total eclipses of the Sun were impossible because, contrary to Brahe, he knew that there were historical accounts of total eclipses. Instead, he deduced that the images taken would become more accurate, the larger the aperture—this fact is now fundamental for optical system design.[d]Another historic example here is thediscovery of Neptune, credited as being found via mathematics because previous observers didn't know what they were looking at.[111] Scientific endeavour can be characterised as the pursuit of truths about the natural world or as the elimination of doubt about the same. The former is the direct construction of explanations from empirical data and logic, the latter the reduction of potential explanations.[ζ]It was establishedabovehow the interpretation of empirical data is theory-laden, so neither approach is trivial. The ubiquitous element in the scientific method isempiricism, which holds that knowledge is created by a process involving observation; scientific theories generalize observations. 
This is in opposition to stringent forms ofrationalism, which holds that knowledge is created by the human intellect; later clarified by Popper to be built on prior theory.[113]The scientific method embodies the position that reason alone cannot solve a particular scientific problem; it unequivocally refutes claims thatrevelation, political or religiousdogma, appeals to tradition, commonly held beliefs, common sense, or currently held theories pose the only possible means of demonstrating truth.[16][80] In 1877,[49]C. S. Peircecharacterized inquiry in general not as the pursuit of truthper sebut as the struggle to move from irritating, inhibitory doubts born of surprises, disagreements, and the like, and to reach a secure belief, the belief being that on which one is prepared to act. Hispragmaticviews framed scientific inquiry as part of a broader spectrum and as spurred, like inquiry generally, by actual doubt, not mere verbal or "hyperbolic doubt", which he held to be fruitless.[o]This "hyperbolic doubt" Peirce argues against here is of course just another name forCartesian doubtassociated withRené Descartes. It is a methodological route to certain knowledge by identifying what can't be doubted. A strong formulation of the scientific method is not always aligned with a form ofempiricismin which the empirical data is put forward in the form of experience or other abstracted forms of knowledge as in current scientific practice the use ofscientific modellingand reliance on abstract typologies and theories is normally accepted. In 2010,Hawkingsuggested that physics' models of reality should simply be accepted where they prove to make useful predictions. He calls the conceptmodel-dependent realism.[116] Rationality embodies the essence of sound reasoning, a cornerstone not only in philosophical discourse but also in the realms of science and practical decision-making. According to the traditional viewpoint, rationality serves a dual purpose: it governs beliefs, ensuring they align with logical principles, and it steers actions, directing them towards coherent and beneficial outcomes. This understanding underscores the pivotal role of reason in shaping our understanding of the world and in informing our choices and behaviours.[117]The following section will first explore beliefs and biases, and then get to the rational reasoning most associated with the sciences. Scientific methodology often directs thathypothesesbe tested incontrolledconditions wherever possible. This is frequently possible in certain areas, such as in the biological sciences, and more difficult in other areas, such as in astronomy. The practice of experimental control and reproducibility can have the effect of diminishing the potentially harmful effects of circumstance, and to a degree, personal bias. For example, pre-existing beliefs can alter the interpretation of results, as inconfirmation bias; this is aheuristicthat leads a person with a particular belief to see things as reinforcing their belief, even if another observer might disagree (in other words, people tend to observe what they expect to observe).[37] [T]he action of thought is excited by the irritation of doubt, and ceases when belief is attained. A historical example is the belief that the legs of agallopinghorse are splayed at the point when none of the horse's legs touch the ground, to the point of this image being included in paintings by its supporters. 
However, the first stop-action pictures of a horse's gallop byEadweard Muybridgeshowed this to be false, and that the legs are instead gathered together.[118] Another important human bias that plays a role is a preference for new, surprising statements (seeAppeal to novelty), which can result in a search for evidence that the new is true.[119]Poorly attested beliefs can be believed and acted upon via a less rigorous heuristic.[120] Goldhaber and Nieto published in 2010 the observation that if theoretical structures with "many closely neighboring subjects are described by connecting theoretical concepts, then the theoretical structure acquires a robustness which makes it increasingly hard – though certainly never impossible – to overturn".[121]When a narrative is constructed its elements become easier to believe.[122][38] Fleck (1979), p. 27 notes "Words and ideas are originally phonetic and mental equivalences of the experiences coinciding with them. ... Such proto-ideas are at first always too broad and insufficiently specialized. ... Once a structurally complete and closed system of opinions consisting of many details and relations has been formed, it offers enduring resistance to anything that contradicts it". Sometimes, these relations have their elements assumeda priori, or contain some other logical or methodological flaw in the process that ultimately produced them.Donald M. MacKayhas analyzed these elements in terms of limits to the accuracy of measurement and has related them to instrumental elements in a category of measurement.[η] The idea of there being two opposed justifications for truth has shown up throughout the history of scientific method as analysis versus synthesis, non-ampliative/ampliative, or even confirmation and verification. (And there are other kinds of reasoning.) One to use what is observed to build towards fundamental truths – and the other to derive from those fundamental truths more specific principles.[123] Deductive reasoning is the building of knowledge based on what has been shown to be true before. It requires the assumption of fact established prior, and, given the truth of the assumptions, a valid deduction guarantees the truth of the conclusion. Inductive reasoning builds knowledge not from established truth, but from a body of observations. It requires stringent scepticism regarding observed phenomena, because cognitive assumptions can distort the interpretation of initial perceptions.[124] An example for how inductive and deductive reasoning works can be found in thehistory of gravitational theory.[p]It took thousands of years of measurements, from theChaldean,Indian,Persian,Greek,Arabic, andEuropeanastronomers, to fully record the motion of planetEarth.[q]Kepler(and others) were then able to build their early theories bygeneralizing the collected data inductively, andNewtonwas able to unify prior theory and measurements into the consequences of hislaws of motionin 1727.[r] Another common example of inductive reasoning is the observation of acounterexampleto current theory inducing the need for new ideas.Le Verrierin 1859 pointed out problems with theperihelionofMercurythat showed Newton's theory to be at least incomplete. The observed difference of Mercury'sprecessionbetween Newtonian theory and observation was one of the things that occurred toEinsteinas a possible early test of histheory of relativity. 
His relativistic calculations matched observation much more closely than Newtonian theory did.[s]Though, today'sStandard Modelof physics suggests that we still do not know at least some of the concepts surrounding Einstein's theory, it holds to this day and is being built on deductively. A theory being assumed as true and subsequently built on is a common example of deductive reasoning. Theory building on Einstein's achievement can simply state that 'we have shown that this case fulfils the conditions under which general/special relativity applies, therefore its conclusions apply also'. If it was properly shown that 'this case' fulfils the conditions, the conclusion follows. An extension of this is the assumption of a solution to an open problem. This weaker kind of deductive reasoning will get used in current research, when multiple scientists or even teams of researchers are all gradually solving specific cases in working towards proving a larger theory. This often sees hypotheses being revised again and again as new proof emerges. This way of presenting inductive and deductive reasoning shows part of why science is often presented as being a cycle of iteration. It is important to keep in mind that that cycle's foundations lie in reasoning, and not wholly in the following of procedure. Claims of scientific truth can be opposed in three ways: by falsifying them, by questioning their certainty, or by asserting the claim itself to be incoherent.[t]Incoherence, here, means internal errors in logic, like stating opposites to be true; falsification is what Popper would have called the honest work of conjecture and refutation[34]— certainty, perhaps, is where difficulties in telling truths from non-truths arise most easily. Measurements in scientific work are usually accompanied by estimates of theiruncertainty.[83]The uncertainty is often estimated by making repeated measurements of the desired quantity. Uncertainties may also be calculated by consideration of the uncertainties of the individual underlying quantities used. Counts of things, such as the number of people in a nation at a particular time, may also have an uncertainty due todata collectionlimitations. Or counts may represent a sample of desired quantities, with an uncertainty that depends upon thesampling methodused and the number of samples taken. In the case of measurement imprecision, there will simply be a 'probable deviation' expressing itself in a study's conclusions. Statistics are different.Inductive statistical generalisationwill take sample data and extrapolate more general conclusions, which has to be justified — and scrutinised. It can even be said that statistical models are only ever useful,but never a complete representation of circumstances. In statistical analysis, expected and unexpected bias is a large factor.[129]Research questions, the collection of data, or the interpretation of results, all are subject to larger amounts of scrutiny than in comfortably logical environments. Statistical models go through aprocess for validation, for which one could even say that awareness of potential biases is more important than the hard logic; errors in logic are easier to find inpeer review, after all.[u]More general, claims to rational knowledge, and especially statistics, have to be put into their appropriate context.[124]Simple statements such as '9 out of 10 doctors recommend' are therefore of unknown quality because they do not justify their methodology. 
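As a small illustration of the point above that measurement uncertainty is often estimated by making repeated measurements, the following Python sketch computes a mean and a standard error of the mean; the readings are invented purely for illustration and carry no particular units.

```python
import statistics

# Hypothetical repeated measurements of the same quantity (units arbitrary).
readings = [9.81, 9.79, 9.82, 9.80, 9.83, 9.78]

mean = statistics.mean(readings)
spread = statistics.stdev(readings)          # sample standard deviation
std_error = spread / len(readings) ** 0.5    # standard error of the mean

print(f"estimate: {mean:.3f} +/- {std_error:.3f}")
```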
Lack of familiarity with statistical methodologies can result in erroneous conclusions. Foregoing the easy example,[v]multiple probabilities interacting is where, for example medical professionals,[131]have shown a lack of proper understanding.Bayes' theoremis the mathematical principle lining out how standing probabilities are adjusted given new information. Theboy or girl paradoxis a common example. In knowledge representation,Bayesian estimation of mutual informationbetweenrandom variablesis a way to measure dependence, independence, or interdependence of the information under scrutiny.[132] Beyond commonly associatedsurvey methodologyoffield research, the concept together withprobabilistic reasoningis used to advance fields of science where research objects have no definitive states of being. For example, instatistical mechanics. Thehypothetico-deductive model, or hypothesis-testing method, or "traditional" scientific method is, as the name implies, based on the formation ofhypothesesand their testing viadeductive reasoning. A hypothesis stating implications, often calledpredictions, that are falsifiable via experiment is of central importance here, as not the hypothesis but its implications are what is tested.[133]Basically, scientists will look at the hypothetical consequences a (potential)theoryholds and prove or disprove those instead of the theory itself. If anexperimentaltest of those hypothetical consequences shows them to be false, it follows logically that the part of the theory that implied them was false also. If they show as true however, it does not prove the theory definitively. Thelogicof this testing is what affords this method of inquiry to be reasoned deductively. The formulated hypothesis is assumed to be 'true', and from that 'true' statement implications are inferred. If the following tests show the implications to be false, it follows that the hypothesis was false also. If test show the implications to be true, new insights will be gained. It is important to be aware that a positive test here will at best strongly imply but not definitively prove the tested hypothesis, as deductive inference (A ⇒ B) is not equivalent like that; only (¬B ⇒ ¬A) is valid logic. Their positive outcomes however, as Hempel put it, provide "at least some support, some corroboration or confirmation for it".[134]This is whyPopperinsisted on fielded hypotheses to be falsifieable, as successful tests imply very little otherwise. AsGilliesput it, "successful theories are those that survive elimination through falsification".[133] Deductive reasoning in this mode of inquiry will sometimes be replaced byabductive reasoning—the search for the most plausible explanation via logical inference. For example, in biology, where general laws are few,[133]as valid deductions rely on solid presuppositions.[124] Theinductivist approachto deriving scientific truth first rose to prominence withFrancis Baconand particularly withIsaac Newtonand those who followed him.[135]After the establishment of the HD-method, it was often put aside as something of a "fishing expedition" though.[133]It is still valid to some degree, but today's inductive method is often far removed from the historic approach—the scale of the data collected lending new effectiveness to the method. It is most-associated with data-mining projects or large-scale observation projects. 
In both these cases, it is often not at all clear what the results of proposed experiments will be, and thus knowledge will arise after the collection of data through inductive reasoning.[r] Where the traditional method of inquiry does both, the inductive approach usually formulates only aresearch question, not a hypothesis. Following the initial question instead, a suitable "high-throughput method" of data-collection is determined, the resulting data processed and 'cleaned up', and conclusions drawn after. "This shift in focus elevates the data to the supreme role of revealing novel insights by themselves".[133] The advantage the inductive method has over methods formulating a hypothesis that it is essentially free of "a researcher's preconceived notions" regarding their subject. On the other hand, inductive reasoning is always attached to a measure of certainty, as all inductively reasoned conclusions are.[133]This measure of certainty can reach quite high degrees, though. For example, in the determination of largeprimes, which are used inencryption software.[136] Mathematical modelling, or allochthonous reasoning, typically is the formulation of a hypothesis followed by building mathematical constructs that can be tested in place of conducting physical laboratory experiments. This approach has two main factors: simplification/abstraction and secondly a set of correspondence rules. The correspondence rules lay out how the constructed model will relate back to reality-how truth is derived; and the simplifying steps taken in the abstraction of the given system are to reduce factors that do not bear relevance and thereby reduce unexpected errors.[133]These steps can also help the researcher in understanding the important factors of the system, how far parsimony can be taken until the system becomes more and more unchangeable and thereby stable. Parsimony and related principles are further exploredbelow. Once this translation into mathematics is complete, the resulting model, in place of the corresponding system, can be analysed through purely mathematical and computational means. The results of this analysis are of course also purely mathematical in nature and get translated back to the system as it exists in reality via the previously determined correspondence rules—iteration following review and interpretation of the findings. The way such models are reasoned will often be mathematically deductive—but they don't have to be. An example here areMonte-Carlo simulations. These generate empirical data "arbitrarily", and, while they may not be able to reveal universal principles, they can nevertheless be useful.[133] Scientific inquiry generally aims to obtainknowledgein the form oftestable explanations[137][79]that scientists can use topredictthe results of future experiments. This allows scientists to gain a better understanding of the topic under study, and later to use that understanding to intervene in its causal mechanisms (such as to cure disease). The better an explanation is at making predictions, the more useful it frequently can be, and the more likely it will continue to explain a body of evidence better than its alternatives. 
The most successful explanations – those that explain and make accurate predictions in a wide range of circumstances – are often calledscientific theories.[C] Most experimental results do not produce large changes in human understanding; improvements in theoretical scientific understanding typically result from a gradual process of development over time, sometimes across different domains of science.[138]Scientific models vary in the extent to which they have been experimentally tested and for how long, and in their acceptance in the scientific community. In general, explanations become accepted over time as evidence accumulates on a given topic, and the explanation in question proves more powerful than its alternatives at explaining the evidence. Often subsequent researchers re-formulate the explanations over time, or combined explanations to produce new explanations. Scientific knowledge is closely tied toempirical findingsand can remain subject tofalsificationif new experimental observations are incompatible with what is found. That is, no theory can ever be considered final since new problematic evidence might be discovered. If such evidence is found, a new theory may be proposed, or (more commonly) it is found that modifications to the previous theory are sufficient to explain the new evidence. The strength of a theory relates to how long it has persisted without major alteration to its core principles. Theories can also become subsumed by other theories. For example, Newton's laws explained thousands of years of scientific observations of the planetsalmost perfectly. However, these laws were then determined to be special cases of a more general theory (relativity), which explained both the (previously unexplained) exceptions to Newton's laws and predicted and explained other observations such as the deflection oflightbygravity. Thus, in certain cases independent, unconnected, scientific observations can be connected, unified by principles of increasing explanatory power.[139][121] Since new theories might be more comprehensive than what preceded them, and thus be able to explain more than previous ones, successor theories might be able to meet a higher standard by explaining a larger body of observations than their predecessors.[139]For example, the theory ofevolutionexplains thediversity of life on Earth, how species adapt to their environments, and many otherpatternsobserved in the natural world;[140][141]its most recent major modification was unification withgeneticsto form themodern evolutionary synthesis. In subsequent modifications, it has also subsumed aspects of many other fields such asbiochemistryandmolecular biology. During the course of history, one theory has succeeded another, and some have suggested further work while others have seemed content just to explain the phenomena. The reasons why one theory has replaced another are not always obvious or simple. The philosophy of science includes the question:What criteria are satisfied by a 'good' theory. This question has a long history, and many scientists, as well as philosophers, have considered it. The objective is to be able to choose one theory as preferable to another without introducingcognitive bias.[142]Though different thinkers emphasize different aspects,[ι]a good theory: In trying to look for such theories, scientists will, given a lack of guidance by empirical evidence, try to adhere to: The goal here is to make the choice between theories less arbitrary. 
Nonetheless, these criteria contain subjective elements, and should be considered heuristics rather than definitive rules.[κ] Also, criteria such as these do not necessarily decide between alternative theories. Quoting Bird:[148] "[Such criteria] cannot determine scientific choice. First, which features of a theory satisfy these criteria may be disputable (e.g. does simplicity concern the ontological commitments of a theory or its mathematical form?). Secondly, these criteria are imprecise, and so there is room for disagreement about the degree to which they hold. Thirdly, there can be disagreement about how they are to be weighted relative to one another, especially when they conflict." It also is debatable whether existing scientific theories satisfy all these criteria, which may represent goals not yet achieved. For example, explanatory power over all existing observations is satisfied by no one theory at the moment.[149][150] The desiderata of a "good" theory have been debated for centuries, going back perhaps even earlier than Occam's razor,[w] which is often taken as an attribute of a good theory. Science tries to be simple. When gathered data supports multiple explanations, the simplest explanation for phenomena or the simplest formulation of a theory is recommended by the principle of parsimony.[151] Scientists go as far as to call simple proofs of complex statements beautiful. We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances. The concept of parsimony should not be held to imply complete frugality in the pursuit of scientific truth. The general process starts at the opposite end, with a vast number of potential explanations and general disorder. An example can be seen in Paul Krugman's process, in which he makes a point of daring "to be silly". He writes that in his work on new theories of international trade he reviewed prior work with an open frame of mind and broadened his initial viewpoint even in unlikely directions. Once he had a sufficient body of ideas, he would try to simplify and thus find what worked among what did not. Specific to Krugman here was to "question the question". He recognised that prior work had applied erroneous models to already present evidence, commenting that "intelligent commentary was ignored",[152] thus touching on the need to bridge the common bias against other circles of thought.[153] Occam's razor might fall under the heading of "simple elegance", but it is arguable that parsimony and elegance pull in different directions. Introducing additional elements could simplify theory formulation, whereas simplifying a theory's ontology might lead to increased syntactical complexity.[147] Sometimes ad-hoc modifications of a failing idea may also be dismissed as lacking "formal elegance". This appeal to what may be called the "aesthetic" is hard to characterise, but it is essentially about a sort of familiarity.
However, arguments based on "elegance" are contentious, and over-reliance on familiarity will breed stagnation.[144] Principles of invariance have been a theme in scientific writing, and especially physics, since at least the early 20th century.[θ] The basic idea here is that good structures to look for are those independent of perspective, an idea that featured earlier, for example, in Mill's Methods of difference and agreement – methods that would be referred back to in the context of contrast and invariance.[154] But as tends to be the case, there is a difference between something being a basic consideration and something being given weight. Principles of invariance have only been given weight in the wake of Einstein's theories of relativity, which reduced everything to relations and were thereby fundamentally unable to be varied.[155][x] As David Deutsch put it in 2009: "the search for hard-to-vary explanations is the origin of all progress".[146] An example here can be found in one of Einstein's thought experiments. That of a lab suspended in empty space is an example of a useful invariant observation. He imagined the absence of gravity and an experimenter free-floating in the lab. If an entity now pulls the lab upwards, accelerating it uniformly, the experimenter perceives the resulting force as gravity, whereas the entity feels the work needed to accelerate the lab continuously.[x] Through this experiment Einstein was able to equate gravitational and inertial mass, something unexplained by Newton's laws and an early but "powerful argument for a generalised postulate of relativity".[156] The feature which suggests reality is always some kind of invariance of a structure independent of the aspect, the projection. The discussion of invariance in physics is often conducted in the more specific context of symmetry.[155] The Einstein example above, in the parlance of Mill, would be an agreement between two values. In the context of invariance, it is a variable that remains unchanged through some kind of transformation or change in perspective. A discussion focused on symmetry would view the two perspectives as systems that share a relevant aspect and are therefore symmetrical. Related principles here are falsifiability and testability. The opposite of something being hard to vary is a theory that resists falsification – a frustration expressed colourfully by Wolfgang Pauli as such theories being "not even wrong". The importance of scientific theories being falsifiable finds especial emphasis in the philosophy of Karl Popper. The broader view here is testability, since it includes the former and allows for additional practical considerations.[157][158] Philosophy of science looks at the underpinning logic of the scientific method, at what separates science from non-science, and at the ethic that is implicit in science. There are basic assumptions, derived from philosophy by at least one prominent scientist,[D][159] that form the base of the scientific method – namely, that reality is objective and consistent, that humans have the capacity to perceive reality accurately, and that rational explanations exist for elements of the real world.[159] These assumptions from methodological naturalism form a basis on which science may be grounded. Logical positivist, empiricist, falsificationist, and other theories have criticized these assumptions and given alternative accounts of the logic of science, but each has also itself been criticized.
There are several kinds of modern philosophical conceptualizations and attempted definitions of the method of science.[λ] One is attempted by the unificationists, who argue for the existence of a unified definition that is useful (or at least 'works' in every context of science). Another comes from the pluralists, who argue that the sciences are too fractured for a universal definition of method to be useful. And there are those who argue that the very attempt at definition is already detrimental to the free flow of ideas. Additionally, there have been views on the social framework in which science is done, and on the impact of the sciences' social environment on research. Also, there is 'scientific method' as popularised by Dewey in How We Think (1910) and Karl Pearson in Grammar of Science (1892), as used in a fairly uncritical manner in education. Scientific pluralism is a position within the philosophy of science that rejects various proposed unities of scientific method and subject matter. Scientific pluralists hold that science is not unified in one or more of the following ways: the metaphysics of its subject matter, the epistemology of scientific knowledge, or the research methods and models that should be used. Some pluralists believe that pluralism is necessary due to the nature of science. Others say that since scientific disciplines already vary in practice, there is no reason to believe this variation is wrong until a specific unification is empirically proven. Finally, some hold that pluralism should be allowed for normative reasons, even if unity were possible in theory. Unificationism, in science, was a central tenet of logical positivism.[161][162] Different logical positivists construed this doctrine in several different ways, e.g. as a reductionist thesis, that the objects investigated by the special sciences reduce to the objects of a common, putatively more basic domain of science, usually thought to be physics; as the thesis that all theories and results of the various sciences can or ought to be expressed in a common language or "universal slang"; or as the thesis that all the special sciences share a common scientific method.[y] Development of the idea has been troubled by accelerated advancement in technology that has opened up many new ways to look at the world. The fact that the standards of scientific success shift with time does not only make the philosophy of science difficult; it also raises problems for the public understanding of science. We do not have a fixed scientific method to rally around and defend. Paul Feyerabend examined the history of science, and was led to deny that science is genuinely a methodological process. In his 1975 book Against Method he argued that no description of scientific method could possibly be broad enough to include all the approaches and methods used by scientists, and that there are no useful and exception-free methodological rules governing the progress of science. In essence, he said that for any specific method or norm of science, one can find a historic episode where violating it has contributed to the progress of science. He jokingly suggested that, if believers in the scientific method wish to express a single universally valid rule, it should be 'anything goes'.[164] As has been argued before him, however, this is uneconomic; problem solvers and researchers are to be prudent with their resources during their inquiry.[E] A more general inference against formalised method has been found through research involving interviews with scientists regarding their conception of method.
This research indicated that scientists frequently encounter difficulty in determining whether the available evidence supports their hypotheses. This reveals that there are no straightforward mappings between overarching methodological concepts and precise strategies to direct the conduct of research.[166] In science education, the idea of a general and universal scientific method has been notably influential, and numerous studies (in the US) have shown that this framing of method often forms part of both students' and teachers' conception of science.[167][168] This convention of traditional education has been argued against by scientists, as there is a consensus that education's sequential elements and unified view of scientific method do not reflect how scientists actually work.[169][170][171] Major organizations of scientists such as the American Association for the Advancement of Science (AAAS) consider the sciences to be a part of the liberal arts traditions of learning, and proper understanding of science includes understanding of philosophy and history, not just science in isolation.[172] How the sciences make knowledge has been taught in the context of "the" scientific method (singular) since the early 20th century. Various systems of education, including but not limited to the US, have taught the method of science as a process or procedure, structured as a definitive series of steps:[176] observation, hypothesis, prediction, experiment. This version of the method of science has been a long-established standard in primary and secondary education, as well as the biomedical sciences.[178] It has long been held to be an inaccurate idealisation of how some scientific inquiries are structured.[173] The taught presentation of science had to defend itself against a number of criticisms.[179] The scientific method no longer features in the standards for US education of 2013 (NGSS) that replaced those of 1996 (NRC). They, too, influenced international science education,[179] and the standards measured for have since shifted from the singular hypothesis-testing method to a broader conception of scientific methods.[181] These scientific methods, which are rooted in scientific practices and not epistemology, are described as the three dimensions of scientific and engineering practices, crosscutting concepts (interdisciplinary ideas), and disciplinary core ideas.[179] The scientific method, as a result of simplified and universal explanations, is often held to have reached a kind of mythological status; it serves as a tool for communication or, at best, an idealisation.[36][170] Education's approach was heavily influenced by John Dewey's How We Think (1910).[33] Van der Ploeg (2016) indicated that Dewey's views on education had long been used to further an idea of citizen education removed from "sound education", claiming that references to Dewey in such arguments were undue interpretations (of Dewey).[182] The sociology of knowledge is a concept in the discussion around scientific method, claiming the underlying method of science to be sociological. King explains that sociology distinguishes here between the system of ideas that govern the sciences through an inner logic, and the social system in which those ideas arise.[μ][i] A perhaps accessible lead into what is claimed is Fleck's thought, echoed in Kuhn's concept of normal science. According to Fleck, scientists' work is based on a thought-style that cannot be rationally reconstructed.
It gets instilled through the experience of learning, and science is then advanced based on a tradition of shared assumptions held by what he called thought collectives. Fleck also claims this phenomenon to be largely invisible to members of the group.[186] Comparably, following the field research in an academic scientific laboratory by Latour and Woolgar, Karin Knorr Cetina has conducted a comparative study of two scientific fields (namely high energy physics and molecular biology) to conclude that the epistemic practices and reasonings within both scientific communities are different enough to introduce the concept of "epistemic cultures", in contradiction with the idea that a so-called "scientific method" is unique and a unifying concept.[187][z] On the idea of Fleck's thought collectives, sociologists built the concept of situated cognition (that the perspective of the researcher fundamentally affects their work), as well as more radical views. Norwood Russell Hanson, alongside Thomas Kuhn and Paul Feyerabend, extensively explored the theory-laden nature of observation in science. Hanson introduced the concept in 1958, emphasizing that observation is influenced by the observer's conceptual framework. He used the concept of gestalt to show how preconceptions can affect both observation and description, and illustrated this with examples like the initial rejection of Golgi bodies as an artefact of staining technique, and the differing interpretations of the same sunrise by Tycho Brahe and Johannes Kepler, where the same intersubjectively available observation led to different conclusions.[110][d] Kuhn and Feyerabend acknowledged Hanson's pioneering work,[191][192] although Feyerabend's views on methodological pluralism were more radical. Criticisms like those from Kuhn and Feyerabend prompted discussions leading to the development of the strong programme, a sociological approach that seeks to explain scientific knowledge without recourse to the truth or validity of scientific theories. It examines how scientific beliefs are shaped by social factors such as power, ideology, and interests. The postmodernist critiques of science have themselves been the subject of intense controversy. This ongoing debate, known as the science wars, is the result of conflicting values and assumptions between postmodernist and realist perspectives. Postmodernists argue that scientific knowledge is merely a discourse, devoid of any claim to fundamental truth. In contrast, realists within the scientific community maintain that science uncovers real and fundamental truths about reality. Many books have been written by scientists which take on this problem and challenge the assertions of the postmodernists while defending science as a legitimate way of deriving truth.[193] Somewhere between 33% and 50% of all scientific discoveries are estimated to have been stumbled upon, rather than sought out. This may explain why scientists so often express that they were lucky.[9] Scientists themselves in the 19th and 20th centuries acknowledged the role of fortunate luck or serendipity in discoveries.[10] Louis Pasteur is credited with the famous saying that "Luck favours the prepared mind", but some psychologists have begun to study what it means to be 'prepared for luck' in the scientific context.
Research is showing that scientists are taught various heuristics that tend to harness chance and the unexpected.[9][195] This is what Nassim Nicholas Taleb calls "Anti-fragility"; while some systems of investigation are fragile in the face of human error, human bias, and randomness, the scientific method is more than resistant or tough – it actually benefits from such randomness in many ways (it is anti-fragile). Taleb believes that the more anti-fragile the system, the more it will flourish in the real world.[196] Psychologist Kevin Dunbar says the process of discovery often starts with researchers finding bugs in their experiments. These unexpected results lead researchers to try to fix what they think is an error in their method. Eventually, the researcher decides the error is too persistent and systematic to be a coincidence. The highly controlled, cautious, and curious aspects of the scientific method are thus what make it well suited for identifying such persistent systematic errors. At this point, the researcher will begin to think of theoretical explanations for the error, often seeking the help of colleagues across different domains of expertise.[9][195] When the scientific method employs statistics as a key part of its arsenal, there are mathematical and practical issues that can have a deleterious effect on the reliability of the output of scientific methods. This is described in a popular 2005 scientific paper "Why Most Published Research Findings Are False" by John Ioannidis, which is considered foundational to the field of metascience.[130] Much research in metascience seeks to identify poor use of statistics and improve its use, an example being the misuse of p-values.[197] The points raised are both statistical and economic. Statistically, research findings are less likely to be true when studies are small and when there is significant flexibility in study design, definitions, outcomes, and analytical approaches. Economically, the reliability of findings decreases in fields with greater financial interests, biases, and a high level of competition among research teams. As a result, most research findings are considered false across various designs and scientific fields, particularly in modern biomedical research, which often operates in areas with very low pre- and post-study probabilities of yielding true findings. Nevertheless, despite these challenges, most new discoveries will continue to arise from hypothesis-generating research that begins with low or very low pre-study odds. This suggests that expanding the frontiers of knowledge will depend on investigating areas outside the mainstream, where the chances of success may initially appear slim.[130] Science applied to complex systems can involve elements such as transdisciplinarity, systems theory, control theory, and scientific modelling. In general, the scientific method may be difficult to apply stringently to diverse, interconnected systems and large data sets. In particular, practices used within Big data, such as predictive analytics, may be considered to be at odds with the scientific method,[198] as some of the data may have been stripped of the parameters which might be material in alternative hypotheses for an explanation; the stripped data would thus only serve to support the null hypothesis in the predictive analytics application. Fleck (1979, pp. 38–50) notes that "a scientific discovery remains incomplete without considerations of the social practices that condition it".[199]
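The statistical point above can be made concrete with the positive-predictive-value relation popularised by Ioannidis: the probability that a flagged finding is actually true depends on the pre-study odds R, the type I error rate α, and the power 1 − β. The following is a minimal sketch; the function name and the particular parameter values are illustrative choices, not taken from the paper.

```python
# Hypothetical illustration of the pre-study-odds argument: positive predictive value
# (PPV) of a "statistically significant" finding.
# R     = pre-study odds that a probed relationship is true
# alpha = type I error rate, beta = type II error rate (power = 1 - beta)
def positive_predictive_value(R: float, alpha: float = 0.05, beta: float = 0.2) -> float:
    power = 1.0 - beta
    return (power * R) / (power * R + alpha)

print(positive_predictive_value(R=1.0))   # well-motivated confirmatory study: ~0.94
print(positive_predictive_value(R=0.01))  # exploratory scan of unlikely hypotheses: ~0.14
```

With low pre-study odds, most "positive" findings are false even at conventional error rates, which is the core of the statistical argument summarised above.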
Science is the process of gathering, comparing, and evaluating proposed models against observables. A model can be a simulation, mathematical or chemical formula, or set of proposed steps. Science is like mathematics in that researchers in both disciplines try to distinguish what is known from what is unknown at each stage of discovery. Models, in both science and mathematics, need to be internally consistent and also ought to be falsifiable (capable of disproof). In mathematics, a statement need not yet be proved; at such a stage, that statement would be called a conjecture.[200] Mathematical work and scientific work can inspire each other.[42] For example, the technical concept of time arose in science, and timelessness was a hallmark of a mathematical topic. But today, the Poincaré conjecture has been proved using time as a mathematical concept in which objects can flow (see Ricci flow).[201] Nevertheless, the connection between mathematics and reality (and so science to the extent it describes reality) remains obscure. Eugene Wigner's paper, "The Unreasonable Effectiveness of Mathematics in the Natural Sciences", is a very well-known account of the issue from a Nobel Prize-winning physicist. In fact, some observers (including some well-known mathematicians such as Gregory Chaitin, and others such as Lakoff and Núñez) have suggested that mathematics is the result of practitioner bias and human limitation (including cultural ones), somewhat like the post-modernist view of science.[202] George Pólya's work on problem solving,[203] the construction of mathematical proofs, and heuristic[204][205] show that the mathematical method and the scientific method differ in detail, while nevertheless resembling each other in using iterative or recursive steps. In Pólya's view, understanding involves restating unfamiliar definitions in one's own words, resorting to geometrical figures, and questioning what we know and do not know already; analysis, which Pólya takes from Pappus,[206] involves free and heuristic construction of plausible arguments, working backward from the goal, and devising a plan for constructing the proof; synthesis is the strict Euclidean exposition of step-by-step details[207] of the proof; review involves reconsidering and re-examining the result and the path taken to it. Building on Pólya's work, Imre Lakatos argued that mathematicians actually use contradiction, criticism, and revision as principles for improving their work.[208][ν] In like manner to science, where truth is sought but certainty is not found, in Proofs and Refutations what Lakatos tried to establish was that no theorem of informal mathematics is final or perfect. This means that, in non-axiomatic mathematics, we should not think that a theorem is ultimately true, only that no counterexample has yet been found. Once a counterexample, i.e. an entity contradicting or not explained by the theorem, is found, we adjust the theorem, possibly extending the domain of its validity. This is a continuous way our knowledge accumulates, through the logic and process of proofs and refutations. (However, if axioms are given for a branch of mathematics, this creates a logical system – Wittgenstein 1921 Tractatus Logico-Philosophicus 5.13; Lakatos claimed that proofs from such a system were tautological, i.e. internally logically true, by rewriting forms, as shown by Poincaré, who demonstrated the technique of transforming tautologically true forms (viz.
the Euler characteristic) into or out of forms from homology,[209] or more abstractly, from homological algebra.)[210][211][ν] Lakatos proposed an account of mathematical knowledge based on Pólya's idea of heuristics. In Proofs and Refutations, Lakatos gave several basic rules for finding proofs and counterexamples to conjectures. He thought that mathematical 'thought experiments' are a valid way to discover mathematical conjectures and proofs.[213] Gauss, when asked how he came about his theorems, once replied "durch planmässiges Tattonieren" (through systematic, palpable experimentation).[214]
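Lakatos's running example in Proofs and Refutations is Euler's polyhedron formula V − E + F = 2. A minimal sketch (the vertex, edge and face counts below are standard textbook values, not taken from this article) shows how a counterexample drives the adjustment of a theorem's domain:

```python
# Euler's polyhedron formula V - E + F: equal to 2 for sphere-like polyhedra.
# The toroidal "picture frame" is the classic counterexample that forces the
# conjecture to be restricted to simply connected polyhedra.
def euler_characteristic(vertices: int, edges: int, faces: int) -> int:
    return vertices - edges + faces

print(euler_characteristic(8, 12, 6))    # cube: 2, as conjectured
print(euler_characteristic(4, 6, 4))     # tetrahedron: 2
print(euler_characteristic(16, 32, 16))  # toroidal picture frame: 0, a counterexample
```

Finding the value 0 does not simply refute the conjecture; in Lakatos's account it prompts a refinement of what counts as a polyhedron, extending or restricting the theorem's domain of validity.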
https://en.wikipedia.org/wiki/Scientific_method
Various governments require a certification of voting machines. In the United States there is only a voluntary federal certification for voting machines, and each state has ultimate jurisdiction over certification, though most states currently require national certification for their voting systems.[1] In Germany the Physikalisch-Technische Bundesanstalt was responsible for the certification of voting machines for federal and European elections until 2009. Since the respective law, the Bundeswahlgeräteverordnung ("Federal Voting Machine Ordinance"), is considered to be in contradiction with Germany's Constitution, this responsibility has been suspended. The only machines certified so far are the Nedap ESD1 and ESD2.
https://en.wikipedia.org/wiki/Certification_of_voting_machines
Social construction of technology (SCOT) is a theory within the field of science and technology studies. Advocates of SCOT – that is, social constructivists – argue that technology does not determine human action, but rather that human action shapes technology. They also argue that the ways a technology is used cannot be understood without understanding how that technology is embedded in its social context. SCOT is a response to technological determinism and is sometimes known as technological constructivism. SCOT draws on work done in the constructivist school of the sociology of scientific knowledge, and its subtopics include actor-network theory (a branch of the sociology of science and technology) and historical analysis of sociotechnical systems, such as the work of historian Thomas P. Hughes. Its empirical methods are an adaptation of the Empirical Programme of Relativism (EPOR), which outlines a method of analysis to demonstrate the ways in which scientific findings are socially constructed (see strong program). Leading adherents of SCOT include Wiebe Bijker and Trevor Pinch. SCOT holds that those who seek to understand the reasons for acceptance or rejection of a technology should look to the social world. It is not enough, according to SCOT, to explain a technology's success by saying that it is "the best" – researchers must look at how the criteria of being "the best" are defined and what groups and stakeholders participate in defining them. In particular, they must ask who defines the technical criteria success is measured by, why technical criteria are defined this way, and who is included or excluded. Pinch and Bijker argue that technological determinism is a myth that results when one looks backwards and believes that the path taken to the present was the only possible path. SCOT is not only a theory, but also a methodology: it formalizes the steps and principles to follow when one wants to analyze the causes of technological failures or successes. At the point of its conception, the SCOT approach was partly motivated by the ideas of the strong programme in the sociology of science (Bloor 1973). In their seminal article, Pinch and Bijker refer to the Principle of Symmetry as the most influential tenet of the Sociology of Science, which should be applied in historical and sociological investigations of technology as well. It is strongly connected to Bloor's theory of social causation. The Principle of Symmetry holds that in explaining the origins of scientific beliefs, that is, in assessing the success and failure of models, theories, or experiments, the historian/sociologist should deploy the same kind of explanation in cases of success as in cases of failure. When investigating beliefs, researchers should be impartial to the (a posteriori attributed) truth or falsehood of those beliefs, and the explanations should be unbiased. The strong programme adopts a position of relativism or neutralism regarding the arguments that social actors put forward for the acceptance/rejection of any technology.
All arguments (social, cultural, political, economic, as well as technical) are to be treated equally.[1] The symmetry principle addresses the problem that the historian is tempted to explain the success of successful theories by referring to their "objective truth" or inherent "technical superiority", whereas s/he is more likely to put forward sociological explanations (citing political influence or economic reasons) only in the case of failures. For example, having experienced the obvious success of the chain-driven bicycle for decades, it is tempting to attribute its success to its "advanced technology" compared to the "primitiveness" of the Penny Farthing, but if we look closely and symmetrically at their history (as Pinch and Bijker do), we can see that at the beginning bicycles were valued according to quite different standards than nowadays. The early adopters (predominantly young, well-to-do gentlemen) valued the speed, the thrill, and the spectacularity of the Penny Farthing – in contrast to the security and stability of the chain-driven Safety Bicycle. Many other social factors (e.g., the contemporary state of urbanism and transport, women's clothing habits and feminism) have influenced and changed the relative valuations of bicycle models. A weak reading of the Principle of Symmetry points out that there often are many competing theories or technologies, which all have the potential to provide slightly different solutions to similar problems. In these cases, sociological factors tip the balance between them: that is why we should pay equal attention to them. A strong, social constructivist reading would add that even the emergence of the questions or problems to be solved is governed by social determinations, so the Principle of Symmetry is applicable even to apparently purely technical issues. The Empirical Programme of Relativism (EPOR) introduced the SCOT theory in two stages.[2] The first stage of the SCOT research methodology is to reconstruct the alternative interpretations of the technology, analyze the problems and conflicts these interpretations give rise to, and connect them to the design features of the technological artifacts. The relations between groups, problems, and designs can be visualized in diagrams. Interpretative flexibility means that each technological artifact has different meanings and interpretations for various groups. Bijker and Pinch show that the air tire of the bicycle meant a more convenient mode of transportation for some people, whereas it meant technical nuisances, traction problems and ugly aesthetics to others. In racing, air tires lent greater speed.[3] These alternative interpretations generate different problems to be solved. For the bicycle, it means deciding how features such as aesthetics, convenience, and speed should be prioritized. It also involves tradeoffs, such as between traction and speed. The most basic relevant groups are the users and the producers of the technological artifact, but most often many subgroups can be delineated – users with different socioeconomic status, competing producers, etc. Sometimes there are relevant groups who are neither users nor producers of the technology, for example, journalists, politicians, and civil organizations. Trevor Pinch has argued that the salespeople of technology should also be included in the study of technology.[4] The groups can be distinguished based on their shared or diverging interpretations of the technology in question.
Just as technologies have different meanings in different social groups, there are always multiple ways of constructing technologies. A particular design is only a single point in the large field of technical possibilities, reflecting the interpretations of certain relevant groups. The different interpretations often give rise to conflicts between criteria that are hard to resolve technologically (e.g., in the case of the bicycle, one such problem was how a woman could ride the bicycle in a skirt while still adhering to standards of decency), or conflicts between the relevant groups (the "Anti-cyclists" lobbied for the banning of the bicycles). Different groups in different societies construct different problems, leading to different designs. The second stage of the SCOT methodology is to show how closure is achieved. Over time, as technologies are developed, the interpretative and design flexibility collapse through closure mechanisms. Two examples of closure mechanisms: Closure is not permanent. New social groups may form and reintroduce interpretative flexibility, causing a new round of debate or conflict about a technology. (For instance, in the 1890s automobiles were seen as the "green" alternative, a cleaner environmentally-friendly technology, to horse-powered vehicles; by the 1960s, new social groups had introduced new interpretations about the environmental effects of the automobile, eliciting the opposite conclusion.) Many other historians and sociologists of technology extended the original SCOT theory. This is often considered the third stage of the original theory. For example, Paul N. Edwards shows in his book "The Closed World: Computers and the Politics of Discourse in Cold War America"[5]the strong relations between the political discourse of the Cold War and the computer designs of this era. In 1993,Langdon Winnerpublished a critique of SCOT entitled "Upon Opening the Black Box and Finding it Empty: Social Constructivism and the Philosophy of Technology."[6]In it, he argues that social constructivism is an overly narrow research program. He identifies the following specific limitations in social constructivism: Other critics includeStewart Russellwith his letter in the journalSocial Studies of Sciencetitled "The Social Construction of Artifacts: A Response to Pinch and Bijker". Deborah Deliyannis,Hendrik Dey, and Paolo Squatriti criticize the concept of social construction of technology for being afalse dichotomywith a technologically deterministstraw manthat ignores third, fourth and more alternatives, as well as for overlooking the process of how the technology is developed as something that can work. For example, accounting for which groups would have interests in awindmillcannot explain how a windmill is practically constructed, nor does it account for the difference between having the knowledge but for some reason not using it and lacking the knowledge altogether. 
This distinction between knowledge that has not yet been invented and knowledge that is merely prevented from being used by commercial, bureaucratic or other socially constructed factors (a distinction SCOT is argued to overlook) is held to explain the archaeological evidence of rich technological cultures in the aftermath of the collapse of civilizations, such as early medieval technology after the collapse of the Roman Empire, which was much richer than the "Dark Medieval" stereotype depicts. Technology can be remembered even while its use is suppressed, and it retains the potential to be put into use once the artificial repression is removed by societal collapse.[7]
https://en.wikipedia.org/wiki/Social_construction_of_technology
In mathematics, Poinsot's spirals are two spirals represented by the polar equations r = a csch(nθ) and r = a sech(nθ), where csch is the hyperbolic cosecant and sech is the hyperbolic secant.[1] They are named after the French mathematician Louis Poinsot.
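As a quick illustration of these curves, the following sketch samples points of both spirals and converts them to Cartesian coordinates; the parameter values a and n are arbitrary choices for demonstration.

```python
# Sample points on the two Poinsot spirals r = a*csch(n*theta) and r = a*sech(n*theta).
import numpy as np

a, n = 1.0, 0.2
theta = np.linspace(0.1, 6 * np.pi, 500)   # start away from 0, where csch diverges

r_csch = a / np.sinh(n * theta)            # csch(x) = 1/sinh(x)
r_sech = a / np.cosh(n * theta)            # sech(x) = 1/cosh(x)

x1, y1 = r_csch * np.cos(theta), r_csch * np.sin(theta)
x2, y2 = r_sech * np.cos(theta), r_sech * np.sin(theta)

print(x1[:3], y1[:3])                      # first few points of the csch spiral
print(x2[:3], y2[:3])                      # first few points of the sech spiral
```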
https://en.wikipedia.org/wiki/Poinsot%27s_spirals
Inmathematics, afunction spaceis asetoffunctionsbetween two fixed sets. Often, thedomainand/orcodomainwill have additionalstructurewhich is inherited by the function space. For example, the set of functions from any setXinto avector spacehas anaturalvector space structure given bypointwiseaddition and scalar multiplication. In other scenarios, the function space might inherit atopologicalormetricstructure, hence the name functionspace. LetFbe afieldand letXbe any set. The functionsX→Fcan be given the structure of a vector space overFwhere the operations are defined pointwise, that is, for anyf,g:X→F, anyxinX, and anycinF, define(f+g)(x)=f(x)+g(x)(c⋅f)(x)=c⋅f(x){\displaystyle {\begin{aligned}(f+g)(x)&=f(x)+g(x)\\(c\cdot f)(x)&=c\cdot f(x)\end{aligned}}}When the domainXhas additional structure, one might consider instead thesubset(orsubspace) of all such functions which respect that structure. For example, ifVand alsoXitself are vector spaces overF, the set oflinear mapsX→Vform a vector space overFwith pointwise operations (often denotedHom(X,V)). One such space is thedual spaceofX: the set oflinear functionalsX→Fwith addition and scalar multiplication defined pointwise. The cardinaldimensionof a function space with no extra structure can be found by theErdős–Kaplansky theorem. Function spaces appear in various areas of mathematics: Functional analysisis organized around adequate techniques to bring function spaces astopological vector spaceswithin reach of the ideas that would apply tonormed spacesof finite dimension. Here we use the real line as an example domain, but the spaces below exist on suitable open subsetsΩ⊆Rn{\displaystyle \Omega \subseteq \mathbb {R} ^{n}} Ifyis an element of the function spaceC(a,b){\displaystyle {\mathcal {C}}(a,b)}of allcontinuous functionsthat are defined on aclosed interval[a,b], thenorm‖y‖∞{\displaystyle \|y\|_{\infty }}defined onC(a,b){\displaystyle {\mathcal {C}}(a,b)}is the maximumabsolute valueofy(x)fora≤x≤b,[2]‖y‖∞≡maxa≤x≤b|y(x)|wherey∈C(a,b){\displaystyle \|y\|_{\infty }\equiv \max _{a\leq x\leq b}|y(x)|\qquad {\text{where}}\ \ y\in {\mathcal {C}}(a,b)} is called theuniform normorsupremum norm('sup norm').
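A minimal sketch of the pointwise operations and the supremum norm described above; the helper names are illustrative, not a standard library API, and the norm is only approximated on a finite grid.

```python
# Pointwise vector-space structure on functions X -> R, plus a grid approximation
# of the uniform (sup) norm on C[a, b].
from typing import Callable
import math

Func = Callable[[float], float]

def add(f: Func, g: Func) -> Func:
    return lambda x: f(x) + g(x)            # (f + g)(x) = f(x) + g(x)

def scale(c: float, f: Func) -> Func:
    return lambda x: c * f(x)               # (c . f)(x) = c * f(x)

def sup_norm(f: Func, a: float, b: float, samples: int = 10_000) -> float:
    """Approximate the uniform norm max |f(x)| on [a, b] by sampling a grid."""
    step = (b - a) / samples
    return max(abs(f(a + i * step)) for i in range(samples + 1))

h = add(math.sin, scale(0.5, math.cos))     # h(x) = sin(x) + 0.5*cos(x)
print(h(0.0))                               # 0.5
print(sup_norm(h, 0.0, 2 * math.pi))        # approximately sqrt(1.25) ~ 1.118
```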
https://en.wikipedia.org/wiki/Function_space
Ingeometry, theparallel postulateis the fifth postulate inEuclid'sElementsand a distinctiveaxiominEuclidean geometry. It states that, in two-dimensional geometry: If aline segmentintersects two straightlinesforming two interior angles on the same side that are less than tworight angles, then the two lines, if extended indefinitely, meet on that side on which the angles sum to less than two right angles. This postulate does not specifically talk about parallel lines;[1]it is only a postulate related to parallelism. Euclid gave the definition of parallel lines in Book I, Definition 23[2]just before the five postulates.[3] Euclidean geometryis the study of geometry that satisfies all of Euclid's axioms, including the parallel postulate. The postulate was long considered to be obvious or inevitable, but proofs were elusive. Eventually, it was discovered that inverting the postulate gave valid, albeit different geometries. A geometry where the parallel postulate does not hold is known as anon-Euclidean geometry. Geometry that isindependentof Euclid's fifth postulate (i.e., only assumes the modern equivalent of the first four postulates) is known asabsolute geometry(or sometimes "neutral geometry"). Probably the best-known equivalent of Euclid's parallel postulate, contingent on his other postulates, isPlayfair's axiom, named after the ScottishmathematicianJohn Playfair, which states: In a plane, given a line and a point not on it, at most one line parallel to the given line can be drawn through the point.[4] This axiom by itself is notlogically equivalentto the Euclidean parallel postulate since there are geometries in which one is true and the other is not. However, in the presence of the remaining axioms which give Euclidean geometry, one can be used to prove the other, so they are equivalent in the context ofabsolute geometry.[5] Many other statements equivalent to the parallel postulate have been suggested, some of them appearing at first to be unrelated to parallelism, and some seeming soself-evidentthat they wereunconsciouslyassumed by people who claimed to have proven the parallel postulate from Euclid's other postulates. These equivalent statements include: However, the alternatives which employ the word "parallel" cease appearing so simple when one is obliged to explain which of the four common definitions of "parallel" is meant – constant separation, never meeting, same angles where crossed bysomethird line, or same angles where crossed byanythird line – since the equivalence of these four is itself one of the unconsciously obvious assumptions equivalent to Euclid's fifth postulate. In the list above, it is always taken to refer to non-intersecting lines. For example, if the word "parallel" in Playfair's axiom is taken to mean 'constant separation' or 'same angles where crossed by any third line', then it is no longer equivalent to Euclid's fifth postulate, and is provable from the first four (the axiom says 'There is at most one line...', which is consistent with there being no such lines). However, if the definition is taken so that parallel lines are lines that do not intersect, or that have some line intersecting them in the same angles, Playfair's axiom is contextually equivalent to Euclid's fifth postulate and is thus logically independent of the first four postulates. Note that the latter two definitions are not equivalent, because in hyperbolic geometry the second definition holds only forultraparallellines. 
From the beginning, the postulate came under attack as being provable, and therefore not a postulate, and for more than two thousand years, many attempts were made to prove (derive) the parallel postulate using Euclid's first four postulates.[10]The main reason that such a proof was so highly sought after was that, unlike the first four postulates, the parallel postulate is not self-evident. If the order in which the postulates were listed in the Elements is significant, it indicates that Euclid included this postulate only when he realised he could not prove it or proceed without it.[11]Many attempts were made to prove the fifth postulate from the other four, many of them being accepted as proofs for long periods until the mistake was found. Invariably the mistake was assuming some 'obvious' property which turned out to be equivalent to the fifth postulate (Playfair's axiom). Although known from the time of Proclus, this became known as Playfair's Axiom after John Playfair wrote a famous commentary on Euclid in 1795 in which he proposed replacing Euclid's fifth postulate by his own axiom. Today, over two thousand two hundred years later, Euclid's fifth postulate remains a postulate. Proclus(410–485) wrote a commentary onThe Elementswhere he comments on attempted proofs to deduce the fifth postulate from the other four; in particular, he notes thatPtolemyhad produced a false 'proof'. Proclus then goes on to give a false proof of his own. However, he did give a postulate which is equivalent to the fifth postulate. Ibn al-Haytham(Alhazen) (965–1039), anArab mathematician, made an attempt at proving the parallel postulate using aproof by contradiction,[12]in the course of which he introduced the concept ofmotionandtransformationinto geometry.[13]He formulated theLambert quadrilateral, which Boris Abramovich Rozenfeld names the "Ibn al-Haytham–Lambert quadrilateral",[14]and his attempted proof contains elements similar to those found inLambert quadrilateralsandPlayfair's axiom.[15] The Persian mathematician, astronomer, philosopher, and poetOmar Khayyám(1050–1123), attempted to prove the fifth postulate from another explicitly given postulate (based on the fourth of the fiveprinciples due to the Philosopher(Aristotle), namely, "Two convergent straight lines intersect and it is impossible for two convergent straight lines to diverge in the direction in which they converge."[16]He derived some of the earlier results belonging toelliptical geometryandhyperbolic geometry, though his postulate excluded the latter possibility.[17]TheSaccheri quadrilateralwas also first considered by Omar Khayyám in the late 11th century in Book I ofExplanations of the Difficulties in the Postulates of Euclid.[14]Unlike many commentators on Euclid before and after him (includingGiovanni Girolamo Saccheri), Khayyám was not trying to prove the parallel postulate as such but to derive it from his equivalent postulate. He recognized that three possibilities arose from omitting Euclid's fifth postulate; if two perpendiculars to one line cross another line, judicious choice of the last can make the internal angles where it meets the two perpendiculars equal (it is then parallel to the first line). If those equal internal angles are right angles, we get Euclid's fifth postulate, otherwise, they must be either acute or obtuse. He showed that the acute and obtuse cases led to contradictions using his postulate, but his postulate is now known to be equivalent to the fifth postulate. 
Nasir al-Din al-Tusi(1201–1274), in hisAl-risala al-shafiya'an al-shakk fi'l-khutut al-mutawaziya(Discussion Which Removes Doubt about Parallel Lines) (1250), wrote detailed critiques of the parallel postulate and on Khayyám's attempted proof a century earlier. Nasir al-Din attempted to derive a proof by contradiction of the parallel postulate.[18]He also considered the cases of what are now known as elliptical and hyperbolic geometry, though he ruled out both of them.[17] Nasir al-Din's son, Sadr al-Din (sometimes known as "Pseudo-Tusi"), wrote a book on the subject in 1298, based on his father's later thoughts, which presented one of the earliest arguments for a non-Euclidean hypothesis equivalent to the parallel postulate. "He essentially revised both the Euclidean system of axioms and postulates and the proofs of many propositions from theElements."[18][19]His work was published inRomein 1594 and was studied by European geometers. This work marked the starting point for Saccheri's work on the subject[18]which opened with a criticism of Sadr al-Din's work and the work of Wallis.[20] Giordano Vitale(1633–1711), in his bookEuclide restituo(1680, 1686), used the Khayyam-Saccheri quadrilateral to prove that if three points are equidistant on the base AB and the summit CD, then AB and CD are everywhere equidistant.Girolamo Saccheri(1667–1733) pursued the same line of reasoning more thoroughly, correctly obtaining absurdity from the obtuse case (proceeding, like Euclid, from the implicit assumption that lines can be extended indefinitely and have infinite length), but failing to refute the acute case (although he managed to wrongly persuade himself that he had). In 1766Johann Lambertwrote, but did not publish,Theorie der Parallellinienin which he attempted, as Saccheri did, to prove the fifth postulate. He worked with a figure that today we call aLambert quadrilateral, a quadrilateral with three right angles (can be considered half of a Saccheri quadrilateral). He quickly eliminated the possibility that the fourth angle is obtuse, as had Saccheri and Khayyám, and then proceeded to prove many theorems under the assumption of an acute angle. Unlike Saccheri, he never felt that he had reached a contradiction with this assumption. He had proved the non-Euclidean result that the sum of the angles in a triangle increases as the area of the triangle decreases, and this led him to speculate on the possibility of a model of the acute case on a sphere of imaginary radius. He did not carry this idea any further.[21] Where Khayyám and Saccheri had attempted to prove Euclid's fifth by disproving the only possible alternatives, the nineteenth century finally saw mathematicians exploring those alternatives and discovering thelogically consistentgeometries that result. In 1829,Nikolai Ivanovich Lobachevskypublished an account of acute geometry in an obscure Russian journal (later re-published in 1840 in German). In 1831,János Bolyaiincluded, in a book by his father, an appendix describing acute geometry, which, doubtlessly, he had developed independently of Lobachevsky.Carl Friedrich Gausshad also studied the problem, but he did not publish any of his results. Upon hearing of Bolyai's results in a letter from Bolyai's father,Farkas Bolyai, Gauss stated: If I commenced by saying that I am unable to praise this work, you would certainly be surprised for a moment. But I cannot say otherwise. To praise it would be to praise myself. 
Indeed the whole contents of the work, the path taken by your son, the results to which he is led, coincide almost entirely with my meditations, which have occupied my mind partly for the last thirty or thirty-five years.[22] The resulting geometries were later developed by Lobachevsky, Riemann and Poincaré into hyperbolic geometry (the acute case) and elliptic geometry (the obtuse case). The independence of the parallel postulate from Euclid's other axioms was finally demonstrated by Eugenio Beltrami in 1868. Euclid did not postulate the converse of his fifth postulate, which is one way to distinguish Euclidean geometry from elliptic geometry. The Elements contains the proof of an equivalent statement (Book I, Proposition 27): If a straight line falling on two straight lines make the alternate angles equal to one another, the straight lines will be parallel to one another. As De Morgan[23] pointed out, this is logically equivalent to (Book I, Proposition 16). These results do not depend upon the fifth postulate, but they do require the second postulate,[24] which is violated in elliptic geometry. Attempts to logically prove the parallel postulate, rather than the eighth axiom,[25] were criticized by Arthur Schopenhauer in The World as Will and Idea. However, the argument used by Schopenhauer was that the postulate is evident by perception, not that it was not a logical consequence of the other axioms.[26] The parallel postulate is equivalent to the conjunction of the Lotschnittaxiom and of Aristotle's axiom.[27][28] The former states that the perpendiculars to the sides of a right angle intersect, while the latter states that there is no upper bound for the lengths of the distances from the leg of an angle to the other leg. As shown in,[29] the parallel postulate is equivalent to the conjunction of the following incidence-geometric forms of the Lotschnittaxiom and of Aristotle's axiom: Given three parallel lines, there is a line that intersects all three of them. Given a line a and two distinct intersecting lines m and n, each different from a, there exists a line g which intersects a and m, but not n. The splitting of the parallel postulate into the conjunction of these incidence-geometric axioms is possible only in the presence of absolute geometry.[30]
https://en.wikipedia.org/wiki/Parallel_postulate
Aneponymis a person (real or fictitious) from whom something is said to take its name. The word is back-formed from "eponymous", from the Greek "eponymos" meaning "giving name". Here is alist of eponyms:
https://en.wikipedia.org/wiki/List_of_eponyms
Incryptography, aFeistel cipher(also known asLuby–Rackoff block cipher) is asymmetric structureused in the construction ofblock ciphers, named after theGerman-bornphysicistand cryptographerHorst Feistel, who did pioneering research while working forIBM; it is also commonly known as aFeistel network. A large number ofblock ciphersuse the scheme, including the USData Encryption Standard, the Soviet/RussianGOSTand the more recentBlowfishandTwofishciphers. In a Feistel cipher, encryption and decryption are very similar operations, and both consist of iteratively running a function called a "round function" a fixed number of times. Many modern symmetric block ciphers are based on Feistel networks. Feistel networks were first seen commercially in IBM'sLucifercipher, designed byHorst FeistelandDon Coppersmithin 1973. Feistel networks gained respectability when the U.S. Federal Government adopted theDES(a cipher based on Lucifer, with changes made by theNSA) in 1976. Like other components of the DES, the iterative nature of the Feistel construction makes implementing the cryptosystem in hardware easier (particularly on the hardware available at the time of DES's design). A Feistel network uses around function, a function which takes two inputs – a data block and a subkey – and returns one output of the same size as the data block.[1]In each round, the round function is run on half of the data to be encrypted, and its output is XORed with the other half of the data. This is repeated a fixed number of times, and the final output is the encrypted data. An important advantage of Feistel networks compared to other cipher designs such assubstitution–permutation networksis that the entire operation is guaranteed to be invertible (that is, encrypted data can be decrypted), even if the round function is not itself invertible. The round function can be made arbitrarily complicated, since it does not need to be designed to be invertible.[2]: 465[3]: 347Furthermore, theencryptionanddecryptionoperations are very similar, even identical in some cases, requiring only a reversal of thekey schedule. Therefore, the size of the code or circuitry required to implement such a cipher is nearly halved. Unlike substitution-permutation networks, Feistel networks also do not depend on a substitution box that could cause timing side-channels in software implementations. The structure and properties of Feistel ciphers have been extensively analyzed bycryptographers. Michael LubyandCharles Rackoffanalyzed the Feistel cipher construction and proved that if the round function is a cryptographically securepseudorandom function, withKiused as the seed, then 3 rounds are sufficient to make the block cipher apseudorandom permutation, while 4 rounds are sufficient to make it a "strong" pseudorandom permutation (which means that it remains pseudorandom even to an adversary who getsoracleaccess to its inverse permutation).[4]Because of this very important result of Luby and Rackoff, Feistel ciphers are sometimes called Luby–Rackoff block ciphers. Further theoretical work has generalized the construction somewhat and given more precise bounds for security.[5][6] LetF{\displaystyle \mathrm {F} }be the round function and letK0,K1,…,Kn{\displaystyle K_{0},K_{1},\ldots ,K_{n}}be the sub-keys for the rounds0,1,…,n{\displaystyle 0,1,\ldots ,n}respectively. Then the basic operation is as follows: Split the plaintext block into two equal pieces: (L0{\displaystyle L_{0}},R0{\displaystyle R_{0}}). 
For each round i = 0, 1, …, n, compute L_{i+1} = R_i and R_{i+1} = L_i ⊕ F(R_i, K_i), where ⊕ means XOR. Then the ciphertext is (R_{n+1}, L_{n+1}). Decryption of a ciphertext (R_{n+1}, L_{n+1}) is accomplished by computing R_i = L_{i+1} and L_i = R_{i+1} ⊕ F(L_{i+1}, K_i) for i = n, n−1, …, 0. Then (L_0, R_0) is the plaintext again. The diagram illustrates both encryption and decryption. Note the reversal of the subkey order for decryption; this is the only difference between encryption and decryption. Unbalanced Feistel ciphers use a modified structure where L_0 and R_0 are not of equal lengths.[7] The Skipjack cipher is an example of such a cipher. The Texas Instruments digital signature transponder uses a proprietary unbalanced Feistel cipher to perform challenge–response authentication.[8] The Thorp shuffle is an extreme case of an unbalanced Feistel cipher in which one side is a single bit. This has better provable security than a balanced Feistel cipher but requires more rounds.[9] The Feistel construction is also used in cryptographic algorithms other than block ciphers. For example, the optimal asymmetric encryption padding (OAEP) scheme uses a simple Feistel network to randomize ciphertexts in certain asymmetric-key encryption schemes. A generalized Feistel algorithm can be used to create strong permutations on small domains of size not a power of two (see format-preserving encryption).[9] Whether the entire cipher is a Feistel cipher or not, Feistel-like networks can be used as a component of a cipher's design. For example, MISTY1 is a Feistel cipher using a three-round Feistel network in its round function, Skipjack is a modified Feistel cipher using a Feistel network in its G permutation, and Threefish (part of Skein) is a non-Feistel block cipher that uses a Feistel-like MIX function.
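The recurrences above translate directly into code. The following is a minimal sketch of a balanced Feistel network over 8-byte blocks; the SHA-256-based round function and the toy subkeys are illustrative assumptions, not part of any standardized cipher, and the result is not secure.

```python
# Toy balanced Feistel network: L_{i+1} = R_i, R_{i+1} = L_i XOR F(R_i, K_i).
import hashlib
from typing import List

def round_function(half: bytes, subkey: bytes) -> bytes:
    # F need not be invertible; any keyed function of the half-block works.
    return hashlib.sha256(subkey + half).digest()[: len(half)]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def feistel_encrypt(block: bytes, subkeys: List[bytes]) -> bytes:
    left, right = block[: len(block) // 2], block[len(block) // 2 :]
    for k in subkeys:
        left, right = right, xor(left, round_function(right, k))
    return right + left                       # ciphertext is (R_{n+1}, L_{n+1})

def feistel_decrypt(block: bytes, subkeys: List[bytes]) -> bytes:
    right, left = block[: len(block) // 2], block[len(block) // 2 :]
    for k in reversed(subkeys):               # same structure, subkeys in reverse order
        right, left = left, xor(right, round_function(left, k))
    return left + right

subkeys = [bytes([i]) * 4 for i in range(4)]  # four toy 4-byte subkeys
ciphertext = feistel_encrypt(b"8bytemsg", subkeys)
assert feistel_decrypt(ciphertext, subkeys) == b"8bytemsg"
```

Note that decryption reuses the same structure and round function, only reversing the subkey order, which is the property emphasised above.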
https://en.wikipedia.org/wiki/Feistel_scheme
In themathematicalfield ofrepresentation theory,group representationsdescribe abstractgroupsin terms ofbijectivelinear transformationsof avector spaceto itself (i.e. vector spaceautomorphisms); in particular, they can be used to represent group elements asinvertible matricesso that the group operation can be represented bymatrix multiplication. In chemistry, a group representation can relate mathematical group elements to symmetric rotations and reflections of molecules. Representations of groups allow manygroup-theoreticproblems to be reduced to problems inlinear algebra. Inphysics, they describe how thesymmetry groupof a physical system affects the solutions of equations describing that system. The termrepresentation of a groupis also used in a more general sense to mean any "description" of a group as a group of transformations of some mathematical object. More formally, a "representation" means ahomomorphismfrom the group to theautomorphism groupof an object. If the object is a vector space we have alinear representation. Some people userealizationfor the general notion and reserve the termrepresentationfor the special case of linear representations. The bulk of this article describes linear representation theory; see the last section for generalizations. The representation theory of groups divides into subtheories depending on the kind of group being represented. The various theories are quite different in detail, though some basic definitions and concepts are similar. The most important divisions are: Representation theory also depends heavily on the type ofvector spaceon which the group acts. One distinguishes between finite-dimensional representations and infinite-dimensional ones. In the infinite-dimensional case, additional structures are important (e.g. whether or not the space is aHilbert space,Banach space, etc.). One must also consider the type offieldover which the vector space is defined. The most important case is the field ofcomplex numbers. The other important cases are the field ofreal numbers,finite fields, and fields ofp-adic numbers. In general,algebraically closedfields are easier to handle than non-algebraically closed ones. Thecharacteristicof the field is also significant; many theorems for finite groups depend on the characteristic of the field not dividing theorder of the group. Arepresentationof agroupGon avector spaceVover afieldKis agroup homomorphismfromGto GL(V), thegeneral linear grouponV. That is, a representation is a map such that HereVis called therepresentation spaceand the dimension ofVis called thedimensionordegreeof the representation. It is common practice to refer toVitself as the representation when the homomorphism is clear from the context. In the case whereVis of finite dimensionnit is common to choose abasisforVand identify GL(V) withGL(n,K), the group ofn×n{\displaystyle n\times n}invertible matriceson the fieldK. Consider the complex numberu= e2πi / 3which has the propertyu3= 1. The setC3= {1,u,u2} forms acyclic groupunder multiplication. This group has a representation ρ onC2{\displaystyle \mathbb {C} ^{2}}given by: This representation is faithful because ρ is aone-to-one map. 
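The defining property of a representation, ρ(gh) = ρ(g)ρ(h), can be checked numerically. The sketch below uses one standard choice of faithful representation of C3 on C², sending u^k to the diagonal matrix diag(1, u^k); this particular choice of matrices is an assumption made for the example.

```python
import numpy as np

u = np.exp(2j * np.pi / 3)          # u^3 = 1, a generator of C3

# One standard faithful representation of C3 = {1, u, u^2} on C^2:
# the element u^k is sent to the diagonal matrix diag(1, u^k).
def rho(k: int) -> np.ndarray:
    return np.diag([1.0 + 0j, u ** k])

# Homomorphism property rho(g h) = rho(g) rho(h), with exponents added mod 3.
for a in range(3):
    for b in range(3):
        assert np.allclose(rho((a + b) % 3), rho(a) @ rho(b))

# Faithful: distinct group elements are sent to distinct matrices.
assert not np.allclose(rho(1), rho(2))
print("rho is a homomorphism and is faithful")
```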
Another representation forC3onC2{\displaystyle \mathbb {C} ^{2}}, isomorphic to the previous one, is σ given by: The groupC3may also be faithfully represented onR2{\displaystyle \mathbb {R} ^{2}}by τ given by: where A possible representation onR3{\displaystyle \mathbb {R} ^{3}}is given by the set of cyclic permutation matricesv: Another example: LetV{\displaystyle V}be the space of homogeneous degree-3 polynomials over the complex numbers in variablesx1,x2,x3.{\displaystyle x_{1},x_{2},x_{3}.} ThenS3{\displaystyle S_{3}}acts onV{\displaystyle V}by permutation of the three variables. For instance,(12){\displaystyle (12)}sendsx13{\displaystyle x_{1}^{3}}tox23{\displaystyle x_{2}^{3}}. A subspaceWofVthat is invariant under thegroup actionis called asubrepresentation. IfVhas exactly two subrepresentations, namely the zero-dimensional subspace andVitself, then the representation is said to beirreducible; if it has a proper subrepresentation of nonzero dimension, the representation is said to bereducible. The representation of dimension zero is considered to be neither reducible nor irreducible,[1]just as the number 1 is considered to be neithercompositenorprime. Under the assumption that thecharacteristicof the fieldKdoes not divide the size of the group, representations offinite groupscan be decomposed into adirect sumof irreducible subrepresentations (seeMaschke's theorem). This holds in particular for any representation of a finite group over thecomplex numbers, since the characteristic of the complex numbers is zero, which never divides the size of a group. In the example above, the first two representations given (ρ and σ) are both decomposable into two 1-dimensional subrepresentations (given by span{(1,0)} and span{(0,1)}), while the third representation (τ) is irreducible. Aset-theoretic representation(also known as a group action orpermutation representation) of agroupGon asetXis given by afunctionρ :G→XX, the set of functions fromXtoX, such that for allg1,g2inGand allxinX: where1{\displaystyle 1}is the identity element ofG. This condition and the axioms for a group imply that ρ(g) is abijection(orpermutation) for allginG. Thus we may equivalently define a permutation representation to be agroup homomorphismfrom G to thesymmetric groupSXofX. For more information on this topic see the article ongroup action. Every groupGcan be viewed as acategorywith a single object;morphismsin this category are just the elements ofG. Given an arbitrary categoryC, arepresentationofGinCis afunctorfromGtoC. Such a functor selects an objectXinCand a group homomorphism fromGto Aut(X), theautomorphism groupofX. In the case whereCisVectK, thecategory of vector spacesover a fieldK, this definition is equivalent to a linear representation. Likewise, a set-theoretic representation is just a representation ofGin thecategory of sets. WhenCisAb, thecategory of abelian groups, the objects obtained are calledG-modules. For another example consider thecategory of topological spaces,Top. Representations inTopare homomorphisms fromGto thehomeomorphismgroup of a topological spaceX. Two types of representations closely related to linear representations are:
https://en.wikipedia.org/wiki/Representation_theory_(group_theory)
Incryptography, apre-shared key(PSK) is ashared secretwhich was previously shared between the two parties using somesecure channelbefore it needs to be used.[1] To build a key from shared secret, thekey derivation functionis typically used. Such systems almost always usesymmetric keycryptographic algorithms. The term PSK is used inWi-Fiencryption such asWired Equivalent Privacy(WEP),Wi-Fi Protected Access(WPA), where the method is called WPA-PSK or WPA2-PSK, and also in theExtensible Authentication Protocol(EAP), where it is known asEAP-PSK. In all these cases, both thewireless access points(AP) and all clientssharethe same key.[2] The characteristics of this secret or key are determined by the system which uses it; some system designs require that such keys be in a particular format. It can be apassword, apassphrase, or ahexadecimalstring. The secret is used by all systems involved in the cryptographic processes used to secure the traffic between the systems. Crypto systemsrely on one or more keys for confidentiality. One particular attack is always possible against keys, thebrute force key space search attack. A sufficiently long, randomly chosen, key canresistany practical brute force attack, though not in principle if an attacker has sufficient computational power (seepassword strengthandpassword crackingfor more discussion). Unavoidably, however, pre-shared keys are held by both parties to the communication, and so can be compromised at one end, without the knowledge of anyone at the other. There are several tools available to help one choose strong passwords, though doing so over anynetworkconnection is inherently unsafe as one cannot in general know who, if anyone, may be eavesdropping on the interaction. Choosing keys used by cryptographic algorithms is somewhat different in that any pattern whatsoever should be avoided, as any such pattern may provide an attacker with a lower effort attack than brute force search. This impliesrandomkey choice to force attackers to spend as much effort as possible; this is very difficult in principle and in practice as well. As a general rule, any software except acryptographically secure pseudorandom number generator(CSPRNG) should be avoided.
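As an illustration of building a working key from a pre-shared secret with a key derivation function, the sketch below uses PBKDF2-HMAC from Python's standard library. The passphrase, salt, iteration count and key length are placeholder values for the example; a real deployment should use the parameters mandated by the protocol in question (WPA2-PSK, for instance, fixes its own PBKDF2 parameters and uses the network SSID in the role of the salt).

```python
import hashlib
import hmac
import os

# Pre-shared secret, agreed over a secure channel beforehand (placeholder value).
passphrase = b"correct horse battery staple"

# A salt keeps identical passphrases from yielding identical keys.
salt = os.urandom(16)

# Derive a 256-bit symmetric key with PBKDF2-HMAC-SHA256
# (arguments: hash name, password, salt, iteration count, derived key length).
key = hashlib.pbkdf2_hmac("sha256", passphrase, salt, 600_000, 32)
print("derived key:", key.hex())

# Both ends derive the same key from the same secret and salt, so either side
# can prove knowledge of it, e.g. by answering a random challenge with an HMAC.
challenge = os.urandom(16)
response = hmac.new(key, challenge, hashlib.sha256).digest()
expected = hmac.new(key, challenge, hashlib.sha256).digest()   # verifier's own computation
assert hmac.compare_digest(response, expected)
```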
https://en.wikipedia.org/wiki/Pre-shared_key
Inmathematics, asetBof elements of avector spaceVis called abasis(pl.:bases) if every element ofVcan be written in a unique way as a finitelinear combinationof elements ofB. The coefficients of this linear combination are referred to ascomponentsorcoordinatesof the vector with respect toB. The elements of a basis are calledbasis vectors. Equivalently, a setBis a basis if its elements arelinearly independentand every element ofVis alinear combinationof elements ofB.[1]In other words, a basis is a linearly independentspanning set. A vector space can have several bases; however all the bases have the same number of elements, called thedimensionof the vector space. This article deals mainly with finite-dimensional vector spaces. However, many of the principles are also valid for infinite-dimensional vector spaces. Basis vectors find applications in the study ofcrystal structuresandframes of reference. AbasisBof avector spaceVover afieldF(such as thereal numbersRor thecomplex numbersC) is a linearly independentsubsetofVthatspansV. This means that a subsetBofVis a basis if it satisfies the two following conditions: Thescalarsai{\displaystyle a_{i}}are called the coordinates of the vectorvwith respect to the basisB, and by the first property they are uniquely determined. A vector space that has afinitebasis is calledfinite-dimensional. In this case, the finite subset can be taken asBitself to check for linear independence in the above definition. It is often convenient or even necessary to have anorderingon the basis vectors, for example, when discussingorientation, or when one considers the scalar coefficients of a vector with respect to a basis without referring explicitly to the basis elements. In this case, the ordering is necessary for associating each coefficient to the corresponding basis element. This ordering can be done by numbering the basis elements. In order to emphasize that an order has been chosen, one speaks of anordered basis, which is therefore not simply an unstructuredset, but asequence, anindexed family, or similar; see§ Ordered bases and coordinatesbelow. The setR2of theordered pairsofreal numbersis a vector space under the operations of component-wise addition(a,b)+(c,d)=(a+c,b+d){\displaystyle (a,b)+(c,d)=(a+c,b+d)}and scalar multiplicationλ(a,b)=(λa,λb),{\displaystyle \lambda (a,b)=(\lambda a,\lambda b),}whereλ{\displaystyle \lambda }is any real number. A simple basis of this vector space consists of the two vectorse1= (1, 0)ande2= (0, 1). These vectors form a basis (called thestandard basis) because any vectorv= (a,b)ofR2may be uniquely written asv=ae1+be2.{\displaystyle \mathbf {v} =a\mathbf {e} _{1}+b\mathbf {e} _{2}.}Any other pair of linearly independent vectors ofR2, such as(1, 1)and(−1, 2), forms also a basis ofR2. More generally, ifFis afield, the setFn{\displaystyle F^{n}}ofn-tuplesof elements ofFis a vector space for similarly defined addition and scalar multiplication. Letei=(0,…,0,1,0,…,0){\displaystyle \mathbf {e} _{i}=(0,\ldots ,0,1,0,\ldots ,0)}be then-tuple with all components equal to 0, except theith, which is 1. Thene1,…,en{\displaystyle \mathbf {e} _{1},\ldots ,\mathbf {e} _{n}}is a basis ofFn,{\displaystyle F^{n},}which is called thestandard basisofFn.{\displaystyle F^{n}.} A different flavor of example is given bypolynomial rings. IfFis a field, the collectionF[X]of allpolynomialsin oneindeterminateXwith coefficients inFis anF-vector space. 
One basis for this space is themonomial basisB, consisting of allmonomials:B={1,X,X2,…}.{\displaystyle B=\{1,X,X^{2},\ldots \}.}Any set of polynomials such that there is exactly one polynomial of each degree (such as theBernstein basis polynomialsorChebyshev polynomials) is also a basis. (Such a set of polynomials is called apolynomial sequence.) But there are also many bases forF[X]that are not of this form. Many properties of finite bases result from theSteinitz exchange lemma, which states that, for any vector spaceV, given a finitespanning setSand alinearly independentsetLofnelements ofV, one may replacenwell-chosen elements ofSby the elements ofLto get a spanning set containingL, having its other elements inS, and having the same number of elements asS. Most properties resulting from the Steinitz exchange lemma remain true when there is no finite spanning set, but their proofs in the infinite case generally require theaxiom of choiceor a weaker form of it, such as theultrafilter lemma. IfVis a vector space over a fieldF, then: IfVis a vector space of dimensionn, then: LetVbe a vector space of finite dimensionnover a fieldF, andB={b1,…,bn}{\displaystyle B=\{\mathbf {b} _{1},\ldots ,\mathbf {b} _{n}\}}be a basis ofV. By definition of a basis, everyvinVmay be written, in a unique way, asv=λ1b1+⋯+λnbn,{\displaystyle \mathbf {v} =\lambda _{1}\mathbf {b} _{1}+\cdots +\lambda _{n}\mathbf {b} _{n},}where the coefficientsλ1,…,λn{\displaystyle \lambda _{1},\ldots ,\lambda _{n}}are scalars (that is, elements ofF), which are called thecoordinatesofvoverB. However, if one talks of thesetof the coefficients, one loses the correspondence between coefficients and basis elements, and several vectors may have the samesetof coefficients. For example,3b1+2b2{\displaystyle 3\mathbf {b} _{1}+2\mathbf {b} _{2}}and2b1+3b2{\displaystyle 2\mathbf {b} _{1}+3\mathbf {b} _{2}}have the same set of coefficients{2, 3}, and are different. It is therefore often convenient to work with anordered basis; this is typically done byindexingthe basis elements by the first natural numbers. Then, the coordinates of a vector form asequencesimilarly indexed, and a vector is completely characterized by the sequence of coordinates. An ordered basis, especially when used in conjunction with anorigin, is also called acoordinate frameor simply aframe(for example, aCartesian frameor anaffine frame). Let, as usual,Fn{\displaystyle F^{n}}be the set of then-tuplesof elements ofF. This set is anF-vector space, with addition and scalar multiplication defined component-wise. The mapφ:(λ1,…,λn)↦λ1b1+⋯+λnbn{\displaystyle \varphi :(\lambda _{1},\ldots ,\lambda _{n})\mapsto \lambda _{1}\mathbf {b} _{1}+\cdots +\lambda _{n}\mathbf {b} _{n}}is alinear isomorphismfrom the vector spaceFn{\displaystyle F^{n}}ontoV. In other words,Fn{\displaystyle F^{n}}is thecoordinate spaceofV, and then-tupleφ−1(v){\displaystyle \varphi ^{-1}(\mathbf {v} )}is thecoordinate vectorofv. Theinverse imagebyφ{\displaystyle \varphi }ofbi{\displaystyle \mathbf {b} _{i}}is then-tupleei{\displaystyle \mathbf {e} _{i}}all of whose components are 0, except theith that is 1. Theei{\displaystyle \mathbf {e} _{i}}form an ordered basis ofFn{\displaystyle F^{n}}, which is called itsstandard basisorcanonical basis. The ordered basisBis the image byφ{\displaystyle \varphi }of the canonical basis ofFn{\displaystyle F^{n}}. 
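To make the coordinate map φ concrete, the sketch below computes the coordinates of a vector of R² over the ordered basis ((1, 1), (−1, 2)) mentioned earlier, by solving the linear system whose coefficient matrix has the basis vectors as columns. The particular vector chosen is arbitrary.

```python
import numpy as np

# Ordered basis B = (b1, b2) of R^2, written as the columns of a matrix.
B = np.array([[1.0, -1.0],
              [1.0,  2.0]])   # b1 = (1, 1), b2 = (-1, 2)

v = np.array([3.0, 9.0])

# Coordinates (lambda1, lambda2) satisfy lambda1*b1 + lambda2*b2 = v,
# i.e. B @ lam = v, so they are obtained by solving the linear system.
lam = np.linalg.solve(B, v)
print(lam)                     # coordinates [5, 2]: v = 5*b1 + 2*b2

# phi maps coordinates back to the vector; the round trip recovers v.
assert np.allclose(B @ lam, v)
```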
It follows from what precedes that every ordered basis is the image by a linear isomorphism of the canonical basis ofFn{\displaystyle F^{n}},and that every linear isomorphism fromFn{\displaystyle F^{n}}ontoVmay be defined as the isomorphism that maps the canonical basis ofFn{\displaystyle F^{n}}onto a given ordered basis ofV. In other words, it is equivalent to define an ordered basis ofV, or a linear isomorphism fromFn{\displaystyle F^{n}}ontoV. LetVbe a vector space of dimensionnover a fieldF. Given two (ordered) basesBold=(v1,…,vn){\displaystyle B_{\text{old}}=(\mathbf {v} _{1},\ldots ,\mathbf {v} _{n})}andBnew=(w1,…,wn){\displaystyle B_{\text{new}}=(\mathbf {w} _{1},\ldots ,\mathbf {w} _{n})}ofV, it is often useful to express the coordinates of a vectorxwith respect toBold{\displaystyle B_{\mathrm {old} }}in terms of the coordinates with respect toBnew.{\displaystyle B_{\mathrm {new} }.}This can be done by thechange-of-basis formula, that is described below. The subscripts "old" and "new" have been chosen because it is customary to refer toBold{\displaystyle B_{\mathrm {old} }}andBnew{\displaystyle B_{\mathrm {new} }}as theold basisand thenew basis, respectively. It is useful to describe the old coordinates in terms of the new ones, because, in general, one hasexpressionsinvolving the old coordinates, and if one wants to obtain equivalent expressions in terms of the new coordinates; this is obtained by replacing the old coordinates by their expressions in terms of the new coordinates. Typically, the new basis vectors are given by their coordinates over the old basis, that is,wj=∑i=1nai,jvi.{\displaystyle \mathbf {w} _{j}=\sum _{i=1}^{n}a_{i,j}\mathbf {v} _{i}.}If(x1,…,xn){\displaystyle (x_{1},\ldots ,x_{n})}and(y1,…,yn){\displaystyle (y_{1},\ldots ,y_{n})}are the coordinates of a vectorxover the old and the new basis respectively, the change-of-basis formula isxi=∑j=1nai,jyj,{\displaystyle x_{i}=\sum _{j=1}^{n}a_{i,j}y_{j},}fori= 1, ...,n. This formula may be concisely written inmatrixnotation. LetAbe the matrix of theai,j{\displaystyle a_{i,j}},andX=[x1⋮xn]andY=[y1⋮yn]{\displaystyle X={\begin{bmatrix}x_{1}\\\vdots \\x_{n}\end{bmatrix}}\quad {\text{and}}\quad Y={\begin{bmatrix}y_{1}\\\vdots \\y_{n}\end{bmatrix}}}be thecolumn vectorsof the coordinates ofvin the old and the new basis respectively, then the formula for changing coordinates isX=AY.{\displaystyle X=AY.} The formula can be proven by considering the decomposition of the vectorxon the two bases: one hasx=∑i=1nxivi,{\displaystyle \mathbf {x} =\sum _{i=1}^{n}x_{i}\mathbf {v} _{i},}andx=∑j=1nyjwj=∑j=1nyj∑i=1nai,jvi=∑i=1n(∑j=1nai,jyj)vi.{\displaystyle \mathbf {x} =\sum _{j=1}^{n}y_{j}\mathbf {w} _{j}=\sum _{j=1}^{n}y_{j}\sum _{i=1}^{n}a_{i,j}\mathbf {v} _{i}=\sum _{i=1}^{n}{\biggl (}\sum _{j=1}^{n}a_{i,j}y_{j}{\biggr )}\mathbf {v} _{i}.} The change-of-basis formula results then from the uniqueness of the decomposition of a vector over a basis, hereBold{\displaystyle B_{\text{old}}};that isxi=∑j=1nai,jyj,{\displaystyle x_{i}=\sum _{j=1}^{n}a_{i,j}y_{j},}fori= 1, ...,n. If one replaces the field occurring in the definition of a vector space by aring, one gets the definition of amodule. For modules,linear independenceandspanning setsare defined exactly as for vector spaces, although "generating set" is more commonly used than that of "spanning set". Like for vector spaces, abasisof a module is a linearly independent subset that is also a generating set. 
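Returning to the change-of-basis formula X = AY derived above, the following short check uses the standard basis of R² as the old basis and the same illustrative vectors (1, 1) and (−1, 2) as the new basis; the numbers are only an example.

```python
import numpy as np

# New basis vectors, expressed in the old (standard) basis, form the
# columns of A, i.e. the coefficients a_{i,j} of the text.
A = np.array([[1.0, -1.0],
              [1.0,  2.0]])          # w1 = (1, 1), w2 = (-1, 2) in old coordinates

Y = np.array([5.0, 2.0])             # coordinates of x over the new basis
X = A @ Y                            # change-of-basis formula X = A Y
print(X)                             # [3. 9.] -> old coordinates of the same x

# Going the other way requires solving a linear system (or inverting A).
assert np.allclose(np.linalg.solve(A, X), Y)
```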
A major difference with the theory of vector spaces is that not every module has a basis. A module that has a basis is called afree module. Free modules play a fundamental role in module theory, as they may be used for describing the structure of non-free modules throughfree resolutions. A module over the integers is exactly the same thing as anabelian group. Thus a free module over the integers is also a free abelian group. Free abelian groups have specific properties that are not shared by modules over other rings. Specifically, every subgroup of a free abelian group is a free abelian group, and, ifGis a subgroup of a finitely generated free abelian groupH(that is an abelian group that has a finite basis), then there is a basise1,…,en{\displaystyle \mathbf {e} _{1},\ldots ,\mathbf {e} _{n}}ofHand an integer0 ≤k≤nsuch thata1e1,…,akek{\displaystyle a_{1}\mathbf {e} _{1},\ldots ,a_{k}\mathbf {e} _{k}}is a basis ofG, for some nonzero integersa1,…,ak{\displaystyle a_{1},\ldots ,a_{k}}.For details, seeFree abelian group § Subgroups. In the context of infinite-dimensional vector spaces over the real or complex numbers, the termHamel basis(named afterGeorg Hamel[2]) oralgebraic basiscan be used to refer to a basis as defined in this article. This is to make a distinction with other notions of "basis" that exist when infinite-dimensional vector spaces are endowed with extra structure. The most important alternatives areorthogonal basesonHilbert spaces,Schauder bases, andMarkushevich basesonnormed linear spaces. In the case of the real numbersRviewed as a vector space over the fieldQof rational numbers, Hamel bases are uncountable, and have specifically thecardinalityof the continuum, which is thecardinal number2ℵ0{\displaystyle 2^{\aleph _{0}}},whereℵ0{\displaystyle \aleph _{0}}(aleph-nought) is the smallest infinite cardinal, the cardinal of the integers. The common feature of the other notions is that they permit the taking of infinite linear combinations of the basis vectors in order to generate the space. This, of course, requires that infinite sums are meaningfully defined on these spaces, as is the case fortopological vector spaces– a large class of vector spaces including e.g.Hilbert spaces,Banach spaces, orFréchet spaces. The preference of other types of bases for infinite-dimensional spaces is justified by the fact that the Hamel basis becomes "too big" in Banach spaces: IfXis an infinite-dimensional normed vector space that iscomplete(i.e.Xis aBanach space), then any Hamel basis ofXis necessarilyuncountable. This is a consequence of theBaire category theorem. The completeness as well as infinite dimension are crucial assumptions in the previous claim. Indeed, finite-dimensional spaces have by definition finite bases and there are infinite-dimensional (non-complete) normed spaces that have countable Hamel bases. Considerc00{\displaystyle c_{00}},the space of thesequencesx=(xn){\displaystyle x=(x_{n})}of real numbers that have only finitely many non-zero elements, with the norm‖x‖=supn|xn|{\textstyle \|x\|=\sup _{n}|x_{n}|}.Itsstandard basis, consisting of the sequences having only one non-zero element, which is equal to 1, is a countable Hamel basis. In the study ofFourier series, one learns that the functions{1} ∪ { sin(nx), cos(nx) :n= 1, 2, 3, ... 
}are an "orthogonal basis" of the (real or complex) vector space of all (real or complex valued) functions on the interval [0, 2π] that are square-integrable on this interval, i.e., functionsfsatisfying∫02π|f(x)|2dx<∞.{\displaystyle \int _{0}^{2\pi }\left|f(x)\right|^{2}\,dx<\infty .} The functions{1} ∪ { sin(nx), cos(nx) :n= 1, 2, 3, ... }are linearly independent, and every functionfthat is square-integrable on [0, 2π] is an "infinite linear combination" of them, in the sense thatlimn→∞∫02π|a0+∑k=1n(akcos⁡(kx)+bksin⁡(kx))−f(x)|2dx=0{\displaystyle \lim _{n\to \infty }\int _{0}^{2\pi }{\biggl |}a_{0}+\sum _{k=1}^{n}\left(a_{k}\cos \left(kx\right)+b_{k}\sin \left(kx\right)\right)-f(x){\biggr |}^{2}dx=0} for suitable (real or complex) coefficientsak,bk. But many[3]square-integrable functions cannot be represented asfinitelinear combinations of these basis functions, which thereforedo notcomprise a Hamel basis. Every Hamel basis of this space is much bigger than this merely countably infinite set of functions. Hamel bases of spaces of this kind are typically not useful, whereasorthonormal basesof these spaces are essential inFourier analysis. The geometric notions of anaffine space,projective space,convex set, andconehave related notions ofbasis.[4]Anaffine basisfor ann-dimensional affine space isn+1{\displaystyle n+1}points ingeneral linear position. Aprojective basisisn+2{\displaystyle n+2}points in general position, in a projective space of dimensionn. Aconvex basisof apolytopeis the set of the vertices of itsconvex hull. Acone basis[5]consists of one point by edge of a polygonal cone. See also aHilbert basis (linear programming). For aprobability distributioninRnwith aprobability density function, such as the equidistribution in ann-dimensional ball with respect to Lebesgue measure, it can be shown thatnrandomly and independently chosen vectors will form a basiswith probability one, which is due to the fact thatnlinearly dependent vectorsx1, ...,xninRnshould satisfy the equationdet[x1⋯xn] = 0(zero determinant of the matrix with columnsxi), and the set of zeros of a non-trivial polynomial has zero measure. This observation has led to techniques for approximating random bases.[6][7] It is difficult to check numerically the linear dependence or exact orthogonality. Therefore, the notion of ε-orthogonality is used. Forspaces with inner product,xis ε-orthogonal toyif|⟨x,y⟩|/(‖x‖‖y‖)<ε{\displaystyle \left|\left\langle x,y\right\rangle \right|/\left(\left\|x\right\|\left\|y\right\|\right)<\varepsilon }(that is, cosine of the angle betweenxandyis less thanε). In high dimensions, two independent random vectors are with high probability almost orthogonal, and the number of independent random vectors, which all are with given high probability pairwise almost orthogonal, grows exponentially with dimension. More precisely, consider equidistribution inn-dimensional ball. ChooseNindependent random vectors from a ball (they areindependent and identically distributed). Letθbe a small positive number. Then for Nrandom vectors are all pairwise ε-orthogonal with probability1 −θ.[7]ThisNgrowth exponentially with dimensionnandN≫n{\displaystyle N\gg n}for sufficiently bign. This property of random bases is a manifestation of the so-calledmeasure concentration phenomenon.[8] The figure (right) illustrates distribution of lengths N of pairwise almost orthogonal chains of vectors that are independently randomly sampled from then-dimensional cube[−1, 1]nas a function of dimension,n. 
A point is first randomly selected in the cube. The second point is randomly chosen in the same cube. If the angle between the vectors was withinπ/2 ± 0.037π/2then the vector was retained. At the next step a new vector is generated in the same hypercube, and its angles with the previously generated vectors are evaluated. If these angles are withinπ/2 ± 0.037π/2then the vector is retained. The process is repeated until the chain of almost orthogonality breaks, and the number of such pairwise almost orthogonal vectors (length of the chain) is recorded. For eachn, 20 pairwise almost orthogonal chains were constructed numerically for each dimension. Distribution of the length of these chains is presented. LetVbe any vector space over some fieldF. LetXbe the set of all linearly independent subsets ofV. The setXis nonempty since the empty set is an independent subset ofV, and it ispartially orderedby inclusion, which is denoted, as usual, by⊆. LetYbe a subset ofXthat is totally ordered by⊆, and letLYbe the union of all the elements ofY(which are themselves certain subsets ofV). Since(Y, ⊆)is totally ordered, every finite subset ofLYis a subset of an element ofY, which is a linearly independent subset ofV, and henceLYis linearly independent. ThusLYis an element ofX. Therefore,LYis an upper bound forYin(X, ⊆): it is an element ofX, that contains every element ofY. AsXis nonempty, and every totally ordered subset of(X, ⊆)has an upper bound inX,Zorn's lemmaasserts thatXhas a maximal element. In other words, there exists some elementLmaxofXsatisfying the condition that wheneverLmax⊆ Lfor some elementLofX, thenL = Lmax. It remains to prove thatLmaxis a basis ofV. SinceLmaxbelongs toX, we already know thatLmaxis a linearly independent subset ofV. If there were some vectorwofVthat is not in the span ofLmax, thenwwould not be an element ofLmaxeither. LetLw= Lmax∪ {w}. This set is an element ofX, that is, it is a linearly independent subset ofV(becausewis not in the span ofLmax, andLmaxis independent). AsLmax⊆ Lw, andLmax≠ Lw(becauseLwcontains the vectorwthat is not contained inLmax), this contradicts the maximality ofLmax. Thus this shows thatLmaxspansV. HenceLmaxis linearly independent and spansV. It is thus a basis ofV, and this proves that every vector space has a basis. This proof relies on Zorn's lemma, which is equivalent to theaxiom of choice. Conversely, it has been proved that if every vector space has a basis, then the axiom of choice is true.[9]Thus the two assertions are equivalent.
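The chain-construction experiment described above (random vectors drawn from the cube [−1, 1]^n, each kept only while it remains almost orthogonal to all previously kept vectors) can be sketched as follows. The angular tolerance matches the π/2 ± 0.037·π/2 window quoted above, but the dimensions, random seed and number of repetitions are illustrative choices, so the resulting numbers will not reproduce the published figure exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

def chain_length(n: int, tol: float = 0.037 * np.pi / 2) -> int:
    """Length of a chain of pairwise almost orthogonal vectors drawn from [-1, 1]^n.

    Each new random vector is kept only while its angle with every previously
    kept vector stays within pi/2 +/- tol; the chain ends at the first failure.
    """
    cos_bound = np.sin(tol)   # |cos(angle)| < sin(tol)  <=>  angle within pi/2 +/- tol
    kept = []
    while True:
        x = rng.uniform(-1.0, 1.0, size=n)
        ok = all(abs(x @ y) / (np.linalg.norm(x) * np.linalg.norm(y)) < cos_bound
                 for y in kept)
        if not ok:
            return len(kept)
        kept.append(x)

for n in (2, 4, 8, 16, 32, 64):
    lengths = [chain_length(n) for _ in range(20)]     # 20 chains per dimension
    print(f"n = {n:3d}: mean chain length = {np.mean(lengths):.1f}")
```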
https://en.wikipedia.org/wiki/Basis_vector
A mode of transport is a method or way of travelling, or of transporting people or cargo.[1] The different modes of transport include air, water, and land transport, which includes rails or railways, road and off-road transport. Other modes of transport also exist, including pipelines, cable transport, and space transport. Human-powered transport and animal-powered transport are sometimes regarded as distinct modes, but they may lie in other categories such as land or water transport. In general, transportation refers to the moving of people, animals, and other goods from one place to another, and means of transport refers to the transport facilities used to carry people or cargo according to the chosen mode. Examples of means of transport include the automobile, airplane, ship, truck, and train. Each mode of transport has a fundamentally different set of technological solutions. Each mode has its own infrastructure, vehicles, transport operators and operations.

Animal-powered transport is the use of working animals for the transport of people and/or goods. Humans may use some of the animals directly, use them as pack animals for carrying goods, or harness them, alone or in teams, to pull watercraft, sleds, or wheeled vehicles.

A fixed-wing aircraft, typically an airplane, is a heavier-than-air flying vehicle in which the special geometry of the wings generates lift that carries the whole vehicle. Fixed-wing aircraft range from small trainers and recreational aircraft to large airliners and military cargo aircraft. For short distances or in places without runways, helicopters can be used.[2] (Other types of aircraft, like autogyros and airships, are not a significant portion of air transport.) Air transport is one of the fastest methods of transport. Commercial jets reach speeds of up to 955 kilometres per hour (593 mph), and a considerably higher ground speed if there is a jet stream tailwind, while piston-powered general aviation aircraft may reach up to 555 kilometres per hour (345 mph) or more. This speed comes with higher cost and energy use,[3] and aviation's impact on the environment, and particularly on the global climate, requires consideration when comparing modes of transportation.[4] The Intergovernmental Panel on Climate Change (IPCC) estimates that a commercial jet's flight has some 2–4 times the effect on the climate that the same CO2 emissions would have if made at ground level, because of different atmospheric chemistry and radiative forcing effects at the higher altitude.[5] U.S. airlines alone burned about 16.2 billion gallons of fuel during the twelve months between October 2013 and September 2014.[6] The WHO estimates that globally as many as 500,000 people are on planes at any one time.[3] The global trend has been for increasing numbers of people to travel by air, and to do so with increasing frequency and over longer distances, a dilemma that has the attention of climate scientists and other researchers,[7][8][9] along with the press.[10][11] The issue of impacts from frequent travel, particularly by air because of the long distances that are easily covered in one or a few days, is called hypermobility and has been a topic of research and governmental concern for many years.

Human-powered transport, a form of sustainable transportation, is the transport of people and/or goods using human muscle power, in the form of walking, running and swimming. Modern technology has allowed machines to enhance human power.
Human-powered transport remains popular for reasons of cost-saving, leisure, physical exercise, and environmentalism; it is sometimes the only type available, especially in underdeveloped or inaccessible regions. Although humans are able to walk without infrastructure, the transport can be enhanced through the use of roads, especially when human power is combined with vehicles such as bicycles and inline skates. Human-powered vehicles have also been developed for difficult environments, such as snow and water, by watercraft rowing and skiing; even the air can be entered with human-powered aircraft.

Land transport covers all land-based transportation systems that provide for the movement of people, goods and services. Land transport plays a vital role in linking communities to each other and is a key factor in urban planning. It consists of two kinds: rail and road.

Rail transport is a means of conveyance of passengers and goods by way of wheeled vehicles running on rail track, known as a railway or railroad. The rails are anchored perpendicular to ties (or sleepers) to maintain a consistent distance apart, or gauge. A railroad train consists of one or more connected vehicles that run on the rails. Propulsion is commonly provided by a locomotive that hauls a series of unpowered cars, which can carry passengers or freight. The locomotive can be powered by steam, diesel or by electricity supplied by trackside systems. Alternatively, some or all of the cars can be powered, in what is known as a multiple unit. Also, a train can be powered by horses, cables, gravity, pneumatics and gas turbines. Railed vehicles move with much less friction than rubber tires on paved roads, making trains more energy efficient, though not as efficient as ships. Intercity trains are long-haul services connecting cities;[12] modern high-speed rail is capable of speeds up to 430 km/h (270 mph), but this requires a specially built track. Regional and commuter trains feed cities from suburbs and surrounding areas, while intra-urban transport is performed by high-capacity tramways and rapid transit, often making up the backbone of a city's public transport. Freight trains traditionally used box cars, requiring manual loading and unloading of the cargo. Since the 1960s, container trains have become the dominant solution for general freight, while large quantities of bulk cargo are transported by dedicated trains.

A road is an identifiable route of travel, usually surfaced with gravel, asphalt or concrete, and supporting land passage by foot or by a number of vehicles. The most common road vehicle in the developed world is the automobile, a wheeled passenger vehicle that carries its own motor. As of 2002, there were 591 million automobiles worldwide.[citation needed] Other users of roads include motorcycles, buses, trucks, bicycles and pedestrians, and special provisions are sometimes made for each of these. For example, bus lanes give priority to public transport, and cycle lanes provide special areas of road for bicycles to use. Automobiles offer high flexibility, but have high energy and area use and are the main source of noise and air pollution in cities; buses allow for more efficient travel at the cost of reduced flexibility.[13] Road transport by truck is often the initial and final stage of freight transport.

Water transport is the process of transport that a watercraft, such as a barge, ship or sailboat, makes over a body of water, such as a sea, ocean, lake, canal, or river. If a boat or other vessel can successfully pass through a waterway it is known as a navigable waterway.
The need for buoyancy unites watercraft, and makes thehulla dominant aspect of its construction, maintenance and appearance. When a boat is floating on the water the hull of the boat is pushing aside water where the hull now is, this is known as displacement. In the 1800s, the firststeamboatswere developed, using asteam engineto drive apaddle wheelor propeller to move the ship. The steam was produced using wood or coal. Now, most ships have an engine using a slightly refined type of petroleum calledbunker fuel. Some ships, such assubmarines, use nuclear power to produce the steam. Recreational or educational craft still use wind power, while some smaller craft useinternal combustion enginesto drive one or more propellers, or in the case of jet boats, an inboard water jet. In shallow draft areas,hovercraftare propelled by large pusher-prop fans. Although slow, modern sea transport is a highly effective method of transporting large quantities of non-perishable goods. Commercial vessels, nearly 35,000 in number, carried 7.4 billion tons of cargo in 2007.[14]Transport by water is significantly less costly than air transport for transcontinentalshipping;[15]short sea shippingand ferries remain viable in coastal areas.[16][17] Micromobilityis the collective name for small electric powered vehicles. Pipeline transportsends goods through a pipe, most commonly liquid and gases are sent, butpneumatic tubescan also send solid capsules using compressed air. For example, liquids/gases, any chemically stable liquid or gas can be sent through a pipeline. Short-distance systems exist for sewage, slurry water and beer, while long-distance networks are used for petroleum and natural gas. Cable transportis a broad mode where vehicles are pulled by cables instead of an internal power source. It is most commonly used at steep gradient. Typical solutions includeaerial tramway,elevators,escalatorandski lifts; some of these are also categorized asconveyortransport. Space transportis transport out of Earth's atmosphere into outer space by means of aspacecraft. While large amounts of research have gone into technology, it is rarely used except to put satellites into orbit, and conduct scientific experiments. However, people have landed on the moon, and probes have been sent to all the planets of the Solar System. Unmanned aerial vehicletransport (drone transport) is being used for medicine transportation in least developed countries with inadequate infrastructure by an American-based start-up Zipline.[18]Amazon.comand other transportation companies are currently testing the use of unmanned aerial vehicles in parcel delivery. This method will allow short-range small-parcel delivery in a short time frame. A transport mode is a combination of the following: Worldwide, the most widely used modes for passenger transport are the Automobile (16,000 bn passenger km), followed by Buses (7,000), Air (2,800), Railways (1,900), and Urban Rail (250).[19] The most widely used modes for freight transport are Sea (40,000 bn ton km), followed by Road (7,000), Railways (6,500), Oil pipelines (2,000) and Inland Navigation (1,500).[19]
https://en.wikipedia.org/wiki/Mode_of_transport
Soft independent modelling by class analogy(SIMCA) is astatisticalmethod forsupervised classificationof data. The method requires atraining data setconsisting of samples (or objects) with a set of attributes and their class membership. The term soft refers to the fact the classifier can identify samples as belonging to multiple classes and not necessarily producing a classification of samples into non-overlapping classes. In order to build the classification models, the samples belonging to each class need to be analysed usingprincipal component analysis(PCA); only the significant components are retained. For a given class, the resulting model then describes either a line (for one Principal Component or PC), plane (for two PCs) orhyper-plane(for more than two PCs). For each modelled class, the meanorthogonal distanceof training data samples from the line, plane, or hyper-plane (calculated as the residual standard deviation) is used to determine a critical distance for classification. This critical distance is based on theF-distributionand is usually calculated using 95% or 99% confidence intervals. New observations are projected into each PC model and the residual distances calculated. An observation is assigned to the model class when its residual distance from the model is below the statistical limit for the class. The observation may be found to belong to multiple classes and a measure ofgoodness of the modelcan be found from the number of cases where the observations are classified into multiple classes. The classification efficiency is usually indicated byReceiver operating characteristics. In the original SIMCA method, the ends of the hyper-plane of each class are closed off by setting statistical control limits along the retained principal components axes (i.e., score value between plus and minus 0.5 times score standard deviation). More recent adaptations of the SIMCA method close off the hyper-plane by construction of ellipsoids (e.g.Hotelling's T2orMahalanobis distance). With such modified SIMCA methods, classification of an object requires both that its orthogonal distance from the model and its projection within the model (i.e. score value within the region defined by the ellipsoid) are not significant. SIMCA as a method of classification has gained widespread use especially in applied statistical fields such aschemometricsand spectroscopic data analysis.
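A compact sketch of the SIMCA procedure follows: a separate PCA model is fitted per class, the orthogonal (residual) distance of the training samples to each class hyper-plane defines a critical distance, and a new observation is accepted by every class whose limit it satisfies. The synthetic data, the number of retained components and the quantile-based critical distance are simplifying assumptions; as noted above, the method proper derives the limit from the F-distribution.

```python
import numpy as np

rng = np.random.default_rng(1)

def residual_distance(X: np.ndarray, mean: np.ndarray, P: np.ndarray) -> np.ndarray:
    """Orthogonal distance of each row of X from the class hyper-plane spanned by P."""
    Xc = X - mean
    residual = Xc - (Xc @ P) @ P.T                   # part not explained by the retained PCs
    return np.linalg.norm(residual, axis=1)

def fit_class_model(X: np.ndarray, n_pc: int):
    """PCA model of one class: mean, retained loadings, and a critical residual distance."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)   # PCs of the centred data
    P = Vt[:n_pc].T                                           # loadings (columns = retained PCs)
    resid = residual_distance(X, mean, P)
    # Simplified critical limit: 95th percentile of the training residuals.
    return mean, P, np.quantile(resid, 0.95)

# Two synthetic classes living near different planes in R^5.
A = rng.normal(size=(50, 2)) @ rng.normal(size=(2, 5)) + rng.normal(scale=0.05, size=(50, 5))
B = 3.0 + rng.normal(size=(50, 2)) @ rng.normal(size=(2, 5)) + rng.normal(scale=0.05, size=(50, 5))
models = {"A": fit_class_model(A, n_pc=2), "B": fit_class_model(B, n_pc=2)}

x_new = A[0] + rng.normal(scale=0.01, size=5)        # should resemble class A
assigned = [name for name, (m, P, limit) in models.items()
            if residual_distance(x_new[None, :], m, P)[0] <= limit]
print("classes accepted for the new observation:", assigned)   # soft: may be 0, 1 or 2 classes
```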
https://en.wikipedia.org/wiki/Soft_independent_modelling_of_class_analogies
Structured data analysis is the statistical data analysis of structured data. This can arise either in the form of an a priori structure such as multiple-choice questionnaires or in situations with the need to search for structure that fits the given data, either exactly or approximately. This structure can then be used for making comparisons, predictions, manipulations etc.[1][2]
https://en.wikipedia.org/wiki/Structured_data_analysis_(statistics)
Adatabase indexis adata structurethat improves the speed of data retrieval operations on adatabase tableat the cost of additional writes and storage space to maintain the index data structure. Indexes are used to quickly locate data without having to search every row in a database table every time said table is accessed. Indexes can be created using one or morecolumns of a database table, providing the basis for both rapid randomlookupsand efficient access of ordered records. An index is a copy of selected columns of data, from a table, that is designed to enable very efficient search. An index normally includes a "key" or direct link to the original row of data from which it was copied, to allow the complete row to be retrieved efficiently. Some databases extend the power of indexing by letting developers create indexes on column values that have been transformed by functions orexpressions. For example, an index could be created onupper(last_name), which would only store the upper-case versions of thelast_namefield in the index. Another option sometimes supported is the use ofpartial index, where index entries are created only for those records that satisfy some conditional expression. A further aspect of flexibility is to permit indexing onuser-defined functions, as well as expressions formed from an assortment of built-in functions. Mostdatabasesoftware includes indexing technology that enablessub-linear timelookupto improve performance, aslinear searchis inefficient for large databases. Suppose a database contains N data items and one must be retrieved based on the value of one of the fields. A simple implementation retrieves and examines each item according to the test. If there is only one matching item, this can stop when it finds that single item, but if there are multiple matches, it must test everything. This means that the number of operations in the average case isO(N) orlinear time. Since databases may contain many objects, and since lookup is a common operation, it is often desirable to improve performance. An index is any data structure that improves the performance of lookup. There are many differentdata structuresused for this purpose. There are complex design trade-offs involving lookup performance, index size, and index-update performance. Many index designs exhibit logarithmic (O(log(N))) lookup performance and in some applications it is possible to achieve flat (O(1)) performance. Indexes are used to policedatabase constraints, such as UNIQUE, EXCLUSION,PRIMARY KEYandFOREIGN KEY. An index may be declared as UNIQUE, which creates an implicit constraint on the underlying table. Database systems usually implicitly create an index on a set of columns declared PRIMARY KEY, and some are capable of using an already-existing index to police this constraint. Many database systems require that both referencing and referenced sets of columns in a FOREIGN KEY constraint are indexed, thus improving performance of inserts, updates and deletes to the tables participating in the constraint. Some database systems support an EXCLUSION constraint that ensures that, for a newly inserted or updated record, a certain predicate holds for no other record. This can be used to implement a UNIQUE constraint (with equality predicate) or more complex constraints, like ensuring that no overlapping time ranges or no intersecting geometry objects would be stored in the table. 
An index supporting fast searching for records satisfying the predicate is required to police such a constraint.[1] The data is present in arbitrary order, but thelogical orderingis specified by the index. The data rows may be spread throughout the table regardless of the value of the indexed column or expression. The non-clustered index tree contains the index keys in sorted order, with the leaf level of the index containing the pointer to the record (page and the row number in the data page in page-organized engines; row offset in file-organized engines). In a non-clustered index, There can be more than one non-clustered index on a database table. Clustering alters the data block into a certain distinct order to match the index, resulting in the row data being stored in order. Therefore, only one clustered index can be created on a given database table. Clustered indices can greatly increase overall speed of retrieval, but usually only where the data is accessed sequentially in the same or reverse order of the clustered index, or when a range of items is selected. Since the physical records are in this sort order on disk, the next row item in the sequence is immediately before or after the last one, and so fewer data block reads are required. The primary feature of a clustered index is therefore the ordering of the physical data rows in accordance with the index blocks that point to them. Some databases separate the data and index blocks into separate files, others put two completely different data blocks within the same physical file(s). When multiple databases and multiple tables are joined, it is called acluster(not to be confused with clustered index described previously). The records for the tables sharing the value of a cluster key shall be stored together in the same or nearby data blocks. This may improve the joins of these tables on the cluster key, since the matching records are stored together and less I/O is required to locate them.[2]The cluster configuration defines the data layout in the tables that are parts of the cluster. A cluster can be keyed with aB-treeindex or ahash table. The data block where the table record is stored is defined by the value of the cluster key. The order that the index definition defines the columns in is important. It is possible to retrieve a set of row identifiers using only the first indexed column. However, it is not possible or efficient (on most databases) to retrieve the set of row identifiers using only the second or greater indexed column. For example, in a phone book organized by city first, then by last name, and then by first name, in a particular city, one can easily extract the list of all phone numbers. However, it would be very tedious to find all the phone numbers for a particular last name. One would have to look within each city's section for the entries with that last name. Some databases can do this, others just won't use the index. In the phone book example with acomposite indexcreated on the columns (city, last_name, first_name), if we search by giving exact values for all the three fields, search time is minimal—but if we provide the values forcityandfirst_nameonly, the search uses only thecityfield to retrieve all matched records. Then a sequential lookup checks the matching withfirst_name. So, to improve the performance, one must ensure that the index is created on the order of search columns. Indexes are useful for many applications but come with some limitations. 
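The phone-book argument about composite indexes can be imitated with a sorted list of tuples: binary search over the sorted keys gives the sub-linear (O(log N)) lookup an index provides, prefix queries on the leading columns are efficient, and a query on a non-leading column alone cannot use the ordering and degenerates into a scan. The table, columns and data below are invented for the example.

```python
import bisect

# Composite index on (city, last_name, first_name) -> phone number (invented data).
entries = sorted([
    ("Berlin", "Meier", "Anna", "030-111"),
    ("Berlin", "Schmidt", "Jonas", "030-222"),
    ("Hamburg", "Meier", "Lena", "040-333"),
    ("Hamburg", "Schulz", "Max", "040-444"),
])
keys = [(c, l, f) for c, l, f, _ in entries]

def range_for_prefix(prefix):
    """All entries whose key starts with the given leading columns, via binary search."""
    lo = bisect.bisect_left(keys, prefix)
    hi = bisect.bisect_right(keys, prefix + (chr(0x10FFFF),) * (3 - len(prefix)))
    return entries[lo:hi]

# Leading-column queries use the sort order (logarithmic search, narrow scan):
print(range_for_prefix(("Berlin",)))                 # everyone in Berlin
print(range_for_prefix(("Hamburg", "Meier")))        # Meiers in Hamburg

# A query on last_name alone cannot use the order; it must scan every entry.
print([e for e in entries if e[1] == "Meier"])
```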
Consider the followingSQLstatement:SELECTfirst_nameFROMpeopleWHERElast_name='Smith';. To process this statement without an index the database software must look at the last_name column on every row in the table (this is known as afull table scan). With an index the database simply follows the index data structure (typically aB-tree) until the Smith entry has been found; this is much less computationally expensive than a full table scan. Consider this SQL statement:SELECTemail_addressFROMcustomersWHEREemail_addressLIKE'%@wikipedia.org';. This query would yield an email address for every customer whose email address ends with "@wikipedia.org", but even if the email_address column has been indexed the database must perform a full index scan. This is because the index is built with the assumption that words go from left to right. With awildcardat the beginning of the search-term, the database software is unable to use the underlying index data structure (in other words, the WHERE-clause isnotsargable). This problem can be solved through the addition of another index created onreverse(email_address)and a SQL query like this:SELECTemail_addressFROMcustomersWHEREreverse(email_address)LIKEreverse('%@wikipedia.org');. This puts the wild-card at the right-most part of the query (nowgro.aidepikiw@%), which the index on reverse(email_address) can satisfy. When the wildcard characters are used on both sides of the search word as%wikipedia.org%, the index available on this field is not used. Rather only a sequential search is performed, which takes⁠O(N){\displaystyle O(N)}⁠time. A bitmap index is a special kind of indexing that stores the bulk of its data asbit arrays(bitmaps) and answers most queries by performingbitwise logical operationson these bitmaps. The most commonly used indexes, such asB+ trees, are most efficient if the values they index do not repeat or repeat a small number of times. In contrast, the bitmap index is designed for cases where the values of a variable repeat very frequently. For example, the sex field in a customer database usually contains at most three distinct values: male, female or unknown (not recorded). For such variables, the bitmap index can have a significant performance advantage over the commonly used trees. A dense index indatabasesis afilewith pairs of keys andpointersfor everyrecordin the data file. Every key in this file is associated with a particular pointer toa recordin the sorted data file. In clustered indices with duplicate keys, the dense index pointsto the first recordwith that key.[3] A sparse index in databases is a file with pairs of keys and pointers for everyblockin the data file. Every key in this file is associated with a particular pointerto the blockin the sorted data file. In clustered indices with duplicate keys, the sparse index pointsto the lowest search keyin each block. A reverse-key index reverses the key value before entering it in the index. E.g., the value 24538 becomes 83542 in the index. Reversing the key value is particularly useful for indexing data such as sequence numbers, where new key values monotonically increase. An inverted index maps a content word to the document containing it, thereby allowing full-text searches. The primary index contains the key fields of the table and a pointer to the non-key fields of the table. The primary index is created automatically when the table is created in the database. 
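The reverse-key workaround for leading-wildcard searches described above can be sketched directly: each key is stored reversed, so a suffix search on the original string becomes a prefix search on the reversed key, which a sorted index answers efficiently. The addresses are invented for the example.

```python
import bisect

emails = ["alice@wikipedia.org", "bob@example.com", "carol@wikipedia.org", "dave@example.org"]

# Index on reverse(email_address): a suffix of the address becomes a prefix of the key.
index = sorted((e[::-1], e) for e in emails)
keys = [k for k, _ in index]

def ends_with(suffix):
    """All addresses ending with `suffix`, found by a prefix scan of the reversed keys."""
    p = suffix[::-1]
    lo = bisect.bisect_left(keys, p)
    hi = bisect.bisect_right(keys, p + "\U0010FFFF")
    return [addr for _, addr in index[lo:hi]]

print(ends_with("@wikipedia.org"))   # ['alice@wikipedia.org', 'carol@wikipedia.org']
```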
It is used to index fields that are neither ordering fields nor key fields (there is no assurance that the file is organized on key field or primary key field). One index entry for every tuple in the data file (dense index) contains the value of the indexed attribute and pointer to the block or record. A hash index in database is most commonly used index in data management. It is created on a column that contains unique values, such as a primary key or email address. Another type of index used in database systems islinear hashing. Indices can be implemented using a variety of data structures. Popular indices includebalanced trees,B+ treesandhashes.[4] InMicrosoft SQL Server, theleaf nodeof the clustered index corresponds to the actual data, not simply a pointer to data that resides elsewhere, as is the case with a non-clustered index.[5]Each relation can have a single clustered index and many unclustered indices.[6] An index is typically being accessed concurrently by several transactions and processes, and thus needsconcurrency control. While in principle indexes can utilize the common database concurrency control methods, specialized concurrency control methods for indexes exist, which are applied in conjunction with the common methods for a substantial performance gain. In most cases, an index is used to quickly locate the data records from which the required data is read. In other words, the index is only used to locate data records in the table and not to return data. A covering index is a special case where the index itself contains the required data fields and can answer the required data. Consider the following table (other fields omitted): To find the Name for ID 13, an index on (ID) is useful, but the record must still be read to get the Name. However, an index on (ID, Name) contains the required data field and eliminates the need to look up the record. Covering indexes are each for a specific table. Queries which JOIN/ access across multiple tables, may potentially consider covering indexes on more than one of these tables.[7] A covering index can dramatically speed up data retrieval but may itself be large due to the additional keys, which slow down data insertion and update. To reduce such index size, some systems allow including non-key fields in the index. Non-key fields are not themselves part of the index ordering but only included at the leaf level, allowing for a covering index with less overall index size. This can be done in SQL withCREATEINDEXmy_indexONmy_table(id)INCLUDE(name);.[8][9] No standard defines how to create indexes, because the ISO SQL Standard does not cover physical aspects.Indexes are one of the physical parts of database conception among others like storage (tablespace or filegroups).[clarify]RDBMS vendors all give aCREATEINDEXsyntax with some specific options that depend on their software's capabilities.
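As a small illustration of the covering-index idea discussed above, the dictionaries below stand in for index pages and the row store: with a plain index on ID the row must still be fetched to obtain the Name, whereas an index on (ID, Name) answers the query by itself. The data and field names are invented.

```python
# Row store: full records keyed by physical position (invented example data).
rows = {0: {"ID": 12, "Name": "Alice", "City": "Berlin"},
        1: {"ID": 13, "Name": "Bob",   "City": "Hamburg"},
        2: {"ID": 14, "Name": "Carol", "City": "Munich"}}

# Plain index on ID: maps key -> row position; the record still has to be read.
index_id = {rec["ID"]: pos for pos, rec in rows.items()}

# Covering index on (ID, Name): the needed field is stored in the index itself.
covering = {rec["ID"]: rec["Name"] for rec in rows.values()}

# SELECT Name FROM t WHERE ID = 13
print(rows[index_id[13]]["Name"])   # index lookup plus an extra row fetch
print(covering[13])                 # answered entirely from the covering index
```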
https://en.wikipedia.org/wiki/Database_index
Concept miningis an activity that results in the extraction ofconceptsfromartifacts. Solutions to the task typically involve aspects ofartificial intelligenceandstatistics, such asdata miningandtext mining.[1][2]Because artifacts are typically a loosely structured sequence of words and other symbols (rather than concepts), the problem isnontrivial, but it can provide powerful insights into the meaning, provenance and similarity of documents. Traditionally, the conversion of words to concepts has been performed using athesaurus,[3]and for computational techniques the tendency is to do the same. The thesauri used are either specially created for the task, or a pre-existing language model, usually related to Princeton'sWordNet. The mappings of words to concepts[4]are oftenambiguous. Typically each word in a given language will relate to several possible concepts. Humans use context to disambiguate the various meanings of a given piece of text, where availablemachine translationsystems cannot easily infer context. For the purposes of concept mining, however, these ambiguities tend to be less important than they are with machine translation, for in large documents the ambiguities tend to even out, much as is the case with text mining. There are many techniques fordisambiguationthat may be used. Examples are linguistic analysis of the text and the use of word and concept association frequency information that may be inferred from large text corpora. Recently, techniques that base onsemantic similaritybetween the possible concepts and the context have appeared and gained interest in the scientific community. One of the spin-offs of calculating document statistics in the concept domain, rather than the word domain, is that concepts form natural tree structures based onhypernymyandmeronymy. These structures can be used to generate simple tree membership statistics, that can be used to locate any document in aEuclidean concept space. If the size of a document is also considered as another dimension of this space then an extremely efficient indexing system can be created. This technique is currently in commercial use locating similar legal documents in a 2.5 million document corpus. Standard numeric clustering techniques may be used in "concept space" as described above to locate and index documents by the inferred topic. These are numerically far more efficient than theirtext miningcousins, and tend to behave more intuitively, in that they map better to the similarity measures a human would generate.
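The mapping from words to concepts and the tree-membership statistics described above can be illustrated with a self-contained toy example. The miniature thesaurus and hypernym tree below are invented for the sketch and sidestep word-sense disambiguation; a real system would draw on a large lexical resource such as WordNet.

```python
from collections import Counter

# Tiny thesaurus: word -> concept (one sense per word, so no disambiguation needed).
thesaurus = {"dog": "dog", "puppy": "dog", "cat": "cat", "oak": "oak", "pine": "pine"}

# Hypernym tree: concept -> parent concept (None marks the root).
hypernym = {"dog": "canine", "canine": "mammal", "cat": "feline", "feline": "mammal",
            "mammal": "animal", "oak": "tree", "pine": "tree", "tree": "plant",
            "animal": "entity", "plant": "entity", "entity": None}

def ancestors(concept):
    """The concept itself plus all of its hypernyms up to the root."""
    while concept is not None:
        yield concept
        concept = hypernym[concept]

def concept_profile(document):
    """Tree-membership counts: how often each node of the concept tree is hit."""
    profile = Counter()
    for word in document.lower().split():
        if word in thesaurus:
            profile.update(ancestors(thesaurus[word]))
    return profile

doc = "the puppy chased the cat under the oak"
print(concept_profile(doc))
# Counter({'entity': 3, 'mammal': 2, 'animal': 2, ...}) -- dog and cat both roll up to
# 'mammal', so documents about either look similar in concept space even with no shared words.
```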
https://en.wikipedia.org/wiki/Concept_mining
Atelecommandortelecontrolis acommandsent to control a remote system or systems not directly connected (e.g. via wires) to the place from which the telecommand is sent. The word is derived fromtele= remote (Greek), andcommand= to entrust/order (Latin). Systems that need remote measurement and reporting of information of interest to the system designer or operator require the counterpart of telecommand,telemetry. Thetelecommandcan be done inreal timeor not depending on the circumstances (in space, delay may be of days), as was the case ofMarsokhod.[1] For a Telecommand (TC) to be effective, it must be compiled into a pre-arranged format (which may follow a standard structure), modulated onto a carrier wave which is then transmitted with adequate power to the remote system. The remote system will then demodulate the digital signal from the carrier, decode the TC, and execute it. Transmission of the carrier wave can be by ultrasound, infra-red or other electromagnetic means. Infraredlight makes up the invisible section of theelectromagnetic spectrum.[2]This light, also classified as heat, transmits signals between the transmitter and receiver of the remote system.[2]Telecommand systems usually include a physical remote, which contains four key parts: buttons,integrated circuit, button contacts, and alight-emitting diode.[3]When the buttons on a remote are pressed they touch and close their corresponding contacts below them within the remote.[3]This completes the necessary circuit on the circuit board along with a change inelectrical resistance, which is detected by the integrated circuit. Based on the change in electrical resistance, the integrated circuit distinguishes which button was pushed and sends a correspondingbinary codeto the light-emitting diode (LED) usually located at the front of the remote.[3]To transfer the information from the remote to the receiver, the LED turns the electrical signals into an invisible beam of infrared light that corresponds with the binary code and sends this light to the receiver.[3]The receiver then detects the light signal via aphotodiodeand it is transformed into an electrical signal for the command and is sent to the receiver’s integrated circuit/microprocessorto process and complete the command.[3]The strength of the transmitting LED can vary and determines the required positioning accuracy of the remote in relevance to the receiver.[2]Infrared remotes have a maximum range of approximately 30 feet and require the remote control or transmitter and receiver to be within aline of sight.[2] Ultrasonic is a technology used more frequently in the past for telecommand. InventorRobert Adleris known for inventing theremote controlwhich did not require batteries and used ultrasonic technology.[4]There are four aluminum rods inside the transmitter that produce high frequency sounds when they are hit at one end. Each rod is a different length, which enables them to produce varying sound pitches, which control the receiving unit.[5]This technology was widely used but had certain issues such as dogs being bothered by the high frequency sounds.[6] Often the smaller new remote controlled airplanes and helicopters are incorrectly advertised as radio controlled devices (seeRadio control) but they are either controlled via infra-red transmission or electromagnetically guided. Both of these systems are part of the telecommand area. To prevent unauthorised access to the remote system, TCencryptionmay be employed.Secret sharingmay be used.
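As a rough sketch of the button-to-binary-code-to-pulse path described above, the following Python fragment encodes a button press as a sequence of LED on/off pulses and decodes it back; the button codes and pulse durations are invented for illustration and do not follow any particular vendor protocol (real remotes use schemes such as NEC or RC-5 with their own timings).

# Toy encode/decode of an infrared telecommand: button -> binary code ->
# pulse pattern -> decoded command.
BUTTON_CODES = {"power": 0b0001, "vol_up": 0b0010, "vol_down": 0b0011}

def encode(button, bits=4):
    code = BUTTON_CODES[button]
    # One pulse per bit: (LED on time, LED off time) in microseconds; a long
    # "off" gap encodes a 1, a short gap encodes a 0.
    return [(560, 1690) if (code >> i) & 1 else (560, 560)
            for i in reversed(range(bits))]

def decode(pulses):
    bits = ["1" if off > 1000 else "0" for _, off in pulses]
    code = int("".join(bits), 2)
    return {v: k for k, v in BUTTON_CODES.items()}[code]

assert decode(encode("vol_up")) == "vol_up"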
https://en.wikipedia.org/wiki/Telecommand
In econometrics, the truncated normal hurdle model is a variant of the Tobit model and was first proposed by Cragg in 1971.[1] The standard Tobit model is represented as y = (xβ + u)·1[xβ + u > 0], where u | x ~ N(0, σ²). This construction implicitly imposes two first-order assumptions:[2] the partial effect of a covariate on the participation probability P(y > 0 | x) and on the conditional mean E(y | x, y > 0) must have the same sign, and the relative effects of any two covariates on these two quantities must be identical. However, these implicit assumptions are too strong and inconsistent with many contexts in economics. For instance, when we need to decide whether to invest and build a factory, the construction cost might be more influential than the product price; but once the factory has been built, the product price is clearly more influential for the revenue. Hence, the second implicit assumption does not match this context.[4] The essence of this issue is that the standard Tobit model implicitly imposes a very strong link between the participation decision (y = 0 or y > 0) and the amount decision (the magnitude of y when y > 0). If a corner solution model is represented in the general form y = s·w, where s is the participation decision and w is the amount decision, the standard Tobit model assumes s = 1[xβ + u > 0] and w = xβ + u, so that the same index and the same error drive both decisions. To make the model compatible with more contexts, a natural improvement is to model the participation decision by a probit, s = 1[xγ + v > 0] with v | x ~ N(0, 1), and the amount decision by w = xβ + e, where the error term e is such that, given x, w has the truncated normal density φ((w − xβ)/σ) / (σ Φ(xβ/σ)) on w > 0; s and w are assumed independent conditional on x. This is called the truncated normal hurdle model, proposed in Cragg (1971).[1] By adding one more parameter vector and detaching the amount decision from the participation decision, the model can fit more contexts. Under this setup, the density of y given x can be written as f(y | x) = 1[y = 0]·[1 − Φ(xγ)] + 1[y > 0]·Φ(xγ)·φ((y − xβ)/σ) / (σ Φ(xβ/σ)). From this density representation it is clear that the model degenerates to the standard Tobit model when γ = β/σ; this also shows that the truncated normal hurdle model is more general than the standard Tobit model. The truncated normal hurdle model is usually estimated through maximum likelihood. The log-likelihood contribution of observation i can be written as ℓ_i(γ, β, σ) = 1[y_i = 0]·log[1 − Φ(x_i γ)] + 1[y_i > 0]·{log Φ(x_i γ) + log φ((y_i − x_i β)/σ) − log σ − log Φ(x_i β/σ)}. Because the log-likelihood separates, γ can be estimated by a probit model for the participation decision and (β, σ) can be estimated by a truncated normal regression on the observations with y > 0.[5] Based on these estimates, consistent estimates of the average partial effects can be obtained.
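A minimal sketch of this log-likelihood in Python, using NumPy and SciPy and the parameterization above, is given below; it is an illustrative implementation, not tied to any particular econometrics package, and the parameter packing (gamma, beta, log sigma in one vector) is a convenience chosen for the example.

import numpy as np
from scipy.stats import norm

def hurdle_loglik(params, y, x):
    """Cragg truncated normal hurdle log-likelihood.
    params = (gamma, beta, log_sigma); y is the outcome vector, x the regressor matrix."""
    k = x.shape[1]
    gamma, beta = params[:k], params[k:2 * k]
    sigma = np.exp(params[2 * k])
    xg, xb = x @ gamma, x @ beta
    ll = np.where(
        y == 0,
        norm.logcdf(-xg),                                  # log P(y = 0 | x) = log[1 - Phi(x gamma)]
        norm.logcdf(xg)                                    # participation term
        + norm.logpdf((y - xb) / sigma) - np.log(sigma)    # normal density of the amount
        - norm.logcdf(xb / sigma),                         # truncation correction
    )
    return ll.sum()

# In practice one would maximize this (e.g. scipy.optimize.minimize on the
# negative log-likelihood), or equivalently estimate gamma by a probit of
# 1[y > 0] on x and (beta, sigma) by a truncated normal regression on y > 0.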
https://en.wikipedia.org/wiki/Truncated_normal_hurdle_model
Incomputing, adata warehouse(DWorDWH), also known as anenterprise data warehouse(EDW), is a system used forreportinganddata analysisand is a core component ofbusiness intelligence.[1]Data warehouses are centralrepositoriesof data integrated from disparate sources. They store current and historical data organized in a way that is optimized for data analysis, generation of reports, and developing insights across the integrated data.[2]They are intended to be used by analysts and managers to help make organizational decisions.[3] The data stored in the warehouse isuploadedfromoperational systems(such as marketing or sales). The data may pass through anoperational data storeand may requiredata cleansingfor additional operations to ensuredata qualitybefore it is used in the data warehouse for reporting. The two main workflows for building a data warehouse system areextract, transform, load(ETL) andextract, load, transform(ELT). The environment for data warehouses and marts includes the following: Operational databases are optimized for the preservation ofdata integrityand speed of recording of business transactions through use ofdatabase normalizationand anentity–relationship model. Operational system designers generally followCodd's 12 rulesofdatabase normalizationto ensure data integrity. Fully normalized database designs (that is, those satisfying all Codd rules) often result in information from a business transaction being stored in dozens to hundreds of tables.Relational databasesare efficient at managing the relationships between these tables. The databases have very fast insert/update performance because only a small amount of data in those tables is affected by each transaction. To improve performance, older data are periodically purged. Data warehouses are optimized for analytic access patterns, which usually involve selecting specific fields rather than all fields as is common in operational databases. Because of these differences in access, operational databases (loosely, OLTP) benefit from the use of a row-oriented database management system (DBMS), whereas analytics databases (loosely, OLAP) benefit from the use of acolumn-oriented DBMS. Operational systems maintain a snapshot of the business, while warehouses maintain historic data through ETL processes that periodically migrate data from the operational systems to the warehouse. Online analytical processing(OLAP) is characterized by a low rate of transactions and complex queries that involve aggregations. Response time is an effective performance measure of OLAP systems. OLAP applications are widely used fordata mining. OLAP databases store aggregated, historical data in multi-dimensional schemas (usuallystar schemas). OLAP systems typically have a data latency of a few hours, while data mart latency is closer to one day. The OLAP approach is used to analyze multidimensional data from multiple sources and perspectives. The three basic operations in OLAP are roll-up (consolidation), drill-down, and slicing & dicing. Online transaction processing(OLTP) is characterized by a large numbers of short online transactions (INSERT, UPDATE, DELETE). OLTP systems emphasize fast query processing and maintainingdata integrityin multi-access environments. For OLTP systems, performance is the number of transactions per second. OLTP databases contain detailed and current data. The schema used to store transactional databases is the entity model (usually3NF).[citation needed]Normalization is the norm for data modeling techniques in this system. 
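The row-oriented versus column-oriented distinction mentioned above can be illustrated with a small Python sketch; the layouts and data are simplified for illustration and are not meant to reflect any particular DBMS.

# An OLTP-style row store keeps each record together (cheap to insert or fetch
# one transaction); an OLAP-style column store keeps each field together
# (cheap to scan one column across many rows).
row_store = [
    {"order_id": 1, "customer": "A", "amount": 120.0},
    {"order_id": 2, "customer": "B", "amount": 75.5},
]

column_store = {
    "order_id": [1, 2],
    "customer": ["A", "B"],
    "amount":   [120.0, 75.5],
}

# OLTP-style access: fetch one whole record.
record = row_store[1]

# OLAP-style access: aggregate one column without touching the other fields.
total_amount = sum(column_store["amount"])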
Predictive analyticsis aboutfindingand quantifying hidden patterns in the data using complex mathematical models to prepare for different future outcomes, including demand forproducts, and make better decisions. By contrast, OLAP focuses on historical data analysis and is reactive. Predictive systems are also used forcustomer relationship management(CRM). Adata martis a simple data warehouse focused on a single subject or functional area. Hence it draws data from a limited number of sources such as sales, finance or marketing. Data marts are often built and controlled by a single department in an organization. The sources could be internal operational systems, a central data warehouse, or external data.[4]As with warehouses, stored data is usually not normalized. Types of data marts includedependent, independent, and hybrid data marts.[clarification needed] The typicalextract, transform, load(ETL)-based data warehouse usesstaging,data integration, and access layers to house its key functions. The staging layer or staging database stores raw data extracted from each of the disparate source data systems. The integration layer integrates disparate data sets by transforming the data from the staging layer, often storing this transformed data in anoperational data store(ODS) database. The integrated data are then moved to yet another database, often called the data warehouse database, where the data is arranged into hierarchical groups, often called dimensions, and intofactsand aggregate facts. The combination of facts and dimensions is sometimes called astar schema. The access layer helps users retrieve data.[5] The main source of the data iscleansed, transformed, catalogued, and made available for use by managers and other business professionals fordata mining,online analytical processing,market researchanddecision support.[6]However, the means to retrieve and analyze data, to extract, transform, and load data, and to manage thedata dictionaryare also considered essential components of a data warehousing system. Many references to data warehousing use this broader context. Thus, an expanded definition of data warehousing includesbusiness intelligence tools, tools to extract, transform, and load data into the repository, and tools to manage and retrievemetadata. ELT-based data warehousing gets rid of a separateETLtool for data transformation. Instead, it maintains a staging area inside the data warehouse itself. In this approach, data gets extracted from heterogeneous source systems and are then directly loaded into the data warehouse, before any transformation occurs. All necessary transformations are then handled inside the data warehouse itself. Finally, the manipulated data gets loaded into target tables in the same data warehouse. A data warehouse maintains a copy of information from the source transaction systems. This architectural complexity provides the opportunity to: The concept of data warehousing dates back to the late 1980s[7]when IBM researchers Barry Devlin and Paul Murphy developed the "business data warehouse". In essence, the data warehousing concept was intended to provide an architectural model for the flow of data from operational systems todecision support environments. The concept attempted to address the various problems associated with this flow, mainly the high costs associated with it. In the absence of a data warehousing architecture, an enormous amount of redundancy was required to support multiple decision support environments. 
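The staging / integration / access layering of an ETL-based warehouse can be sketched in a few lines of Python; the source rows, cleansing rules, and star-schema tables below are invented purely to show the flow from raw extracts to dimension and fact tables.

# Toy ETL: extract raw rows, transform (cleanse/normalise), load into a small
# star schema consisting of one dimension table and one fact table.
staging = [  # extract: raw rows copied from a source system
    {"cust": " alice ", "country": "US", "amount": "120.0", "date": "2024-01-05"},
    {"cust": "Bob",     "country": "us", "amount": "75.5",  "date": "2024-01-06"},
]

dim_customer, fact_sales = {}, []

for raw in staging:
    # transform: trim and normalise values coming from heterogeneous sources
    name = raw["cust"].strip().title()
    country = raw["country"].upper()
    # load dimension: assign a surrogate key per distinct customer
    key = dim_customer.setdefault((name, country), len(dim_customer) + 1)
    # load fact: numeric measures plus foreign keys to the dimensions
    fact_sales.append({"customer_key": key, "date": raw["date"],
                       "amount": float(raw["amount"])})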
In larger corporations, it was typical for multiple decision support environments to operate independently. Though each environment served different users, they often required much of the same stored data. The process of gathering, cleaning and integrating data from various sources, usually from long-term existing operational systems (usually referred to aslegacy systems), was typically in part replicated for each environment. Moreover, the operational systems were frequently reexamined as new decision support requirements emerged. Often new requirements necessitated gathering, cleaning and integrating new data from "data marts" that was tailored for ready access by users. Additionally, with the publication of The IRM Imperative (Wiley & Sons, 1991) by James M. Kerr, the idea of managing and putting a dollar value on an organization's data resources and then reporting that value as an asset on a balance sheet became popular. In the book, Kerr described a way to populate subject-area databases from data derived from transaction-driven systems to create a storage area where summary data could be further leveraged to inform executive decision-making. This concept served to promote further thinking of how a data warehouse could be developed and managed in a practical way within any enterprise. Key developments in early years of data warehousing: A fact is a value or measurement in the system being managed. Raw facts are ones reported by the reporting entity. For example, in a mobile telephone system, if abase transceiver station(BTS) receives 1,000 requests for traffic channel allocation, allocates for 820, and rejects the rest, it could report three facts to a management system: Raw facts are aggregated to higher levels in variousdimensionsto extract information more relevant to the service or business. These are called aggregated facts or summaries. For example, if there are three BTSs in a city, then the facts above can be aggregated to the city level in the network dimension. For example: The two most important approaches to store data in a warehouse are dimensional and normalized. The dimensional approach uses astar schemaas proposed byRalph Kimball. The normalized approach, also called thethird normal form(3NF) is an entity-relational normalized model proposed by Bill Inmon.[21] In adimensional approach,transaction datais partitioned into "facts", which are usually numeric transaction data, and "dimensions", which are the reference information that gives context to the facts. For example, a sales transaction can be broken up into facts such as the number of products ordered and the total price paid for the products, and into dimensions such as order date, customer name, product number, order ship-to and bill-to locations, and salesperson responsible for receiving the order. This dimensional approach makes data easier to understand and speeds up data retrieval.[15]Dimensional structures are easy for business users to understand because the structure is divided into measurements/facts and context/dimensions. Facts are related to the organization's business processes and operational system, and dimensions are the context about them (Kimball, Ralph 2008). Another advantage is that the dimensional model does not involve a relational database every time. Thus, this type of modeling technique is very useful for end-user queries in data warehouse. 
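The roll-up of raw facts along a dimension, as in the base-station example above, can be shown with a short Python sketch; the counters below are invented and the city level stands in for one level of the network dimension.

# Aggregate per-BTS raw facts up to per-city summaries, then derive an
# aggregated fact (rejected allocations) from the summaries.
raw_facts = [
    {"bts": "BTS1", "city": "Dublin", "requests": 1000, "allocated": 820},
    {"bts": "BTS2", "city": "Dublin", "requests": 1200, "allocated": 1100},
    {"bts": "BTS3", "city": "Dublin", "requests": 800,  "allocated": 790},
]

city_summary = {}
for fact in raw_facts:
    agg = city_summary.setdefault(fact["city"], {"requests": 0, "allocated": 0})
    agg["requests"] += fact["requests"]
    agg["allocated"] += fact["allocated"]

for agg in city_summary.values():
    agg["rejected"] = agg["requests"] - agg["allocated"]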
The model of facts and dimensions can also be understood as adata cube,[22]where dimensions are the categorical coordinates in a multi-dimensional cube, the fact is a value corresponding to the coordinates. The main disadvantages of the dimensional approach are: In the normalized approach, the data in the warehouse are stored following, to a degree,database normalizationrules. Normalized relational database tables are grouped intosubject areas(for example, customers, products and finance). When used in large enterprises, the result is dozens of tables linked by a web of joins.(Kimball, Ralph 2008). The main advantage of this approach is that it is straightforward to add information into the database. Disadvantages include that, because of the large number of tables, it can be difficult for users to join data from different sources into meaningful information and access the information without a precise understanding of the date sources and thedata structureof the data warehouse. Both normalized and dimensional models can be represented in entity–relationship diagrams because both contain joined relational tables. The difference between them is the degree of normalization. These approaches are not mutually exclusive, and there are other approaches. Dimensional approaches can involve normalizing data to a degree (Kimball, Ralph 2008). InInformation-Driven Business,[23]Robert Hillardcompares the two approaches based on the information needs of the business problem. He concludes that normalized models hold far more information than their dimensional equivalents (even when the same fields are used in both models) but at the cost of usability. The technique measures information quantity in terms ofinformation entropyand usability in terms of the Small Worlds data transformation measure.[24] In thebottom-upapproach,data martsare first created to provide reporting and analytical capabilities for specificbusiness processes. These data marts can then be integrated to create a comprehensive data warehouse. The data warehouse bus architecture is primarily an implementation of "the bus", a collection ofconformed dimensionsandconformed facts, which are dimensions that are shared (in a specific way) between facts in two or more data marts.[25] Thetop-downapproach is designed using a normalized enterprisedata model."Atomic" data, that is, data at the greatest level of detail, are stored in the data warehouse. Dimensional data marts containing data needed for specific business processes or specific departments are created from the data warehouse.[26] Data warehouses often resemble thehub and spokes architecture.Legacy systemsfeeding the warehouse often includecustomer relationship managementandenterprise resource planning, generating large amounts of data. To consolidate these various data models, and facilitate theextract transform loadprocess, data warehouses often make use of anoperational data store, the information from which is parsed into the actual data warehouse. To reduce data redundancy, larger systems often store the data in a normalized way. Data marts for specific reports can then be built on top of the data warehouse. A hybrid (also called ensemble) data warehouse database is kept onthird normal formto eliminatedata redundancy. A normal relational database, however, is not efficient for business intelligence reports where dimensional modelling is prevalent. 
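Returning to the data-cube view mentioned above, a cube can be pictured as a mapping from categorical coordinates to a fact value; the following Python sketch, with invented sales figures, shows a slice (fixing one dimension) and a roll-up (aggregating dimensions away).

# A tiny data cube: (date, product, region) -> units sold.
cube = {
    ("2024-01", "widget", "EU"): 10,
    ("2024-01", "widget", "US"): 4,
    ("2024-02", "gadget", "EU"): 7,
}

def slice_cube(cube, region):
    """Slice: fix the region coordinate, keeping the remaining dimensions."""
    return {(d, p): v for (d, p, r), v in cube.items() if r == region}

def roll_up_by_product(cube):
    """Roll-up: aggregate away the date and region dimensions."""
    totals = {}
    for (_, product, _), units in cube.items():
        totals[product] = totals.get(product, 0) + units
    return totals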
Small data marts can shop for data from the consolidated warehouse and use the filtered, specific data for the fact tables and dimensions required. The data warehouse provides a single source of information from which the data marts can read, providing a wide range of business information. The hybrid architecture allows a data warehouse to be replaced with amaster data managementrepository where operational (not static) information could reside. Thedata vault modelingcomponents follow hub and spokes architecture. This modeling style is a hybrid design, consisting of the best practices from both third normal form andstar schema. The data vault model is not a true third normal form, and breaks some of its rules, but it is a top-down architecture with a bottom up design. The data vault model is geared to be strictly a data warehouse. It is not geared to be end-user accessible, which, when built, still requires the use of a data mart or star schema-based release area for business purposes. There are basic features that define the data in the data warehouse that include subject orientation, data integration, time-variant, nonvolatile data, and data granularity. Unlike the operational systems, the data in the data warehouse revolves around the subjects of the enterprise. Subject orientation is notdatabase normalization. Subject orientation can be really useful for decision-making. Gathering the required objects is called subject-oriented. The data found within the data warehouse is integrated. Since it comes from several operational systems, all inconsistencies must be removed. Consistencies include naming conventions, measurement of variables, encoding structures, physical attributes of data, and so forth. While operational systems reflect current values as they support day-to-day operations, data warehouse data represents a long time horizon (up to 10 years) which means it stores mostly historical data. It is mainly meant for data mining and forecasting. (E.g. if a user is searching for a buying pattern of a specific customer, the user needs to look at data on the current and past purchases.)[27] The data in the data warehouse is read-only, which means it cannot be updated, created, or deleted (unless there is a regulatory or statutory obligation to do so).[28] In the data warehouse process, data can be aggregated in data marts at different levels of abstraction. The user may start looking at the total sale units of a product in an entire region. Then the user looks at the states in that region. Finally, they may examine the individual stores in a certain state. Therefore, typically, the analysis starts at a higher level and drills down to lower levels of details.[27] Withdata virtualization, the data used remains in its original locations and real-time access is established to allow analytics across multiple sources creating a virtual data warehouse. This can aid in resolving some technical difficulties such as compatibility problems when combining data from various platforms, lowering the risk of error caused by faulty data, and guaranteeing that the newest data is used. Furthermore, avoiding the creation of a new database containing personal information can make it easier to comply with privacy regulations. 
However, with data virtualization, the connection to all necessary data sources must be operational as there is no local copy of the data, which is one of the main drawbacks of the approach.[29] The different methods used to construct/organize a data warehouse specified by an organization are numerous. The hardware utilized, software created and data resources specifically required for the correct functionality of a data warehouse are the main components of the data warehouse architecture. All data warehouses have multiple phases in which the requirements of the organization are modified and fine-tuned.[30] These terms refer to the level of sophistication of a data warehouse: In thehealthcaresector, data warehouses are critical components ofhealth informatics, enabling the integration, storage, and analysis of large volumes of clinical, administrative, and operational data. These systems consolidate information from disparate sources such aselectronic health records(EHRs),laboratory information systems,picture archiving and communication systems(PACS), andmedical billingplatforms. By centralizing data, healthcare data warehouses support a range of functions includingpopulation health,clinical decision support, quality improvement,public health surveillance, andmedical research. Healthcare data warehouses often incorporate specialized data models that account for the complexity and sensitivity of medical data, such as temporal information (e.g., longitudinal patient histories), coded terminologies (e.g.,ICD-10,SNOMED CT), and compliance with privacy regulations (e.g.,HIPAAin the United States orGDPRin the European Union). Following is a list of major patient data warehouses with broad scope (not disease- orspecialty-specific), with variables including laboratory results, pharmacy, age, race, socioeconomic status, comorbidities and longitudinal changes: These warehouses enable data-driven healthcare by supporting retrospective studies,comparative effectiveness research, andpredictive analytics, often with the use ofhealthcare-applied artificial intelligence.
https://en.wikipedia.org/wiki/Data_warehouse
In computational complexity theory, the Immerman–Szelepcsényi theorem states that nondeterministic space complexity classes are closed under complementation. It was proven independently by Neil Immerman and Róbert Szelepcsényi in 1987, for which they shared the 1995 Gödel Prize. In its general form the theorem states that NSPACE(s(n)) = co-NSPACE(s(n)) for any function s(n) ≥ log n. The result is equivalently stated as NL = co-NL; although this is the special case when s(n) = log n, it implies the general theorem by a standard padding argument.[1] The result solved the second LBA problem. In other words, if a nondeterministic machine can solve a problem, another machine with the same resource bounds can solve its complement problem (with the yes and no answers reversed) in the same asymptotic amount of space. No similar result is known for the time complexity classes, and indeed it is conjectured that NP is not equal to co-NP. The principle used to prove the theorem has become known as inductive counting. It has also been used to prove other theorems in computational complexity, including the closure of LOGCFL under complementation and the existence of error-free randomized logspace algorithms for USTCON.[2] We prove here that NL = co-NL; the theorem is obtained from this special case by a padding argument. The st-connectivity problem asks, given a digraph G and two vertices s and t, whether there is a directed path from s to t in G. This problem is NL-complete, therefore its complement st-non-connectivity is co-NL-complete. It suffices to show that st-non-connectivity is in NL. This proves co-NL ⊆ NL, and by complementation, NL ⊆ co-NL. We fix a digraph G, a source vertex s, and a target vertex t. We denote by R_k the set of vertices which are reachable from s in at most k steps. Note that if t is reachable from s, it is reachable in at most n − 1 steps, where n is the number of vertices, therefore we are reduced to testing whether t ∉ R_{n−1}. We remark that R_0 = {s}, and R_{k+1} is the set of vertices v which are either in R_k, or the target of an edge w → v where w is in R_k. This immediately gives an algorithm to decide t ∈ R_{n−1}, by successively computing R_1, …, R_{n−1}. However, this algorithm uses too much space to solve the problem in NL, since storing a set R_k requires one bit per vertex. The crucial idea of the proof is that instead of computing R_{k+1} from R_k, it is possible to compute the size of R_{k+1} from the size of R_k, with the help of non-determinism. We iterate over vertices and increment a counter for each vertex that is found to belong to R_{k+1}. The problem is how to determine whether v ∈ R_{k+1} for a given vertex v, when we only have the size of R_k available. To this end, we iterate over vertices w, and for each w, we non-deterministically guess whether w ∈ R_k. If we guess w ∈ R_k, and v = w or there is an edge w → v, then we determine that v belongs to R_{k+1}. If this fails for all vertices w, then v does not belong to R_{k+1}. Thus, the computation that determines whether v belongs to R_{k+1} splits into branches for the different guesses of which vertices belong to R_k. A mechanism is needed to make all of these branches abort (reject immediately), except the one where all the guesses were correct. For this, when we have made a "yes-guess" that w ∈ R_k, we check this guess by non-deterministically looking for a path from s to w of length at most k. If this check fails, we abort the current branch. If it succeeds, we increment a counter of "yes-guesses". On the other hand, we do not check the "no-guesses" that w ∉ R_k (this would require solving st-non-connectivity, which is precisely the problem that we are solving in the first place).
However, at the end of the loop over w, we check that the counter of "yes-guesses" matches the size of R_k, which we know. If there is a mismatch, we abort. Otherwise, all the "yes-guesses" were correct, and there was exactly the right number of them, thus all "no-guesses" were correct as well. This concludes the computation of the size of R_{k+1} from the size of R_k. Iteratively, we compute the sizes of R_1, R_2, …, R_{n−2}. Finally, we check whether t ∈ R_{n−1}, which is possible from the size of R_{n−2} by the sub-algorithm that is used inside the computation of the size of R_{k+1}. A pseudocode summary of the algorithm is sketched at the end of this section. As a corollary, in the same article, Immerman proved that, using descriptive complexity's equality between NL and FO(Transitive Closure), the logarithmic hierarchy, i.e. the languages decided by an alternating Turing machine in logarithmic space with a bounded number of alternations, is the same class as NL.
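The following Python sketch stands in for that pseudocode summary. It mirrors the inductive-counting structure of the proof, computing |R_{k+1}| from |R_k| and verifying "yes-guesses" by bounded-length path searches, but it replaces the nondeterministic guesses with explicit deterministic checks, so it illustrates the logic of the algorithm rather than its logarithmic space bound; the graph is assumed to be a dict mapping every vertex to its list of successors.

def reachable_within(graph, s, w, k):
    """Is there a path from s to w of length at most k? (verifies a yes-guess)"""
    frontier, seen = {s}, {s}
    for _ in range(k):
        frontier = {v for u in frontier for v in graph.get(u, [])} - seen
        seen |= frontier
    return w in seen

def st_non_connectivity(graph, s, t):
    n = len(graph)
    size_prev = 1                       # |R_0| = |{s}|
    for k in range(n - 1):              # compute |R_{k+1}| from |R_k|
        size_next = 0
        for v in graph:
            in_next = False
            yes_guesses = 0
            for w in graph:
                if reachable_within(graph, s, w, k):   # "w is in R_k"
                    yes_guesses += 1
                    if v == w or v in graph.get(w, []):
                        in_next = True
            assert yes_guesses == size_prev            # counter must match |R_k|
            size_next += in_next
        size_prev = size_next
    # t is non-reachable from s iff t is not in R_{n-1}
    return not reachable_within(graph, s, t, n - 1)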
https://en.wikipedia.org/wiki/Immerman%E2%80%93Szelepcs%C3%A9nyi_theorem
The Irish logarithm was a system of number manipulation invented by Percy Ludgate for machine multiplication. The system used a combination of mechanical cams as lookup tables and mechanical addition to sum pseudo-logarithmic indices to produce partial products, which were then added to produce results.[1] The technique is similar to Zech logarithms (also known as Jacobi logarithms), but uses a system of indices original to Ludgate.[2] Ludgate's algorithm compresses the multiplication of two single decimal digits into two table lookups (to convert the digits into indices) and the addition of the two indices to create a new index, which is input to a second lookup table that generates the output product.[3] Because both lookup tables are one-dimensional, and the addition of linear movements is simple to implement mechanically, this allows a less complex mechanism than would be needed to implement a two-dimensional 10×10 multiplication lookup table. Ludgate stated that he deliberately chose the values in his tables to be as small as he could make them; given this, Ludgate's tables can be simply constructed from first principles, either via pen-and-paper methods or a systematic search using only a few tens of lines of program code.[4] They do not correspond to either Zech logarithms, Remak indexes or Korn indexes.[4] An implementation of Ludgate's Irish logarithm algorithm in the Python programming language is sketched below. Table 1 is taken from Ludgate's original paper; given the first table, the contents of Table 2 can be trivially derived from Table 1 and the definition of the algorithm. Note that since the last third of the second table is entirely zeros, this could be exploited to further simplify a mechanical implementation of the algorithm.
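The sketch below uses the single-digit index numbers as commonly reproduced from Ludgate's 1909 paper (Table 1); rather than copying Table 2, it derives it programmatically from Table 1 using the defining property that the entry at the sum of two indices is the product of the corresponding digits.

# Sketch of Ludgate's Irish logarithm: two lookups and one addition per digit pair.
TABLE1 = [50, 0, 1, 7, 2, 23, 8, 33, 3, 14]   # index number for digits 0..9

TABLE2 = [0] * (2 * max(TABLE1) + 1)          # derived, not copied: TABLE2[i(a)+i(b)] = a*b
for a in range(10):
    for b in range(10):
        TABLE2[TABLE1[a] + TABLE1[b]] = a * b

def multiply(a, b):
    """Multiply two decimal digits via two table lookups and one addition."""
    return TABLE2[TABLE1[a] + TABLE1[b]]

assert all(multiply(a, b) == a * b for a in range(10) for b in range(10))
# Multi-digit operands would be handled digit by digit, with the resulting
# partial products added together, as described above.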
https://en.wikipedia.org/wiki/Irish_logarithm
In mathematics, there exist magmas that are commutative but not associative. A simple example of such a magma may be derived from the children's game of rock, paper, scissors. Such magmas give rise to non-associative algebras. A magma which is both commutative and associative is a commutative semigroup. In the game of rock paper scissors, let M := {r, p, s}, standing for the "rock", "paper" and "scissors" gestures respectively, and consider the binary operation ⋅ : M × M → M derived from the rules of the game as follows: for all x, y in M, x ⋅ y = y ⋅ x is the winning gesture when x ≠ y, and x ⋅ x = x.[1] The resulting Cayley table is given by r ⋅ r = r, r ⋅ p = p ⋅ r = p, r ⋅ s = s ⋅ r = r, p ⋅ p = p, p ⋅ s = s ⋅ p = s, and s ⋅ s = s.[1] By definition, the magma (M, ⋅) is commutative, but it is also non-associative,[2] as shown by r ⋅ (p ⋅ s) = r ⋅ s = r, but (r ⋅ p) ⋅ s = p ⋅ s = s, i.e. r ⋅ (p ⋅ s) ≠ (r ⋅ p) ⋅ s. It is the simplest non-associative magma that is conservative, in the sense that the result of any magma operation is one of the two values given as arguments to the operation.[2] The arithmetic mean, and generalized means of numbers or of higher-dimensional quantities, such as Fréchet means, are often commutative but non-associative.[3] Commutative but non-associative magmas may be used to analyze genetic recombination.[4]
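A few lines of Python make the rock-paper-scissors magma concrete; the assertions check commutativity over all of M and reproduce the failure of associativity shown above.

# The rock-paper-scissors operation: x * y is the winning gesture, or the
# common gesture when x == y.
BEATS = {("r", "s"), ("s", "p"), ("p", "r")}   # rock beats scissors, scissors beats paper, paper beats rock

def op(x, y):
    return x if (x, y) in BEATS or x == y else y

M = ["r", "p", "s"]
assert all(op(x, y) == op(y, x) for x in M for y in M)                 # commutative
assert op("r", op("p", "s")) == "r" and op(op("r", "p"), "s") == "s"   # not associative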
https://en.wikipedia.org/wiki/Commutative_non-associative_magmas