Software analytics is the analytics specific to the domain of software systems, taking into account source code, static and dynamic characteristics (e.g., software metrics) as well as related processes of their development and evolution. It aims at describing, monitoring, predicting, and improving the efficiency and effectiveness of software engineering throughout the software lifecycle, in particular during software development and software maintenance. The data collection is typically done by mining software repositories, but can also be achieved by collecting user actions or production data.

Software analytics aims at supporting decisions and generating insights, i.e., findings, conclusions, and evaluations about software systems and their implementation, composition, behavior, quality, and evolution, as well as about the activities of various stakeholders of these processes. Methods, techniques, and tools of software analytics typically rely on gathering, measuring, analyzing, and visualizing information found in the manifold data sources stored in software development environments and ecosystems. Software systems are well suited for applying analytics because, on the one hand, mostly formalized and precise data is available and, on the other hand, software systems are extremely difficult to manage; in a nutshell: "software projects are highly measurable, but often unpredictable."[2]

Core data sources include source code, "check-ins, work items, bug reports and test executions [...] recorded in software repositories such as CVS, Subversion, GIT, and Bugzilla."[4] Telemetry data as well as execution traces or logs can also be taken into account. Automated analysis, massive data, and systematic reasoning support decision-making at almost all levels. In general, key technologies employed by software analytics include analytical technologies such as machine learning, data mining, statistics, pattern recognition, and information visualization, as well as large-scale data computing and processing. For example, software analytics tools allow users to map derived analysis results by means of software maps, which support interactively exploring system artifacts and correlated software metrics.
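As a minimal, hedged illustration of the repository-mining step described above, the following Python sketch counts how often each file was changed in a local Git repository, a simple evolution metric of the kind a software map might visualize. It assumes only that the git command-line tool is installed and that repo_path points at a local clone; the function name and output format are illustrative and not part of any particular software-analytics tool.

```python
# Sketch: per-file change frequency mined from a local Git repository.
# Assumes the `git` CLI is available; `repo_path` is an illustrative parameter.
import subprocess
from collections import Counter

def change_frequencies(repo_path="."):
    """Count how many commits touched each file (a simple evolution metric)."""
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--name-only", "--pretty=format:"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Each non-empty line of this log format is a file path touched by a commit.
    return Counter(line for line in log.splitlines() if line.strip())

if __name__ == "__main__":
    for path, count in change_frequencies().most_common(10):
        print(f"{count:5d}  {path}")
```

Files with unusually high change counts are often examined as candidate quality hotspots in analyses of this kind.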
There are also software analytics tools using analytical technologies on top of software quality models in agile software development companies, which support assessing software qualities (e.g., reliability) and deriving actions for their improvement.[5]

In 2009, the term "software analytics" was used in a paper by Dongmei Zhang, Shi Han, Yingnong Dang, Jian-Guang Lou, and Haidong Zhang, in part by the Software Analytics Group (SA) at Microsoft Research Asia (MSRA).[6] The term has since become well known in the software engineering research community after a series of tutorials and talks on software analytics were given by the Software Analytics Group, in collaboration with Tao Xie from North Carolina State University, at software engineering conferences, including a tutorial at the IEEE/ACM International Conference on Automated Software Engineering (ASE 2011),[7] a talk at the International Workshop on Machine Learning Technologies in Software Engineering (MALETS 2011),[8] a tutorial and a keynote talk given by Zhang at the IEEE-CS Conference on Software Engineering Education and Training,[9][10] a tutorial at the International Conference on Software Engineering - Software Engineering in Practice Track,[11] and a keynote talk given by Zhang at the Working Conference on Mining Software Repositories.[12]

In November 2010, Software Development Analytics (software analytics with a focus on software development) was proposed by Thomas Zimmermann and his colleagues at the Empirical Software Engineering Group (ESE) at Microsoft Research Redmond in their FoSER 2010 paper.[13] A goldfish bowl panel on software development analytics was organized by Zimmermann and Tim Menzies from West Virginia University at the International Conference on Software Engineering, Software Engineering in Practice Track.[14]
https://en.wikipedia.org/wiki/Runtime_intelligence
The Instructograph was a paper tape-based machine used for the study of Morse code. The paper tape mechanism consisted of two reels that passed a paper tape across a reading device, which actuated a set of contacts that changed state depending on the presence or absence of hole punches in the tape. The contacts could operate an audio oscillator for the study of International Morse Code (used by radio), a sounder for the study of American Morse Code (used by railroads), or a light bulb (Aldis lamp, used by the Navy for ship-to-ship signalling, or by heliograph).

The Instructograph was in production from about 1920 through 1983. The first US patent, No. 1,725,145, was granted to Otto Bernard Kirkpatrick, of Chicago, IL, on August 20, 1929. Most units were either wound by hand or plugged into a wall outlet; most mains-powered models had knobs to control speed and volume. The latest version of the Instructograph was the model 500, which included a built-in solid-state oscillator. This model was available for purchase as new through at least 1986.
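The tape-and-contacts mechanism described above can be summarized in a small, speculative simulation. The sketch below is not based on any Instructograph documentation: the textual tape encoding ('#' for a hole, '.' for blank) and the 60 ms unit duration are invented for illustration. It shows only how a row of punches translates into key-down and key-up intervals for an oscillator, sounder, or lamp.

```python
# Speculative sketch of a punched-tape keyer: '#' = hole (contacts closed, tone on),
# '.' = no hole (contacts open, silent). Encoding and timing are invented.
def keying_events(tape: str, unit_ms: int = 60):
    """Yield (state, duration_ms) pairs as the tape passes the reading contacts."""
    for symbol in tape:
        state = "key down (tone on)" if symbol == "#" else "key up (silent)"
        yield state, unit_ms

# "###.#.###" is dash, gap, dot, gap, dash -- the letter K in International Morse Code.
for state, ms in keying_events("###.#.###"):
    print(f"{ms:4d} ms  {state}")
```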
https://en.wikipedia.org/wiki/Instructograph
In cryptography, a certificate authority or certification authority (CA) is an entity that stores, signs, and issues digital certificates. A digital certificate certifies the ownership of a public key by the named subject of the certificate. This allows others (relying parties) to rely upon signatures or on assertions made about the private key that corresponds to the certified public key. A CA acts as a trusted third party, trusted both by the subject (owner) of the certificate and by the party relying upon the certificate.[1] The format of these certificates is specified by the X.509 or EMV standard.

One particularly common use for certificate authorities is to sign certificates used in HTTPS, the secure browsing protocol for the World Wide Web. Another common use is in issuing identity cards by national governments for use in electronically signing documents.[2]

Trusted certificates can be used to create secure connections to a server via the Internet. A certificate is essential in order to defeat a malicious party that happens to be on the route to a target server and acts as if it were the target; such a scenario is commonly referred to as a man-in-the-middle attack. The client uses the CA certificate to authenticate the CA signature on the server certificate, as part of the checks performed before launching a secure connection.[3] Usually, client software such as browsers includes a set of trusted CA certificates. This makes sense, as many users need to trust their client software. A malicious or compromised client can skip any security check and still fool its users into believing otherwise.

The clients of a CA are server administrators who request a certificate that their servers will present to users. Commercial CAs charge money to issue certificates, and their customers expect the CA's certificate to be included in the majority of web browsers, so that secure connections to the certified servers work out of the box. The number of web browsers, other devices, and applications that trust a particular certificate authority is referred to as ubiquity. Mozilla, which is a non-profit organization, distributes several commercial CA certificates with its products.[4] While Mozilla developed its own policy, the CA/Browser Forum developed similar guidelines for CA trust. A single CA certificate may be shared among multiple CAs or their resellers. A root CA certificate may be the base to issue multiple intermediate CA certificates with varying validation requirements.

In addition to commercial CAs, some non-profits issue publicly trusted digital certificates without charge, for example Let's Encrypt. Some large cloud computing and web hosting companies are also publicly trusted CAs and issue certificates to services hosted on their infrastructure, for example IBM Cloud, Amazon Web Services, Cloudflare, and Google Cloud Platform. Large organizations or government bodies may have their own PKIs (public key infrastructures), each containing their own CAs. Any site using self-signed certificates acts as its own CA.

Commercial banks that issue EMV payment cards are governed by the EMV Certificate Authority,[5] payment schemes that route payment transactions initiated at point-of-sale (POS) terminals to a card-issuing bank in order to transfer funds from the card holder's bank account to the payment recipient's bank account. Each payment card presents, along with its card data, the card issuer's certificate to the POS. The issuer certificate is signed by the EMV CA certificate.
The POS retrieves the public key of the EMV CA from its storage and validates the issuer certificate and the authenticity of the payment card before sending the payment request to the payment scheme.

Browsers and other clients typically allow users to add or remove CA certificates at will. While server certificates regularly last for a relatively short period, CA certificates last much longer,[6] so, for repeatedly visited servers, it is less error-prone to import and trust the issuing CA than to confirm a security exemption each time the server's certificate is renewed.

Less often, trustworthy certificates are used for encrypting or signing messages. CAs dispense end-user certificates too, which can be used with S/MIME. However, encryption entails the receiver's public key and, since authors and receivers of encrypted messages apparently know one another, the usefulness of a trusted third party remains confined to the signature verification of messages sent to public mailing lists.

Worldwide, the certificate authority business is fragmented, with national or regional providers dominating their home market. This is because many uses of digital certificates, such as for legally binding digital signatures, are linked to local law, regulations, and accreditation schemes for certificate authorities. However, the market for globally trusted TLS/SSL server certificates is largely held by a small number of multinational companies. This market has significant barriers to entry due to the technical requirements.[7] While not legally required, new providers may choose to undergo annual security audits (such as WebTrust[8] for certificate authorities in North America and ETSI in Europe[9]) to be included as a trusted root by a web browser or operating system. As of 24 August 2020, 147 root certificates, representing 52 organizations, are trusted in the Mozilla Firefox web browser,[10] 168 root certificates, representing 60 organizations, are trusted by macOS,[11] and 255 root certificates, representing 101 organizations, are trusted by Microsoft Windows.[12] As of Android 4.2 (Jelly Bean), Android contains over 100 CAs that are updated with each release.[13]

On November 18, 2014, a group of companies and nonprofit organizations, including the Electronic Frontier Foundation, Mozilla, Cisco, and Akamai, announced Let's Encrypt, a nonprofit certificate authority that provides free domain-validated X.509 certificates as well as software to enable installation and maintenance of certificates.[14] Let's Encrypt is operated by the newly formed Internet Security Research Group, a California nonprofit recognized as federally tax-exempt.[15]

According to Netcraft in May 2015, the industry standard for monitoring active TLS certificates, "Although the global [TLS] ecosystem is competitive, it is dominated by a handful of major CAs — three certificate authorities (Symantec, Comodo, GoDaddy) account for three-quarters of all issued [TLS] certificates on public-facing web servers. The top spot has been held by Symantec (or VeriSign before it was purchased by Symantec) ever since [our] survey began, with it currently accounting for just under a third of all certificates.
To illustrate the effect of differing methodologies, amongst the million busiest sites Symantec issued 44% of the valid, trusted certificates in use — significantly more than its overall market share."[16]

As of July 2024, the survey company W3Techs, which collects statistics on certificate authority usage among the Alexa top 10 million and the Tranco top 1 million websites, lists the five largest authorities by absolute usage share as below.[17]

The commercial CAs that issue the bulk of certificates for HTTPS servers typically use a technique called "domain validation" to authenticate the recipient of the certificate. The techniques used for domain validation vary between CAs, but in general domain validation techniques are meant to prove that the certificate applicant controls a given domain name, not any information about the applicant's identity.

Many certificate authorities also offer Extended Validation (EV) certificates as a more rigorous alternative to domain-validated certificates. Extended validation is intended to verify not only control of a domain name, but additional identity information to be included in the certificate. Some browsers display this additional identity information in a green box in the URL bar. One limitation of EV as a solution to the weaknesses of domain validation is that attackers could still obtain a domain-validated certificate for the victim domain and deploy it during an attack; if that occurred, the difference observable to the victim user would be the absence of a green bar with the company name. There is some question whether users would be likely to recognize this absence as indicative of an attack being in progress: a test using Internet Explorer 7 in 2009 showed that the absence of IE7's EV warnings was not noticed by users; however, Microsoft's newer browser, Edge Legacy, shows a significantly greater difference between EV and domain-validated certificates, with domain-validated certificates having a hollow, gray lock.

Domain validation suffers from certain structural security limitations. In particular, it is always vulnerable to attacks that allow an adversary to observe the domain validation probes that CAs send. These can include attacks against the DNS, TCP, or BGP protocols (which lack the cryptographic protections of TLS/SSL), or the compromise of routers. Such attacks are possible either on the network near a CA, or near the victim domain itself.

One of the most common domain validation techniques involves sending an email containing an authentication token or link to an email address that is likely to be administratively responsible for the domain. This could be the technical contact email address listed in the domain's WHOIS entry, or an administrative email like admin@, administrator@, webmaster@, hostmaster@, or postmaster@ the domain.[18][19] Some certificate authorities may accept confirmation using root@, info@, or support@ in the domain.[20] The theory behind domain validation is that only the legitimate owner of a domain would be able to read emails sent to these administrative addresses. Domain validation implementations have sometimes been a source of security vulnerabilities.
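As a hedged illustration of the email-based technique just described, the sketch below lists the conventional administrative addresses a CA might contact for a given domain. The five local parts are the ones named above; whether a particular CA accepts additional addresses (such as root@, info@, or support@) varies, and the function name is purely illustrative.

```python
# Sketch: conventional domain-validation addresses derived from a domain name.
# The local parts below are the commonly cited administrative mailboxes; which
# addresses a given CA actually accepts is CA-specific.
ADMIN_LOCAL_PARTS = ("admin", "administrator", "webmaster", "hostmaster", "postmaster")

def validation_addresses(domain: str) -> list[str]:
    """Return candidate mailboxes a CA might email with an authentication token."""
    return [f"{local}@{domain}" for local in ADMIN_LOCAL_PARTS]

print(validation_addresses("example.com"))
# ['admin@example.com', 'administrator@example.com', 'webmaster@example.com', ...]
```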
In one instance, security researchers showed that attackers could obtain certificates for webmail sites because a CA was willing to use an email address like ssladmin@domain.com for domain.com, but not all webmail systems had reserved the "ssladmin" username to prevent attackers from registering it.[21] Prior to 2011, there was no standard list of email addresses that could be used for domain validation, so it was not clear to email administrators which addresses needed to be reserved. The first version of the CA/Browser Forum Baseline Requirements, adopted November 2011, specified a list of such addresses. This allowed mail hosts to reserve those addresses for administrative use, though such precautions are still not universal. In January 2015, a Finnish man registered the username "hostmaster" at the Finnish version of Microsoft Live and was able to obtain a domain-validated certificate for live.fi, despite not being the owner of the domain name.[22]

A CA issues digital certificates that contain a public key and the identity of the owner. The matching private key is not made available publicly, but kept secret by the end user who generated the key pair. The certificate is also a confirmation or validation by the CA that the public key contained in the certificate belongs to the person, organization, server, or other entity noted in the certificate. A CA's obligation in such schemes is to verify an applicant's credentials, so that users and relying parties can trust the information in the issued certificate. CAs use a variety of standards and tests to do so. In essence, the certificate authority is responsible for saying "yes, this person is who they say they are, and we, the CA, certify that".[23] If the user trusts the CA and can verify the CA's signature, then they can also assume that a certain public key does indeed belong to whoever is identified in the certificate.[24]

Public-key cryptography can be used to encrypt data communicated between two parties. This can typically happen when a user logs on to any site that implements the HTTP Secure protocol. In this example let us suppose that the user logs on to their bank's homepage www.bank.example to do online banking. When the user opens the www.bank.example homepage, they receive a public key along with all the data that their web browser displays. The public key could be used to encrypt data from the client to the server, but the safe procedure is to use it in a protocol that determines a temporary shared symmetric encryption key; messages in such a key exchange protocol can be enciphered with the bank's public key in such a way that only the bank server has the private key to read them.[25] The rest of the communication then proceeds using the new (disposable) symmetric key, so when the user enters some information on the bank's page and submits the page (sends the information back to the bank), the data the user has entered will be encrypted by their web browser. Therefore, even if someone can access the (encrypted) data that was communicated from the user to www.bank.example, such an eavesdropper cannot read or decipher it.

This mechanism is only safe if the user can be sure that it is the bank that they see in their web browser. If the user types in www.bank.example, but their communication is hijacked and a fake website (that pretends to be the bank website) sends the page information back to the user's browser, the fake web page can send a fake public key to the user (for which the fake site owns a matching private key).
The user will fill the form with their personal data and will submit the page. The fake web page will then get access to the user's data. This is what the certificate authority mechanism is intended to prevent. A certificate authority (CA) is an organization that stores public keys and their owners, and every party in a communication trusts this organization (and knows its public key). When the user's web browser receives the public key from www.bank.example, it also receives a digital signature of the key (with some more information, in a so-called X.509 certificate). The browser already possesses the public key of the CA and consequently can verify the signature, trust the certificate and the public key in it: since www.bank.example uses a public key that the certification authority certifies, a fake www.bank.example can only use the same public key. Since the fake www.bank.example does not know the corresponding private key, it cannot create the signature needed to verify its authenticity.[26]

It is difficult to assure correctness of the match between data and entity when the data are presented to the CA (perhaps over an electronic network), and when the credentials of the person, company, or program asking for a certificate are likewise presented. This is why commercial CAs often use a combination of authentication techniques, including leveraging government bureaus, the payment infrastructure, third parties' databases and services, and custom heuristics. In some enterprise systems, local forms of authentication such as Kerberos can be used to obtain a certificate which can in turn be used by external relying parties. Notaries are required in some cases to personally know the party whose signature is being notarized; this is a higher standard than is reached by many CAs. According to the American Bar Association outline on Online Transaction Management, the primary points of US federal and state statutes enacted regarding digital signatures have been to "prevent conflicting and overly burdensome local regulation and to establish that electronic writings satisfy the traditional requirements associated with paper documents." Further, the US E-Sign statute and the suggested UETA code[27] help ensure that:

Despite the security measures undertaken to correctly verify the identities of people and companies, there is a risk of a single CA issuing a bogus certificate to an imposter. It is also possible to register individuals and companies with the same or very similar names, which may lead to confusion. To minimize this hazard, the certificate transparency initiative proposes auditing all certificates in a public unforgeable log, which could help in the prevention of phishing.[28][29]

In large-scale deployments, Alice may not be familiar with Bob's certificate authority (perhaps they each have a different CA server), so Bob's certificate may also include his CA's public key signed by a different CA2, which is presumably recognizable by Alice. This process typically leads to a hierarchy or mesh of CAs and CA certificates.

A certificate may be revoked before it expires, which signals that it is no longer valid; revocation is discussed further below.
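As a minimal sketch of the signature check described above, the following Python fragment verifies that one certificate was signed by the key in another, using the pyca/cryptography library. It assumes two PEM files on disk (the file names are illustrative) and an RSA signature with PKCS#1 v1.5 padding, the common case for web certificates; it checks only the signature, not validity periods, name constraints, revocation, or the rest of the chain building that a real client performs.

```python
# Sketch: verify that `server.pem` was signed by the key in `ca.pem`.
# File names are illustrative; assumes an RSA (PKCS#1 v1.5) signature.
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import padding

with open("server.pem", "rb") as f:
    leaf = x509.load_pem_x509_certificate(f.read())
with open("ca.pem", "rb") as f:
    issuer = x509.load_pem_x509_certificate(f.read())

# Raises InvalidSignature if the issuer's public key did not sign this certificate.
issuer.public_key().verify(
    leaf.signature,
    leaf.tbs_certificate_bytes,      # the signed portion of the certificate
    padding.PKCS1v15(),
    leaf.signature_hash_algorithm,
)
print("signature check passed:", leaf.subject.rfc4514_string())
```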
Without revocation, an attacker would be able to exploit such a compromised or misissued certificate until expiry.[30] Hence, revocation is an important part of a public key infrastructure.[31] Revocation is performed by the issuing CA, which produces a cryptographically authenticated statement of revocation.[32]

For distributing revocation information to clients, timeliness of the discovery of revocation (and hence the window for an attacker to exploit a compromised certificate) trades off against resource usage in querying revocation statuses and privacy concerns.[33] If revocation information is unavailable (either due to accident or an attack), clients must decide whether to fail hard and treat a certificate as if it is revoked (and so degrade availability) or to fail soft and treat it as unrevoked (and allow attackers to sidestep revocation).[34] Due to the cost of revocation checks and the availability impact of potentially unreliable remote services, web browsers limit the revocation checks they will perform, and will fail soft where they do.[35] Certificate revocation lists are too bandwidth-costly for routine use, and the Online Certificate Status Protocol presents connection latency and privacy issues. Other schemes have been proposed but have not yet been successfully deployed to enable fail-hard checking.[31]

The CA/Browser Forum publishes the Baseline Requirements,[41] a list of policies and technical requirements for CAs to follow. These are a requirement for inclusion in the certificate stores of Firefox[42] and Safari.[43] On April 14, 2025, the CA/Browser Forum passed a ballot to reduce SSL/TLS certificates to a 47-day maximum term by March 15, 2029.[44]

If the CA can be subverted, then the security of the entire system is lost, potentially subverting all the entities that trust the compromised CA. For example, suppose an attacker, Eve, manages to get a CA to issue to her a certificate that claims to represent Alice. That is, the certificate would publicly state that it represents Alice, and might include other information about Alice. Some of the information about Alice, such as her employer name, might be true, increasing the certificate's credibility. Eve, however, would have the all-important private key associated with the certificate. Eve could then use the certificate to send a digitally signed email to Bob, tricking Bob into believing that the email was from Alice. Bob might even respond with encrypted email, believing that it could only be read by Alice, when Eve is actually able to decrypt it using the private key.

A notable case of CA subversion like this occurred in 2001, when the certificate authority VeriSign issued two certificates to a person claiming to represent Microsoft. The certificates bore the name "Microsoft Corporation", so they could be used to spoof someone into believing that updates to Microsoft software came from Microsoft when they actually did not. The fraud was detected in early 2001. Microsoft and VeriSign took steps to limit the impact of the problem.[45][46] In 2008, Comodo reseller Certstar sold a certificate for mozilla.com to Eddy Nigg, who had no authority to represent Mozilla.[47] In 2011, fraudulent certificates were obtained from Comodo and DigiNotar,[48][49] allegedly by Iranian hackers.
There is evidence that the fraudulent DigiNotar certificates were used in a man-in-the-middle attack in Iran.[50]

In 2012, it became known that Trustwave issued a subordinate root certificate that was used for transparent traffic management (man-in-the-middle), which effectively permitted an enterprise to sniff SSL internal network traffic using the subordinate certificate.[51] In 2012, the Flame malware (also known as SkyWiper) contained modules that had an MD5 collision with a valid certificate issued by a Microsoft Terminal Server licensing certificate authority that used the broken MD5 hash algorithm. The authors were thus able to conduct a collision attack with the hash listed in the certificate.[52][53] In 2015, a Chinese certificate authority named MCS Holdings and affiliated with China's central domain registry issued unauthorized certificates for Google domains.[54][55] Google thus removed both MCS and the root certificate authority from Chrome and revoked the certificates.[56]

An attacker who steals a certificate authority's private keys is able to forge certificates as if they were the CA, without needing ongoing access to the CA's systems. Key theft is therefore one of the main risks certificate authorities defend against. Publicly trusted CAs almost always store their keys on a hardware security module (HSM), which allows them to sign certificates with a key, but generally prevents extraction of that key with both physical and software controls. CAs typically take the further precaution of keeping the key for their long-term root certificates in an HSM that is kept offline, except when it is needed to sign shorter-lived intermediate certificates. The intermediate certificates, stored in an online HSM, can do the day-to-day work of signing end-entity certificates and keeping revocation information up to date. CAs sometimes use a key ceremony when generating signing keys, in order to ensure that the keys are not tampered with or copied.

The critical weakness in the way that the current X.509 scheme is implemented is that any CA trusted by a particular party can then issue certificates for any domain it chooses. Such certificates will be accepted as valid by the trusting party whether they are legitimate and authorized or not.[57] This is a serious shortcoming given that the most commonly encountered technology employing X.509 and trusted third parties is the HTTPS protocol. As all major web browsers are distributed to their end users pre-configured with a list of trusted CAs that numbers in the dozens, any one of these pre-approved trusted CAs can issue a valid certificate for any domain whatsoever.[58] The industry response to this has been muted.[59] Given that the contents of a browser's pre-configured trusted CA list are determined independently by the party that distributes or causes the browser application to be installed, there is really nothing that the CAs themselves can do about it. This issue is the driving impetus behind the development of the DNS-based Authentication of Named Entities (DANE) protocol. If adopted in conjunction with Domain Name System Security Extensions (DNSSEC), DANE will greatly reduce, if not eliminate, the role of trusted third parties in a domain's PKI.
https://en.wikipedia.org/wiki/Certificate_authority
Crowdsourcing involves a large group of dispersed participants contributing or producing goods or services (including ideas, votes, micro-tasks, and finances) for payment or as volunteers. Contemporary crowdsourcing often involves digital platforms to attract and divide work between participants to achieve a cumulative result. Crowdsourcing is not limited to online activity, however, and there are various historical examples of crowdsourcing. The word "crowdsourcing" is a portmanteau of "crowd" and "outsourcing".[1][2][3] In contrast to outsourcing, crowdsourcing usually involves less specific and more public groups of participants.[4][5][6]

Advantages of using crowdsourcing include lowered costs, improved speed, improved quality, increased flexibility, and/or increased scalability of the work, as well as promoting diversity.[7][8] Crowdsourcing methods include competitions, virtual labor markets, open online collaboration, and data donation.[8][9][10][11] Some forms of crowdsourcing, such as "idea competitions" or "innovation contests", provide ways for organizations to learn beyond the "base of minds" provided by their employees (e.g. Lego Ideas).[12][13] Commercial platforms, such as Amazon Mechanical Turk, match microtasks submitted by requesters to workers who perform them. Crowdsourcing is also used by nonprofit organizations to develop common goods, such as Wikipedia.[14]

The term "crowdsourcing" was coined in 2006 by two editors at Wired, Jeff Howe and Mark Robinson, to describe how businesses were using the Internet to "outsource work to the crowd", which quickly led to the portmanteau "crowdsourcing".[15] The Oxford English Dictionary gives a first use: "OED's earliest evidence for crowdsourcing is from 2006, in the writing of J. Howe."[16] The online dictionary Merriam-Webster defines it as: "the practice of obtaining needed services, ideas, or content by soliciting contributions from a large group of people and especially from the online community rather than from traditional employees or suppliers."[17]

Daren C. Brabham defined crowdsourcing as an "online, distributed problem-solving and production model."[18] Kristen L. Guth and Brabham found that the performance of ideas offered in crowdsourcing platforms is affected not only by their quality, but also by the communication among users about the ideas and the presentation in the platform itself.[19]

Despite the multiplicity of definitions for crowdsourcing, one constant has been the broadcasting of problems to the public and an open call for contributions to help solve the problem. Members of the public submit solutions that are then owned by the entity that originally broadcast the problem. In some cases, the contributor of the solution is compensated monetarily with prizes or public recognition. In other cases, the only rewards may be praise or intellectual satisfaction. Crowdsourcing may produce solutions from amateurs or volunteers working in their spare time, from experts, or from small businesses.[15]

While the term "crowdsourcing" was popularized online to describe Internet-based activities,[18] some examples of projects can, in retrospect, be described as crowdsourcing. Crowdsourcing has often been used in the past as a competition to discover a solution.
The French government proposed several of these competitions, often rewarded with Montyon Prizes.[44] These included the Leblanc process, or the Alkali prize, where a reward was offered for a process to produce alkali from common salt, and the Fourneyron turbine, when the first commercial hydraulic turbine was developed.[45] In response to a challenge from the French government, Nicolas Appert won a prize for inventing a new way of food preservation that involved sealing food in air-tight jars.[46] The British government provided a similar reward to find an easy way to determine a ship's longitude in the Longitude Prize.

During the Great Depression, out-of-work clerks tabulated higher mathematical functions in the Mathematical Tables Project as an outreach project.[47] One of the largest crowdsourcing campaigns was a public design contest in 2010, hosted by the Indian government's finance ministry, to create a symbol for the Indian rupee. Thousands of people sent in entries before the government zeroed in on the final symbol, based on the Devanagari script using the letter Ra.[48]

A number of motivations exist for businesses to use crowdsourcing to accomplish their tasks. These include the ability to offload peak demand, access cheap labor and information, generate better results, access a wider array of talent than might be present in one organization, and undertake problems that would have been too difficult to solve internally.[49] Crowdsourcing allows businesses to submit problems on which contributors can work (on topics such as science, manufacturing, biotech, and medicine), optionally with monetary rewards for successful solutions. Although crowdsourcing complicated tasks can be difficult, simple work tasks can be crowdsourced cheaply and effectively.[50]

Crowdsourcing also has the potential to be a problem-solving mechanism for government and nonprofit use.[51] Urban and transit planning are prime areas for crowdsourcing. For example, from 2008 to 2009, a crowdsourcing project for transit planning in Salt Lake City was created to test the public participation process.[52] Another notable application of crowdsourcing for government problem-solving is Peer-to-Patent, which was an initiative to improve patent quality in the United States through gathering public input in a structured, productive manner.[53]

Researchers have used crowdsourcing systems such as Amazon Mechanical Turk or CloudResearch to aid their research projects by crowdsourcing some aspects of the research process, such as data collection, parsing, and evaluation, to the public. Notable examples include using the crowd to create speech and language databases,[54][55] to conduct user studies,[56] and to run behavioral science surveys and experiments.[57] Crowdsourcing systems provide researchers with the ability to gather large amounts of data and help researchers collect data from populations and demographics they may not have access to locally.[58]

Artists have also used crowdsourcing systems.
In a project called the Sheep Market, Aaron Koblin used Mechanical Turk to collect 10,000 drawings of sheep from contributors around the world.[59] Artist Sam Brown leveraged the crowd by asking visitors of his website explodingdog to send him sentences to use as inspirations for his paintings.[60] Art curator Andrea Grover argues that individuals tend to be more open in crowdsourced projects because they are not being physically judged or scrutinized.[61] As with other types of uses, artists use crowdsourcing systems to generate and collect data. The crowd also can be used to provide inspiration and to collect financial support for an artist's work.[62]

In navigation systems, crowdsourcing from 100 million drivers was used by INRIX to collect users' driving times to provide better GPS routing and real-time traffic updates.[63]

The use of crowdsourcing in medical and health research is increasing systematically. The process involves outsourcing tasks or gathering input from a large, diverse group of people, often facilitated through digital platforms, to contribute to medical research, diagnostics, data analysis, promotion, and various healthcare-related initiatives. This approach supplies a useful community-based method to improve medical services. From funding individual medical cases and innovative devices to supporting research, community health initiatives, and crisis responses, crowdsourcing has a versatile impact in addressing diverse healthcare challenges.[64]

In 2011, UNAIDS initiated the participatory online policy project to better engage young people in decision-making processes related to AIDS.[65] The project acquired data from 3,497 participants across seventy-nine countries through online and offline forums. The outcomes generally emphasized the importance of youth perspectives in shaping strategies to effectively address AIDS, which provided valuable insight for future community empowerment initiatives. Another approach is sourcing results of clinical algorithms from the collective input of participants.[66] Researchers from SPIE developed a crowdsourcing tool to train individuals, especially middle and high school students in South Korea, to diagnose malaria-infected red blood cells. Using a statistical framework, the platform combined expert diagnoses with those from minimally trained individuals, creating a gold-standard library. The objective was to swiftly teach people to achieve high diagnostic accuracy without any prior training.

The journal Cancer Medicine conducted a review of studies published between January 2005 and June 2016 on crowdsourcing in cancer research, using PubMed, CINAHL, Scopus, PsychINFO, and Embase.[67] All of the studies strongly advocate for continuous efforts to refine and expand crowdsourcing applications in academic scholarship. The analysis highlighted the importance of interdisciplinary collaborations and widespread dissemination of knowledge; the review underscored the need to fully harness crowdsourcing's potential to address challenges within cancer research.[67]

Crowdsourcing in astronomy was used in the early 19th century by astronomer Denison Olmsted. After being awakened late one November night by a meteor shower, Olmsted noticed a pattern in the shooting stars. Olmsted wrote a brief report of this meteor shower in the local newspaper.
"As the cause of 'Falling Stars' is not understood by meteorologists, it is desirable to collect all the facts attending this phenomenon, stated with as much precision as possible", Olmsted wrote to readers, in a report subsequently picked up and pooled to newspapers nationwide. Responses came pouring in from many states, along with scientists' observations sent to theAmerican Journal of Science and Arts.[68]These responses helped him to make a series of scientific breakthroughs including observing the fact that meteor showers are seen nationwide and fall from space under the influence of gravity. The responses also allowed him to approximate a velocity for the meteors.[69] A more recent version of crowdsourcing in astronomy is NASA's photo organizing project,[70]which asked internet users to browse photos taken from space and try to identify the location the picture is documenting.[71] Behavioral science In the field of behavioral science, crowdsourcing is often used to gather data and insights onhuman behavioranddecision making. Researchers may create online surveys or experiments that are completed by a large number of participants, allowing them to collect a diverse and potentially large amount of data.[57]Crowdsourcing can also be used to gather real-time data on behavior, such as through the use of mobile apps that track and record users' activities and decision making.[72]The use of crowdsourcing in behavioral science has the potential to greatly increase the scope and efficiency of research, and has been used in studies on topics such as psychology research,[73]political attitudes,[74]and social media use.[75] Energy system modelsrequire large and diversedatasets, increasingly so given the trend towards greater temporal and spatial resolution.[76]In response, there have been several initiatives to crowdsource this data. Launched in December 2009,OpenEIis acollaborativewebsiterun by the US government that providesopenenergy data.[77][78]While much of its information is from US government sources, the platform also seeks crowdsourced input from around the world.[79]ThesemanticwikianddatabaseEnipedia also publishes energy systems data using the concept of crowdsourced open information. Enipedia went live in March 2011.[80][81]: 184–188 Genealogicalresearch used crowdsourcing techniques long before personal computers were common. Beginning in 1942, members ofthe Church of Jesus Christ of Latter-day Saintsencouraged members to submit information about their ancestors. The submitted information was gathered together into a single collection. In 1969, to encourage more participation, the church started the three-generation program. In this program, church members were asked to prepare documented family group record forms for the first three generations. The program was later expanded to encourage members to research at least four generations and became known as the four-generation program.[82] Institutes that have records of interest to genealogical research have used crowds of volunteers to create catalogs and indices to records.[citation needed] Genetic genealogy research Genetic genealogyis a combination of traditional genealogy withgenetics. 
The rise of personal DNA testing after the turn of the century, by companies such as Gene by Gene, FTDNA, GeneTree, 23andMe, and Ancestry.com, has led to public and semi-public databases of DNA testing built using crowdsourcing techniques. Citizen science projects have included support, organization, and dissemination of personal DNA (genetic) testing. Similar to amateur astronomy, citizen scientists encouraged by volunteer organizations like the International Society of Genetic Genealogy[83] have provided valuable information and research to the professional scientific community.[84] The Genographic Project, which began in 2005, is a research project carried out by the National Geographic Society's scientific team to reveal patterns of human migration using crowdsourced DNA testing and reporting of results.[85]

Another early example of crowdsourcing occurred in the field of ornithology. On 25 December 1900, Frank Chapman, an early officer of the National Audubon Society, initiated a tradition dubbed the "Christmas Day Bird Census". The project called on birders from across North America to count and record the number of birds of each species they witnessed on Christmas Day. The project was successful, and the records from 27 different contributors were compiled into one bird census, which tallied around 90 species of birds.[86] This large-scale collection of data constituted an early form of citizen science, the premise upon which crowdsourcing is based. In the 2012 census, more than 70,000 individuals participated across 2,369 bird count circles.[87] Christmas 2014 marked the National Audubon Society's 115th annual Christmas Bird Count.

The European-Mediterranean Seismological Centre (EMSC) has developed a seismic detection system by monitoring traffic peaks on its website and analyzing keywords used on Twitter.[88]

Crowdsourcing is increasingly used in professional journalism. Journalists are able to organize crowdsourced information by fact-checking it, and then using what they have gathered in their articles as they see fit. A daily newspaper in Sweden successfully used crowdsourcing to investigate home loan interest rates in the country in 2013–2014, which resulted in over 50,000 submissions.[89] A daily newspaper in Finland crowdsourced an investigation into stock short-selling in 2011–2012, and the crowdsourced information led to revelations of a tax evasion system at a Finnish bank. The bank executive was fired and policy changes followed.[90] TalkingPointsMemo in the United States asked its readers to examine 3,000 emails concerning the firing of federal prosecutors in 2008. The British newspaper The Guardian crowdsourced the examination of hundreds of thousands of documents in 2009.[91]

Data donation is a crowdsourcing approach to gathering digital data. It is used by researchers and organizations to gain access to data from online platforms, websites, search engines, and apps and devices. Data donation projects usually rely on participants volunteering their authentic digital profile information. Examples include:

Crowdsourcing is used in large-scale media, such as the community notes system of the X platform.
Crowdsourcing on such platforms is thought to be effective in combating partisan misinformation on social media when certain conditions are met.[99][100] Success may depend on trust in fact-checking sources, the ability to present information that challenges previous beliefs without causing excessive dissonance, and having a sufficiently large and diverse crowd of participants. Effective crowdsourcing interventions must navigate politically polarized environments where trusted sources may be less inclined to provide dissonant opinions. By leveraging network analysis to connect users with neighboring communities outside their ideological echo chambers, crowdsourcing can provide an additional layer of content moderation.

Crowdsourcing public policy and the production of public services is also referred to as citizen sourcing. While some scholars argue that crowdsourcing for this purpose is a policy tool[101] or a definite means of co-production,[102] others question that and argue that crowdsourcing should be considered just a technological enabler that simply increases the speed and ease of participation.[103] Crowdsourcing can also play a role in democratization.[104]

The first conference focusing on crowdsourcing for politics and policy took place at Oxford University, under the auspices of the Oxford Internet Institute, in 2014. Research has emerged since 2012[105] focusing on the use of crowdsourcing for policy purposes.[106][107] This includes experimentally investigating the use of virtual labor markets for policy assessment[108] and assessing the potential for citizen involvement in process innovation for public administration.[109]

Governments across the world are increasingly using crowdsourcing for knowledge discovery and civic engagement. Iceland crowdsourced its constitution reform process in 2011, and Finland has crowdsourced several law reform processes to address its off-road traffic laws. The Finnish government allowed citizens to go on an online forum to discuss problems and possible resolutions regarding some off-road traffic laws. The crowdsourced information and resolutions would then be passed on to legislators to refer to when making decisions, allowing citizens to contribute to public policy in a more direct manner.[110][111] Palo Alto crowdsources feedback for its Comprehensive City Plan update in a process started in 2015.[112] The House of Representatives in Brazil has used crowdsourcing in policy reforms.[113]

NASA used crowdsourcing to analyze large sets of images. As part of the Open Government Initiative of the Obama Administration, the General Services Administration collected and amalgamated suggestions for improving federal websites.[113] For part of the Obama and Trump Administrations, the We the People system collected signatures on petitions, which were entitled to an official response from the White House once a certain number had been reached. Several U.S. federal agencies have run inducement prize contests, including NASA and the Environmental Protection Agency.[114][113]

Crowdsourcing has been used extensively for gathering language-related data. For dictionary work, crowdsourcing was applied over a hundred years ago by the Oxford English Dictionary editors, using paper and postage. It has also been used for collecting examples of proverbs on a specific topic (e.g. religious pluralism) for a printed journal.[115] Crowdsourcing language-related data online has proven very effective, and many dictionary compilation projects have used crowdsourcing.
It is used particularly for specialist topics and languages that are not well documented, such as the Oromo language.[116] Software programs have been developed for crowdsourced dictionaries, such as WeSay.[117] A slightly different form of crowdsourcing for language data was the online creation of scientific and mathematical terminology for American Sign Language.[118]

In linguistics, crowdsourcing strategies have been applied to estimate word knowledge, vocabulary size, and word origin.[119] Implicit crowdsourcing on social media has also approximated sociolinguistic data efficiently. Reddit conversations in various location-based subreddits were analyzed for the presence of grammatical forms unique to a regional dialect. These were then used to map the extent of the speaker population. The results could roughly approximate large-scale surveys on the subject without engaging in field interviews.[120] Mining publicly available social media conversations can be used as a form of implicit crowdsourcing to approximate the geographic extent of speaker dialects.[120] Proverb collection is also being done via crowdsourcing on the Web, most notably for the Pashto language of Afghanistan and Pakistan.[121][122][123] Crowdsourcing has been extensively used to collect high-quality gold standards for creating automatic systems in natural language processing (e.g. named entity recognition, entity linking).[124]

Organizations often leverage crowdsourcing to gather ideas for new products as well as for the refinement of established products.[41] Lego allows users to work on new product designs while conducting requirements testing. Any user can provide a design for a product, and other users can vote on the product. Once a submitted product has received 10,000 votes, it is formally reviewed in stages and goes into production if no impediments, such as legal flaws, are identified. The creator receives royalties from the net income.[125] Labelling new products as "customer-ideated" through crowdsourcing initiatives, as opposed to not specifying the source of design, leads to a substantial increase in the actual market performance of the products. Merely highlighting the source of design to customers, particularly attributing the product to crowdsourcing efforts from user communities, can lead to a significant boost in product sales. Consumers perceive "customer-ideated" products as more effective in addressing their needs, leading to a quality inference. The design mode associated with crowdsourced ideas is considered superior in generating promising new products, contributing to the observed increase in market performance.[126]

Crowdsourcing is widely used by businesses to source feedback and suggestions on how to improve their products and services.[41] Homeowners can use Airbnb to list their accommodation or unused rooms. Owners set their own nightly, weekly, and monthly rates and accommodations. The business, in turn, charges guests and hosts a fee. Guests usually end up spending between $9 and $15.[127] They have to pay a booking fee every time they book a room. The landlord, in turn, pays a service fee for the amount due.
The company has 1,500 properties in 34,000 cities in more than 190 countries.

Crowdsourcing is frequently used in market research as a way to gather insights and opinions from a large number of consumers.[128] Companies may create online surveys or focus groups that are open to the general public, allowing them to gather a diverse range of perspectives on their products or services. This can be especially useful for companies seeking to understand the needs and preferences of a particular market segment or to gather feedback on the effectiveness of their marketing efforts. The use of crowdsourcing in market research allows companies to quickly and efficiently gather a large amount of data and insights that can inform their business decisions.[129]

Internet and digital technologies have massively expanded the opportunities for crowdsourcing. However, the effect of user communication and platform presentation can have a major bearing on the success of an online crowdsourcing project.[19] The crowdsourced problem can range from huge tasks (such as finding alien life or mapping earthquake zones) to very small ones (such as identifying images). Some examples of successful crowdsourcing themes are problems that bug people, things that make people feel good about themselves, projects that tap into niche knowledge of proud experts, and subjects that people find sympathetic.[145]

Crowdsourcing can take either an explicit or an implicit route. In his 2013 book, Crowdsourcing, Daren C. Brabham puts forth a problem-based typology of crowdsourcing approaches.[147] Ivo Blohm identifies four types of crowdsourcing platforms: Microtasking, Information Pooling, Broadcast Search, and Open Collaboration. They differ in the diversity and aggregation of the contributions that are created: the diversity of information collected can be either homogeneous or heterogeneous, and the aggregation of information can be either selective or integrative.[148] Some common categories of crowdsourcing that have been used effectively in the commercial world include crowdvoting, crowdsolving, crowdfunding, microwork, creative crowdsourcing, crowdsourced workforce management, and inducement prize contests.[149] In their conceptual review of crowdsourcing, Linus Dahlander, Lars Bo Jeppesen, and Henning Piezunka distinguish four steps in the crowdsourcing process: Define, Broadcast, Attract, and Select.[150]

Crowdvoting occurs when a website gathers a large group's opinions and judgments on a certain topic. Some crowdsourcing tools and platforms allow participants to rank each other's contributions, e.g. in answer to the question "What is one thing we can do to make Acme a great company?" One common method for ranking is "like" counting, where the contribution with the most "like" votes ranks first. This method is simple and easy to understand, but it privileges early contributions, which have more time to accumulate votes. In recent years, several crowdsourcing companies have begun to use pairwise comparisons backed by ranking algorithms. Ranking algorithms do not penalize late contributions. They also produce results more quickly, and have proven to be at least 10 times faster than manual stack ranking.[151] One drawback, however, is that ranking algorithms are more difficult to understand than vote counting.
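The text does not say which ranking algorithms these companies use, but an Elo-style update over pairwise votes is one simple, illustrative possibility. The sketch below also shows why such a method does not penalize late contributions: a new entry starts at the same baseline rating and moves only according to the comparisons it wins or loses, not according to how long it has been collecting votes.

```python
# Illustrative Elo-style ranking of contributions from pairwise votes.
# (One of many possible ranking algorithms; not any specific platform's method.)
from collections import defaultdict

def rank_by_pairwise_votes(votes, k=32.0, start=1000.0):
    """votes: iterable of (winner_id, loser_id) pairs from pairwise comparisons."""
    rating = defaultdict(lambda: start)
    for winner, loser in votes:
        # Expected probability that the current winner would win this comparison.
        expected = 1.0 / (1.0 + 10 ** ((rating[loser] - rating[winner]) / 400.0))
        rating[winner] += k * (1.0 - expected)
        rating[loser] -= k * (1.0 - expected)
    return sorted(rating.items(), key=lambda item: item[1], reverse=True)

votes = [("idea-A", "idea-B"), ("idea-A", "idea-C"), ("idea-B", "idea-C")]
print(rank_by_pairwise_votes(votes))  # idea-A ranks first regardless of submission time
```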
The Iowa Electronic Market is a prediction market that gathers crowds' views on politics and tries to ensure accuracy by having participants pay money to buy and sell contracts based on political outcomes.[152] Some of the most famous examples have made use of social media channels: Domino's Pizza, Coca-Cola, Heineken, and Sam Adams have crowdsourced a new pizza, bottle design, beer, and song, respectively.[153] A website called Threadless selected the T-shirts it sold by having users provide designs and vote on the ones they liked, which were then printed and made available for purchase.[18]

The California Report Card (CRC), a program jointly launched in January 2014 by the Center for Information Technology Research in the Interest of Society[154] and Lt. Governor Gavin Newsom, is an example of modern-day crowdvoting. Participants access the CRC online and vote on six timely issues. Through principal component analysis, the users are then placed into an online "café" in which they can present their own political opinions and grade the suggestions of other participants. This system aims to effectively involve the greater public in relevant political discussions and highlight the specific topics with which people are most concerned. Crowdvoting's value in the movie industry was shown when in 2009 a crowd accurately predicted the success or failure of a movie based on its trailer,[155][156] a feat that was replicated in 2013 by Google.[157] On Reddit, users collectively rate web content, discussions, and comments, as well as questions posed to persons of interest in "AMA" and AskScience online interviews.

In 2017, Project Fanchise purchased a team in the Indoor Football League and created the Salt Lake Screaming Eagles, a fan-run team. Using a mobile app, the fans voted on the day-to-day operations of the team, the mascot name, the signing of players, and even offensive play calling during games.[158]

Crowdfunding is the process of funding projects by a multitude of people contributing a small amount to attain a certain monetary goal, typically via the Internet.[159] Crowdfunding has been used for both commercial and charitable purposes.[160] The crowdfunding model that has been around the longest is rewards-based crowdfunding. In this model, people can pre-purchase products, buy experiences, or simply donate. While this funding may in some cases go towards helping a business, funders are not allowed to invest and become shareholders via rewards-based crowdfunding.[161]

Individuals, businesses, and entrepreneurs can showcase their businesses and projects by creating a profile, which typically includes a short video introducing their project, a list of rewards per donation, and illustrations through images. Funders make monetary contributions for numerous reasons:

The dilemma for equity crowdfunding in the US as of 2012 was that the regulations of the Securities and Exchange Commission were still being refined; the SEC had until 1 January 2013 to tweak the fundraising methods. The regulators were overwhelmed trying to regulate Dodd-Frank and all the other rules and regulations involving public companies and the way they traded. Advocates of regulation claimed that crowdfunding would open up the flood gates for fraud, called it the "wild west" of fundraising, and compared it to the 1980s days of penny stock "cold-call cowboys". The process allowed for up to $1 million to be raised without some of the regulations being involved.
Companies under the then-current proposal would have exemptions available and be able to raise capital from a larger pool of persons, which can include lower thresholds for investor criteria, whereas the old rules required that the person be an "accredited" investor. These people are often recruited from social networks, where the funds can be acquired from an equity purchase, loan, donation, or ordering. The amounts collected have become quite high, with requests of over a million dollars for software such as Trampoline Systems, which used it to finance the commercialization of their new software.[citation needed] Web-based idea competitions or inducement prize contests often consist of generic ideas, cash prizes, and an Internet-based platform to facilitate easy idea generation and discussion. An example of these competitions is IBM's 2006 "Innovation Jam", which was attended by over 140,000 international participants and yielded around 46,000 ideas.[163][164] Another example is the Netflix Prize in 2009. People were asked to come up with a recommendation algorithm more accurate than Netflix's existing algorithm. It had a grand prize of US$1,000,000, which was given to a team that designed an algorithm beating Netflix's own algorithm for predicting ratings by 10.06%.[citation needed] Another example of competition-based crowdsourcing is the 2009 DARPA balloon experiment, where DARPA placed 10 balloon markers across the United States and challenged teams to compete to be the first to report the location of all the balloons. A collaboration of efforts was required to complete the challenge quickly, and in addition to the competitive motivation of the contest as a whole, the winning team (MIT, in less than nine hours) established its own "collaborapetitive" environment to generate participation in their team.[165] A similar challenge was the Tag Challenge, funded by the US State Department, which required locating and photographing individuals in five cities in the US and Europe within 12 hours based only on a single photograph. The winning team managed to locate three suspects by mobilizing volunteers worldwide using an incentive scheme similar to the one used in the balloon challenge.[166] Using open innovation platforms is an effective way to crowdsource people's thoughts and ideas for research and development. The company InnoCentive is a crowdsourcing platform for corporate research and development where difficult scientific problems are posted for crowds of solvers to discover the answer and win a cash prize that ranges from $10,000 to $100,000 per challenge.[18] InnoCentive, of Waltham, Massachusetts, and London, England, provides access to millions of scientific and technical experts from around the world. The company claims a success rate of 50% in providing successful solutions to previously unsolved scientific and technical problems.
The X Prize Foundation creates and runs incentive competitions offering between $1 million and $30 million for solving challenges. Local Motors is another example of crowdsourcing; it is a community of 20,000 automotive engineers, designers, and enthusiasts that compete to build off-road rally trucks.[167] Implicit crowdsourcing is less obvious because users do not necessarily know they are contributing, yet it can still be very effective in completing certain tasks.[citation needed] Rather than users actively participating in solving a problem or providing information, implicit crowdsourcing involves users doing another task entirely, where a third party gains information for another topic based on the users' actions.[18] A good example of implicit crowdsourcing is the ESP game, where users find words to describe Google images, which are then used as metadata for the images. Another popular use of implicit crowdsourcing is through reCAPTCHA, which asks people to solve CAPTCHAs to prove they are human, and then provides CAPTCHAs from old books that cannot be deciphered by computers, to digitize them for the web. Like many tasks solved using the Mechanical Turk, CAPTCHAs are simple for humans, but often very difficult for computers.[146] Piggyback crowdsourcing can be seen most frequently on websites such as Google that data-mine a user's search history and websites to discover keywords for ads, spelling corrections, and synonyms. In this way, users are unintentionally helping to modify existing systems, such as Google Ads.[56] The crowd is an umbrella term for the people who contribute to crowdsourcing efforts. Though it is sometimes difficult to gather data about the demographics of the crowd as a whole, several studies have examined various specific online platforms. Amazon Mechanical Turk has received a great deal of attention in particular. A study in 2008 by Ipeirotis found that users at that time were primarily American, young, female, and well-educated, with 40% earning more than $40,000 per year. In November 2009, Ross found a very different Mechanical Turk population, 36% of which was Indian. Two-thirds of Indian workers were male, and 66% had at least a bachelor's degree. Two-thirds had annual incomes less than $10,000, with 27% sometimes or always depending on income from Mechanical Turk to make ends meet.[186] More recent studies have found that U.S. Mechanical Turk workers are approximately 58% female, and nearly 67% of workers are in their 20s and 30s.[57][187][188][189] Close to 80% are White, and 9% are Black. MTurk workers are less likely to be married or have children as compared to the general population. In the US population over 18, 45% are unmarried, while the proportion of unmarried workers on MTurk is around 57%. Additionally, about 55% of MTurk workers do not have any children, which is significantly higher than in the general population. Approximately 68% of U.S. workers are employed, compared to 60% in the general population. MTurk workers in the U.S. are also more likely to have a four-year college degree (35%) compared to the general population (27%). Politics within the U.S. sample of MTurk skew liberal, with 46% Democrats, 28% Republicans, and 26% "other". MTurk workers are also less religious than the U.S. population, with 41% religious, 20% spiritual, 21% agnostic, and 16% atheist.
The demographics of Microworkers.com differ from those of Mechanical Turk in that the US and India together account for only 25% of workers; 197 countries are represented among users, with Indonesia (18%) and Bangladesh (17%) contributing the largest share. However, 28% of employers are from the US.[190] Another study of the demographics of the crowd at iStockphoto found a crowd that was largely white, middle- to upper-class, and highly educated, worked in a so-called "white-collar job", and had a high-speed Internet connection at home.[191] In a 30-day crowdsourcing diary study in Europe, the participants were predominantly highly educated women.[144] Studies have also found that crowds are not simply collections of amateurs or hobbyists. Rather, crowds are often professionally trained in a discipline relevant to a given crowdsourcing task and sometimes hold advanced degrees and many years of experience in the profession.[191][192][193][194] Claiming that crowds are amateurs, rather than professionals, is both factually untrue and may lead to marginalization of crowd labor rights.[195] Gregory Saxton et al. studied the role of community users, among other elements, during their content analysis of 103 crowdsourcing organizations. They developed a taxonomy of nine crowdsourcing models (intermediary model, citizen media production, collaborative software development, digital goods sales, product design, peer-to-peer social financing, consumer report model, knowledge base building model, and collaborative science project model) in which to categorize the roles of community users, such as researcher, engineer, programmer, journalist, graphic designer, etc., and the products and services developed.[196] Many researchers suggest that both intrinsic and extrinsic motivations cause people to contribute to crowdsourced tasks, and that these factors influence different types of contributors.[111][191][192][194][197][198][199][200][201] For example, people employed in a full-time position rate human capital advancement as less important than part-time workers do, while women rate social contact as more important than men do.[198] Intrinsic motivations are broken down into two categories: enjoyment-based and community-based motivations. Enjoyment-based motivations refer to motivations related to the fun and enjoyment contributors experience through their participation. These motivations include: skill variety, task identity, task autonomy, direct feedback from the job, and taking the job as a pastime.[citation needed] Community-based motivations refer to motivations related to community participation, and include community identification and social contact. In crowdsourced journalism, the motivation factors are intrinsic: the crowd is driven by the possibility to make a social impact, contribute to social change, and help their peers.[197] Extrinsic motivations are broken down into three categories: immediate payoffs, delayed payoffs, and social motivations. Immediate payoffs, through monetary payment, are the immediately received compensations given to those who complete tasks. Delayed payoffs are benefits that can be used to generate future advantages, such as training skills and being noticed by potential employers. Social motivations are the rewards of behaving pro-socially,[202] such as the altruistic motivations of online volunteers.
Chandler and Kapelner found that US users of the Amazon Mechanical Turk were more likely to complete a task when told they were going to help researchers identify tumor cells than when they were not told the purpose of their task. However, among those who completed the task, quality of output did not depend on the framing.[203] Motivation in crowdsourcing is often a mix of intrinsic and extrinsic factors.[204] In a crowdsourced law-making project, the crowd was motivated by both intrinsic and extrinsic factors. Intrinsic motivations included fulfilling civic duty, affecting the law for sociotropic reasons, and deliberating with and learning from peers. Extrinsic motivations included changing the law for financial gain or other benefits. Participation in crowdsourced policy-making was an act of grassroots advocacy, whether to pursue one's own interest or more altruistic goals, such as protecting nature.[111] Participants in online research studies report their motivation as both intrinsic enjoyment and monetary gain.[205][206][188] Another form of social motivation is prestige or status. The International Children's Digital Library recruited volunteers to translate and review books. Because all translators receive public acknowledgment for their contributions, Kaufman and Schulz cite this as a reputation-based strategy to motivate individuals who want to be associated with institutions that have prestige. The Mechanical Turk uses reputation as a motivator in a different sense, as a form of quality control. Crowdworkers who frequently complete tasks in ways judged to be inadequate can be denied access to future tasks, whereas workers who pay close attention may be rewarded by gaining access to higher-paying tasks or being on an "Approved List" of workers. This system may incentivize higher-quality work.[207] However, this system only works when requesters reject bad work, which many do not.[208] Despite the potential global reach of IT applications online, recent research illustrates that differences in location[which?] affect participation outcomes in IT-mediated crowds.[209] While there is a lot of anecdotal evidence illustrating the potential of crowdsourcing and the benefits that organizations have derived from it, there is also scientific evidence that crowdsourcing initiatives often fail.[210] At least six major topics cover the limitations and controversies about crowdsourcing: Crowdsourcing initiatives often fail to attract sufficient or beneficial contributions. The vast majority of crowdsourcing initiatives hardly attract contributions; an analysis of thousands of organizations' crowdsourcing initiatives illustrates that only the 90th percentile of initiatives attracts more than one contribution a month.[201] While crowdsourcing initiatives may be effective in isolation, when faced with competition they may fail to attract sufficient contributions. Nagaraj and Piezunka (2024) illustrate that OpenStreetMap struggled to attract contributions once Google Maps entered a country. Crowdsourcing allows anyone to participate, allowing for many unqualified participants and resulting in large quantities of unusable contributions.[211] Companies, or additional crowdworkers, then have to sort through the low-quality contributions. The task of sorting through crowdworkers' contributions, along with the necessary job of managing the crowd, requires companies to hire actual employees, thereby increasing management overhead.[212] For example, susceptibility to faulty results can be caused by targeted, malicious work efforts.
Since crowdworkers completing microtasks are paid per task, a financial incentive often causes workers to complete tasks quickly rather than well.[57] Verifying responses is time-consuming, so employers often depend on having multiple workers complete the same task to correct errors. However, having each task completed multiple times increases time and monetary costs.[213] Some companies, like CloudResearch, control data quality by repeatedly vetting crowdworkers to ensure they are paying attention and providing high-quality work.[208] Crowdsourcing quality is also impacted by task design. Lukyanenko et al.[214] argue that the prevailing practice of modeling crowdsourcing data collection tasks in terms of fixed classes (options) unnecessarily restricts quality. Results demonstrate that information accuracy depends on the classes used to model domains, with participants providing more accurate information when classifying phenomena at a more general level (which is typically less useful to sponsor organizations, and hence less common).[clarification needed] Further, greater overall accuracy is expected when participants can provide free-form data, compared to tasks in which they select from constrained choices. In behavioral science research, it is often recommended to include open-ended responses, in addition to other forms of attention checks, to assess data quality.[215][216] Just as limiting, oftentimes there are not enough skills or expertise in the crowd to successfully accomplish the desired task. While this scenario does not affect "simple" tasks such as image labeling, it is particularly problematic for more complex tasks, such as engineering design or product validation. A comparison between the evaluation of business models by experts and by an anonymous online crowd showed that an anonymous online crowd cannot evaluate business models to the same level as experts.[217] In these cases, it may be difficult or even impossible to find qualified people in the crowd, as their responses represent only a small fraction of the workers compared to consistent but incorrect crowd members.[218] However, if the task is "intermediate" in its difficulty, estimating crowdworkers' skills and intentions and leveraging them for inferring true responses works well,[219] albeit with an additional computation cost.[citation needed] Crowdworkers are a nonrandom sample of the population. Many researchers use crowdsourcing to quickly and cheaply conduct studies with larger sample sizes than would otherwise be achievable. However, due to limited access to the Internet, participation in less developed countries is relatively low. Participation in highly developed countries is similarly low, largely because the low amount of pay is not a strong motivation for most users in these countries. These factors lead to a bias in the population pool towards users in moderately developed countries, as ranked by the human development index.[220] Participants in these countries sometimes masquerade as U.S. participants to gain access to certain tasks. This led to the "bot scare" on Amazon Mechanical Turk in 2018, when researchers thought bots were completing research surveys due to the lower quality of responses originating from medium-developed countries.[216][221] The likelihood that a crowdsourced project will fail due to lack of monetary motivation or too few participants increases over the course of the project. Tasks that are not completed quickly may be forgotten, buried by filters and search procedures.
This results in a long-tail power-law distribution of completion times.[222] Additionally, low-paying research studies online have higher rates of attrition, with participants not completing the study once started.[58] Even when tasks are completed, crowdsourcing does not always produce quality results. When Facebook began its localization program in 2008, it encountered some criticism for the low quality of its crowdsourced translations.[223] One of the problems of crowdsourcing products is the lack of interaction between the crowd and the client. Usually little information is known about the final product, and workers rarely interact with the final client in the process. This can decrease the quality of the product, as client interaction is considered to be a vital part of the design process.[224] An additional cause of the decrease in product quality that can result from crowdsourcing is the lack of collaboration tools. In a typical workplace, coworkers are organized in such a way that they can work together and build upon each other's knowledge and ideas. Furthermore, the company often provides employees with the necessary information, procedures, and tools to fulfill their responsibilities. However, in crowdsourcing, crowdworkers are left to depend on their own knowledge and means to complete tasks.[212] A crowdsourced project is usually expected to be unbiased by incorporating a large population of participants with diverse backgrounds. However, most crowdsourcing work is done by people who are paid or directly benefit from the outcome (e.g., most open source projects working on Linux). In many other cases, the end product is the outcome of a single person's endeavor, with that person creating the majority of the product while the crowd only participates in minor details.[225] To turn an idea into a reality, the first component needed is capital. Depending on the scope and complexity of the crowdsourced project, the amount of necessary capital can range from a few thousand dollars to hundreds of thousands, if not more. The capital-raising process can take from days to months depending on different variables, including the entrepreneur's network and the amount of initial self-generated capital.[citation needed] The crowdsourcing process allows entrepreneurs to access a wide range of investors who can take different stakes in the project.[226] In effect, crowdsourcing simplifies the capital-raising process and allows entrepreneurs to spend more time on the project itself and on reaching milestones rather than dedicating time to getting it started. Overall, the simplified access to capital can save the time needed to start projects and potentially increase the efficiency of projects.[citation needed] Others argue that easier access to capital through a large number of smaller investors can hurt the project and its creators. With a simplified capital-raising process involving more investors with smaller stakes, investors are more risk-seeking because they can take on an investment size with which they are comfortable.[226] This leads to entrepreneurs losing possible experience in convincing investors who are wary of potential risks in investing, because they do not depend on one single investor for the survival of their project. Instead of entrepreneurs being forced to assess risks and convince large institutional investors why their project can be successful, wary investors can be replaced by others who are willing to take on the risk.
Some translation companies and translation tool consumers purport to use crowdsourcing as a means of drastically cutting costs, instead of hiring professional translators. This situation has been systematically denounced by IAPTI and other translator organizations.[227] The raw number of ideas that get funded, and the quality of those ideas, is a large controversy over the issue of crowdsourcing. Proponents argue that crowdsourcing is beneficial because it allows the formation of startups with niche ideas that would not survive venture capitalist or angel funding, which are oftentimes the primary investors in startups. Many ideas are scrapped in their infancy due to insufficient support and lack of capital, but crowdsourcing allows these ideas to be started if an entrepreneur can find a community to take interest in the project.[228] Crowdsourcing allows those who would benefit from the project to fund and become a part of it, which is one way for small niche ideas to get started.[229] However, when the number of projects grows, the number of failures also increases. Crowdsourcing assists the development of niche and high-risk projects due to a perceived need from a select few who seek the product. With high risk and small target markets, the pool of crowdsourced projects faces a greater possible loss of capital, lower returns, and lower levels of success.[230] Because crowdworkers are considered independent contractors rather than employees, they are not guaranteed minimum wage. In practice, workers using Amazon Mechanical Turk generally earn less than minimum wage. In 2009, it was reported that United States Turk users earned an average of $2.30 per hour for tasks, while users in India earned an average of $1.58 per hour, which is below minimum wage in the United States (but not in India).[186][231] In 2018, a survey of 2,676 Amazon Mechanical Turk workers doing 3.8 million tasks found that the median hourly wage was approximately $2 per hour, and only 4% of workers earned more than the federal minimum wage of $7.25 per hour.[232] Some researchers who have considered using Mechanical Turk to get participants for research studies have argued that the wage conditions might be unethical.[58][233] However, according to other research, workers on Amazon Mechanical Turk do not feel they are exploited and are ready to participate in crowdsourcing activities in the future.[234] A more recent study using stratified random sampling to access a representative sample of Mechanical Turk workers found that the U.S. MTurk population is financially similar to the general population.[188] Workers tend to participate in tasks as a form of paid leisure and to supplement their primary income, and only 7% view it as a full-time job. Overall, workers rated MTurk as less stressful than other jobs. Workers also earn more than previously reported, about $6.50 per hour. They see MTurk as part of the solution to their financial situation and report rare upsetting experiences. They also perceive requesters on MTurk as fairer and more honest than employers outside of the platform.[188] When Facebook began its localization program in 2008, it received criticism for using free labor in crowdsourcing the translation of site guidelines.[223] Typically, no written contracts, nondisclosure agreements, or employee agreements are made with crowdworkers.
For users of the Amazon Mechanical Turk, this means that employers decide whether users' work is acceptable and reserve the right to withhold pay if it does not meet their standards.[235] Critics say that crowdsourcing arrangements exploit individuals in the crowd, and a call has been made for crowds to organize for their labor rights.[236][195][237] Collaboration between crowd members can also be difficult or even discouraged, especially in the context of competitive crowdsourcing. The crowdsourcing site InnoCentive allows organizations to solicit solutions to scientific and technological problems; only 10.6% of respondents reported working in a team on their submission.[192] Amazon Mechanical Turk workers collaborated with academics to create a platform, WeAreDynamo.org, that allowed them to organize and create campaigns to better their work situation, but the site is no longer running.[238] Another platform run by Amazon Mechanical Turk workers and academics, Turkopticon, continues to operate and provides worker reviews of Amazon Mechanical Turk employers.[239] America Online settled the case Hallissey et al. v. America Online, Inc. for $15 million in 2009, after unpaid moderators sued to be paid the minimum wage as employees under the U.S. Fair Labor Standards Act. Besides insufficient compensation and other labor-related disputes, there have also been concerns regarding privacy violations, the hiring of vulnerable groups, breaches of anonymity, psychological damage, the encouragement of addictive behaviors, and more.[240] Many but not all of the issues related to crowdworkers overlap with concerns related to content moderators.
https://en.wikipedia.org/wiki/Crowd_sourcing
IEEE 802.11ahis awireless networkingprotocol published in 2017[1]calledWi-Fi HaLow[2][3][4](/ˈheɪˌloʊ/) as an amendment of theIEEE 802.11-2007wireless networking standard. It uses900 MHzlicense-exempt bandsto provide extended-rangeWi-Finetworks, compared to conventional Wi-Fi networks operating in the2.4 GHz,5 GHzand6 GHzbands. It also benefits from lower energy consumption, allowing the creation of large groups of stations or sensors that cooperate to share signals, supporting the concept of theInternet of things(IoT).[5]The protocol's low power consumption competes withBluetooth,LoRa,Zigbee, andZ-Wave,[6][7]and has the added benefit of higherdata ratesand wider coverage range.[2] A benefit of 802.11ah is extended range, making it useful for rural communications and offloadingcell phone towertraffic.[8]The other purpose of the protocol is to allow low rate 802.11 wireless stations to be used in the sub-gigahertz spectrum.[5]The protocol is one of the IEEE 802.11 technologies which is the most different from theLANmodel, especially concerning medium contention. A prominent aspect of 802.11ah is the behavior of stations that are grouped to minimize contention on the air media, use relay to extend their reach, use little power thanks to predefined wake/doze periods, are still able to send data at high speed under some negotiated conditions and use sectored antennas. It uses the 802.11a/g specification that is down sampled to provide 26 channels, each of them able to provide 100 kbit/sthroughput. It can cover a one-kilometer radius.[9]It aims at providing connectivity to thousands of devices under anaccess point. The protocol supportsmachine to machine(M2M) markets, likesmart metering.[10] Data rates up to 347 Mbit/s are achieved only with the maximum of four spatial streams using one 16 MHz-wide channel. Variousmodulationschemes andcodingrates are defined by the standard and are represented by aModulation and Coding Scheme(MCS) index value. The table below shows the relationships between the variables that allow for the maximum data rate. TheGuard interval(GI) is defined as the timing betweensymbols. 2 MHz channel uses anFFTof 64, of which: 56OFDMsubcarriers, 52 are for data and 4 arepilot toneswith a carrier separation of 31.25 kHz (2 MHz/64) (32 μs). Each of these subcarriers can be aBPSK,QPSK, 16-QAM, 64-QAMor 256-QAM. The total bandwidth is 2 MHz with an occupied bandwidth of 1.78 MHz. Total symbol duration is 36 or 40microseconds, whichincludesa guard interval of 4 or 8 microseconds.[9] A RelayAccess Point(AP) is an entity that logically consists of a Relay and anetworking station(STA), or client. The relay function allows an AP and stations to exchange frames with one another by the way of a relay. The introduction of a relay allows stations to use higher MCSs (Modulation and Coding Schemes) and reduce the time stations will stay in Active mode. This improves battery life of stations. Relay stations may also provide connectivity for stations located outside the coverage of the AP. There is an overhead cost on overall network efficiency and increased complexity with the use of relay stations. To limit this overhead, the relaying function shall be bi-directional and limited to two hops only. Power-saving stations are divided into two classes: TIM stations and non-TIM stations. TIM stations periodically receive information about traffic buffered for them from the access point in the so-called TIM information element, hence the name. 
Non-TIM stations use the new Target Wake Time mechanism which enables reducing signaling overhead.[11] Target Wake Time (TWT) is a function that permits an AP to define a specific time or set of times for individual stations to access the medium. The STA (client) and the AP exchange information that includes an expected activity duration to allow the AP to control the amount of contention and overlap among competing STAs. The AP can protect the expected duration of activity with various protection mechanisms. The use of TWT is negotiated between an AP and an STA. Target Wake Time may be used to reduce network energy consumption, as stations that use it can enter a doze state until their TWT arrives. Restricted Access Window allows partitioning of the stations within aBasic Service Set(BSS) into groups and restricting channel access only to stations belonging to a given group at any given time period. It helps to reduce contention and to avoid simultaneous transmissions from a large number of stations hidden from each other.[12][13] Bidirectional TXOP allows an AP and non-AP (STA or client) to exchange a sequence of uplink and downlink frames during a reserved time (transmit opportunity or TXOP). This operation mode is intended to reduce the number of contention-based channel accesses, improve channel efficiency by minimizing the number of frame exchanges required for uplink and downlink data frames, and enable stations to extend battery lifetime by keeping Awake times short. This continuous frame exchange is done both uplink and downlink between the pair of stations. In earlier versions of the standard Bidirectional TXOP was called Speed Frame Exchange.[14] The partition of the coverage area of a Basic Service Set (BSS) into sectors, each containing a subset of stations, is called sectorization. This partitioning is achieved through a set of antennas or a set of synthesized antenna beams to cover different sectors of the BSS. The goal of the sectorization is to reduce medium contention or interference by the reduced number of stations within a sector and/or to allow spatial sharing among overlapping BSS (OBSS) APs or stations. Another WLAN standard for sub-1 GHz bands isIEEE 802.11afwhich, unlike 802.11ah, operates in licensed bands. More specifically, 802.11af operates in the TVwhite space spectrumin theVHFandUHFbands between 54 and 790 MHz usingcognitive radiotechnology.[15]
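As a quick check of the physical-layer numbers quoted above for a 2 MHz channel (64-point FFT, 31.25 kHz carrier separation, 36 or 40 microsecond symbols with a 4 or 8 microsecond guard interval), the arithmetic can be reproduced in a few lines of Python. This is illustrative arithmetic only, not an implementation of the 802.11ah standard.

```python
# Reproduce the 802.11ah OFDM timing figures quoted in the text above.

channel_bw_hz = 2_000_000
fft_size = 64
subcarrier_spacing_hz = channel_bw_hz / fft_size       # 31_250 Hz = 31.25 kHz
useful_symbol_s = 1 / subcarrier_spacing_hz            # 32 microseconds

for guard_interval_s in (4e-6, 8e-6):                   # short and long guard interval
    total_symbol_s = useful_symbol_s + guard_interval_s  # 36 or 40 microseconds
    print(f"GI {guard_interval_s * 1e6:.0f} us -> symbol {total_symbol_s * 1e6:.0f} us")
```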
https://en.wikipedia.org/wiki/IEEE_802.11ah
OTA Bitmap (Over The Air Bitmap) was a specification designed by Nokia for black-and-white images for mobile phones. The OTA Bitmap was defined by Nokia as part of their Smart Messaging specification, to send pictures as a series of one or more concatenated SMS text messages. The format has a maximum size of 255x255 pixels. It is very rare for an OTA bitmap to measure anything other than 72x28 pixels (for Picture Messages) or 72x14/72x13 pixels (for Operator Logos). The specification contains a byte of data to be used for indicating a multicolour image. This was to future-proof the standard, but the advent of MMS meant it was never implemented. The OTA Bitmap format is a monochrome, uncompressed format using one bit per pixel. As the format was designed for cellular phones, there is no standard computer format. It may be stored as a binary file or as hex (usually without spaces) in a text file. The recognized extension is .otb. Before the image itself there is a header, which is four bytes wide. A typical example is 00 48 1C 01, where 48 hexadecimal is the width in pixels (72) and 1C hexadecimal is the height in pixels (28). Other possibilities are 00 48 0E 01 (for 72x14 bitmaps) and 00 48 0D 01 (for 72x13 bitmaps). After the header, the image data itself starts. As an example, consider a 72x28 pixel image whose first 8 pixels, reading from the top left-hand corner, are one white pixel (0) followed by seven black pixels (1111111), giving the first byte, in binary, as 01111111. Converting the binary 01111111 to hex gives the first data byte, 7F. If the next 8 pixels are all black (11111111, or FF), that is the second byte, and so on. When all pixels from the top row are encoded, encoding simply moves on to the next row; there are no markers to indicate a new row, as that information is contained in the header. In the case of an OTA bitmap whose width is not a multiple of eight pixels, a single byte is used to convey information from two rows (e.g., two pixels from the first row and six from the second). This is not the case in some other formats, so it is important to exercise care when converting between OTA and formats like WBMP.
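A minimal sketch of the byte layout described above is given below in Python: a four-byte header whose middle bytes hold the width and height (as in the 00 48 1C 01 example), followed by one bit per pixel packed most significant bit first, with rows run together and no per-row padding. The function name and the test pattern are invented for the example; this assumes the common single-bit-depth case and is not a reference implementation of Nokia's specification.

```python
# Sketch of packing a monochrome image into the OTA bitmap layout described above.

def encode_ota(pixels):
    """pixels: list of rows, each row a list of 0 (white) / 1 (black)."""
    height = len(pixels)
    width = len(pixels[0])
    # Leading and trailing header bytes fixed as in the document's examples (00 ... 01).
    header = bytes([0x00, width, height, 0x01])   # e.g. 00 48 1C 01 for 72x28

    bits = [bit for row in pixels for bit in row]  # rows run together, no padding
    bits += [0] * (-len(bits) % 8)                 # pad only the final byte, if needed

    body = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for bit in bits[i:i + 8]:
            byte = (byte << 1) | bit               # MSB-first packing
        body.append(byte)
    return header + bytes(body)

# Test pattern: first pixel white, the rest black, 72x28 pixels.
image = [[0] + [1] * 71] + [[1] * 72 for _ in range(27)]
print(encode_ota(image)[:5].hex())   # '00481c017f'
```

Running the sketch prints 00481c017f: the 00 48 1C 01 header followed by 7F, matching the first data byte worked out in the walkthrough above.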
https://en.wikipedia.org/wiki/OTA_bitmap
Pseudo-collision attack against up to 46 rounds of SHA-256.[2] SHA-2(Secure Hash Algorithm 2) is a set ofcryptographic hash functionsdesigned by the United StatesNational Security Agency(NSA) and first published in 2001.[3][4]They are built using theMerkle–Damgård construction, from a one-way compression function itself built using theDavies–Meyer structurefrom a specialized block cipher. SHA-2 includes significant changes from its predecessor,SHA-1. The SHA-2 family consists of six hash functions with digests (hash values) that are 224, 256, 384 or 512 bits:[5]SHA-224, SHA-256, SHA-384, SHA-512, SHA-512/224, SHA-512/256. SHA-256 and SHA-512 are hash functions whose digests are eight 32-bit and 64-bit words, respectively. They use different shift amounts and additive constants, but their structures are otherwise virtually identical, differing only in the number of rounds. SHA-224 and SHA-384 are truncated versions of SHA-256 and SHA-512 respectively, computed with different initial values. SHA-512/224 and SHA-512/256 are also truncated versions of SHA-512, but the initial values are generated using the method described inFederal Information Processing Standards(FIPS) PUB 180-4. SHA-2 was first published by theNational Institute of Standards and Technology(NIST) as a U.S. federal standard. The SHA-2 family of algorithms are patented in the U.S.[6]The United States has released the patent under aroyalty-freelicense.[5] As of 2011,[update]the best public attacks breakpreimage resistancefor 52 out of 64 rounds of SHA-256 or 57 out of 80 rounds of SHA-512, andcollision resistancefor 46 out of 64 rounds of SHA-256.[1][2] With the publication of FIPS PUB 180-2, NIST added three additional hash functions in the SHA family. The algorithms are collectively known as SHA-2, named after their digest lengths (in bits): SHA-256, SHA-384, and SHA-512. The algorithms were first published in 2001 in the draft FIPS PUB 180-2, at which time public review and comments were accepted. In August 2002, FIPS PUB 180-2 became the newSecure Hash Standard, replacing FIPS PUB 180-1, which was released in April 1995. The updated standard included the original SHA-1 algorithm, with updated technical notation consistent with that describing the inner workings of the SHA-2 family.[4] In February 2004, a change notice was published for FIPS PUB 180-2, specifying an additional variant, SHA-224, defined to match the key length of two-keyTriple DES.[7]In October 2008, the standard was updated in FIPS PUB 180-3, including SHA-224 from the change notice, but otherwise making no fundamental changes to the standard. The primary motivation for updating the standard was relocating security information about the hash algorithms and recommendations for their use to Special Publications 800-107 and 800-57.[8][9][10]Detailed test data and example message digests were also removed from the standard, and provided as separate documents.[11] In January 2011, NIST published SP800-131A, which specified a move from the then-current minimum of 80-bit security (provided by SHA-1) allowable for federal government use until the end of 2013, to 112-bit security (provided by SHA-2) being both the minimum requirement (starting in 2014) and the recommendedsecurity level(starting from the publication date in 2011).[12] In March 2012, the standard was updated in FIPS PUB 180-4, adding the hash functions SHA-512/224 and SHA-512/256, and describing a method for generating initial values for truncated versions of SHA-512. 
Additionally, a restriction onpaddingthe input data prior to hash calculation was removed, allowing hash data to be calculated simultaneously with content generation, such as a real-time video or audio feed. Padding the final data block must still occur prior to hash output.[13] In July 2012, NIST revised SP800-57, which provides guidance for cryptographic key management. The publication disallowed creation of digital signatures with a hash security lower than 112 bits after 2013. The previous revision from 2007 specified the cutoff to be the end of 2010.[10]In August 2012, NIST revised SP800-107 in the same manner.[9] TheNIST hash function competitionselected a new hash function,SHA-3, in 2012.[14]The SHA-3 algorithm is not derived from SHA-2. The SHA-2 hash function is implemented in some widely used security applications and protocols, includingTLSandSSL,PGP,SSH,S/MIME, andIPsec. The inherent computational demand of SHA-2 algorithms has driven the proposal of more efficient solutions, such as those based on application-specific integrated circuits (ASICs) hardware accelerators.[15] SHA-256 is used for authenticatingDebiansoftware packages[16]and in theDKIMmessage signing standard; SHA-512 is part of a system to authenticate archival video from theInternational Criminal Tribunal of the Rwandan genocide.[17]SHA-256 and SHA-512 are used inDNSSEC.[18]Linux distributions usually use 512-bit SHA-2 for secure password hashing.[19][20] Severalcryptocurrencies, includingBitcoin, use SHA-256 for verifying transactions and calculatingproof of work[21]orproof of stake.[22]The rise ofASICSHA-2 accelerator chips has led to the use ofscrypt-based proof-of-work schemes. SHA-1 and SHA-2 are theSecure Hash Algorithmsrequired by law for use in certainU.S. Governmentapplications, including use within other cryptographic algorithms and protocols, for the protection of sensitive unclassified information. FIPS PUB 180-1 also encouraged adoption and use of SHA-1 by private and commercial organizations. SHA-1 is being retired for most government uses; the U.S. National Institute of Standards and Technology says, "Federal agenciesshouldstop using SHA-1 for...applications that require collision resistance as soon as practical, and must use the SHA-2 family of hash functions for these applications after 2010" (emphasis in original).[23]NIST's directive that U.S. government agencies ought to, but not explicitly must, stop uses of SHA-1 after 2010[24]was hoped to accelerate migration away from SHA-1. The SHA-2 functions were not quickly adopted initially, despite better security than SHA-1. Reasons might include lack of support for SHA-2 on systems running Windows XP SP2 or older[25]and a lack of perceived urgency since SHA-1 collisions had not yet been found. TheGoogle Chrometeam announced a plan to make their web browser gradually stop honoring SHA-1-dependent TLS certificates over a period from late 2014 and early 2015.[26][27][28]Similarly, Microsoft announced[29]thatInternet ExplorerandEdge [Legacy]would stop honoring public SHA-1-signed TLS certificates from February 2017.Mozilladisabled SHA-1 in early January 2016, but had to re-enable it temporarily via aFirefoxupdate, after problems with web-based user interfaces of some router models andsecurity appliances.[30] For a hash function for whichLis the number ofbitsin themessage digest, finding a message that corresponds to a given message digest can always be done using abrute forcesearch in 2Levaluations. 
This is called apreimage attackand may or may not be practical depending onLand the particular computing environment. The second criterion, finding two different messages that produce the same message digest, known as acollision, requires on average only 2L/2evaluations using abirthday attack. Some of the applications that use cryptographic hashes, such as password storage, are only minimally affected by acollision attack. Constructing a password that works for a given account requires a preimage attack, as well as access to the hash of the original password (typically in theshadowfile) which may or may not be trivial. Reversing password encryption (e.g., to obtain a password to try against a user's account elsewhere) is not made possible by the attacks. (However, even a secure password hash cannot prevent brute-force attacks onweak passwords.) In the case of document signing, an attacker could not simply fake a signature from an existing document—the attacker would have to produce a pair of documents, one innocuous and one damaging, and get the private key holder to sign the innocuous document. There are practical circumstances in which this is possible; until the end of 2008, it was possible to create forgedSSLcertificates using anMD5collision which would be accepted by widely used web browsers.[31] Increased interest in cryptographic hash analysis during the SHA-3 competition produced several new attacks on the SHA-2 family, the best of which are given in the table below. Only the collision attacks are of practical complexity; none of the attacks extend to the full round hash function. AtFSE2012, researchers atSonygave a presentation suggesting pseudo-collision attacks could be extended to 52 rounds on SHA-256 and 57 rounds on SHA-512 by building upon thebicliquepseudo-preimage attack.[32] Implementations of all FIPS-approved security functions can be officially validated through theCMVP program, jointly run by theNational Institute of Standards and Technology(NIST) and theCommunications Security Establishment(CSE). For informal verification, a package to generate a high number of test vectors is made available for download on the NIST site; the resulting verification, however, does not replace the formal CMVP validation, which is required by law[citation needed]for certain applications. As of December 2013,[update]there are over 1300 validated implementations of SHA-256 and over 900 of SHA-512, with only 5 of them being capable of handling messages with a length in bits not a multiple of eight while supporting both variants.[41] Hash values of an empty string (i.e., a zero-length input text). Even a small change in the message will (with overwhelming probability) result in a different hash, due to theavalanche effect. For example, adding a period to the end of the following sentence changes approximately half (111 out of 224) of the bits in the hash, equivalent to picking a new hash at random: Pseudocode for the SHA-256 algorithm follows. Note the great increase in mixing between bits of thew[16..63]words compared to SHA-1. The computation of thechandmajvalues can be optimized the same wayas described for SHA-1. SHA-224 is identical to SHA-256, except that: SHA-512 is identical in structure to SHA-256, but: SHA-384 is identical to SHA-512, except that: SHA-512/t is identical to SHA-512 except that: TheSHA-512/t IV generation functionevaluates amodified SHA-512on the ASCII string "SHA-512/t", substituted with the decimal representation oft. 
Themodified SHA-512is the same as SHA-512 except its initial valuesh0throughh7have each beenXORedwith the hexadecimal constant0xa5a5a5a5a5a5a5a5. Sample C implementation for SHA-2 family of hash functions can be found inRFC6234. In the table below,internal statemeans the "internal hash sum" after each compression of a data block. In the bitwise operations column, "Rot" stands forrotate no carry, and "Shr" stands forright logical shift. All of these algorithms employmodular additionin some fashion except for SHA-3. More detailed performance measurements on modern processor architectures are given in the table below. The performance numbers labeled 'x86' were running using 32-bit code on 64-bit processors, whereas the 'x86-64' numbers are native 64-bit code. While SHA-256 is designed for 32-bit calculations, it does benefit from code optimized for 64-bit processors on the x86 architecture. 32-bit implementations of SHA-512 are significantly slower than their 64-bit counterparts. Variants of both algorithms with different output sizes will perform similarly, since the message expansion and compression functions are identical, and only the initial hash values and output sizes are different. The best implementations of MD5 and SHA-1 perform between 4.5 and 6 cycles per byte on modern processors. Testing was performed by theUniversity of Illinois at Chicagoon their hydra8 system running an Intel Xeon E3-1275 V2 at a clock speed of 3.5 GHz, and on their hydra9 system running an AMD A10-5800K APU at a clock speed of 3.8 GHz.[47]The referenced cycles per byte speeds above are the median performance of an algorithm digesting a 4,096 byte message using the SUPERCOP cryptographic benchmarking software.[48]The MiB/s performance is extrapolated from the CPU clockspeed on a single core; real-world performance will vary due to a variety of factors. Cryptography libraries that support SHA-2: Hardware acceleration is provided by the following processor extensions:
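As a concrete usage example, the SHA-2 digest lengths and the avalanche behaviour described above can be demonstrated with Python's standard hashlib module. This is a usage sketch only; the input string is arbitrary.

```python
# Demonstrate SHA-2 digest lengths and the avalanche effect using Python's hashlib.

import hashlib

msg = b"The quick brown fox jumps over the lazy dog"

# Digest lengths of the four main SHA-2 variants.
for name in ("sha224", "sha256", "sha384", "sha512"):
    digest = hashlib.new(name, msg).hexdigest()
    print(f"{name}: {len(digest) * 4} bits  {digest[:16]}...")

# Appending a single period changes roughly half of the 256 output bits.
a = int(hashlib.sha256(msg).hexdigest(), 16)
b = int(hashlib.sha256(msg + b".").hexdigest(), 16)
print("differing bits:", bin(a ^ b).count("1"), "of 256")
```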
https://en.wikipedia.org/wiki/SHA-2
Taxicab geometry or Manhattan geometry is geometry where the familiar Euclidean distance is ignored, and the distance between two points is instead defined to be the sum of the absolute differences of their respective Cartesian coordinates, a distance function (or metric) called the taxicab distance, Manhattan distance, or city block distance. The name refers to the island of Manhattan, or generically any planned city with a rectangular grid of streets, in which a taxicab can only travel along grid directions. In taxicab geometry, the distance between any two points equals the length of their shortest grid path. This different definition of distance also leads to a different definition of the length of a curve, for which a line segment between any two points has the same length as a grid path between those points rather than its Euclidean length. The taxicab distance is also sometimes known as rectilinear distance or L1 distance (see Lp space).[1] This geometry has been used in regression analysis since the 18th century, and is often referred to as LASSO. Its geometric interpretation dates to non-Euclidean geometry of the 19th century and is due to Hermann Minkowski. In the two-dimensional real coordinate space $\mathbb{R}^2$, the taxicab distance between two points $(x_1, y_1)$ and $(x_2, y_2)$ is $|x_1 - x_2| + |y_1 - y_2|$. That is, it is the sum of the absolute values of the differences in both coordinates. The taxicab distance $d_T$ between two points $\mathbf{p} = (p_1, p_2, \dots, p_n)$ and $\mathbf{q} = (q_1, q_2, \dots, q_n)$ in an n-dimensional real coordinate space with a fixed Cartesian coordinate system is the sum of the lengths of the projections of the line segment between the points onto the coordinate axes. More formally, $d_T(\mathbf{p}, \mathbf{q}) = \|\mathbf{p} - \mathbf{q}\|_T = \sum_{i=1}^{n} |p_i - q_i|$. For example, in $\mathbb{R}^2$, the taxicab distance between $\mathbf{p} = (p_1, p_2)$ and $\mathbf{q} = (q_1, q_2)$ is $|p_1 - q_1| + |p_2 - q_2|$. The L1 metric was used in regression analysis, as a measure of goodness of fit, in 1757 by Roger Joseph Boscovich.[2] Its interpretation as a distance between points in a geometric space dates to the late 19th century and the development of non-Euclidean geometries. Notably it appeared in 1910 in the works of both Frigyes Riesz and Hermann Minkowski. The formalization of Lp spaces, which include taxicab geometry as a special case, is credited to Riesz.[3] In developing the geometry of numbers, Hermann Minkowski established his Minkowski inequality, stating that these spaces define normed vector spaces.[4] The name taxicab geometry was introduced by Karl Menger in a 1952 booklet You Will Like Geometry, accompanying a geometry exhibit intended for the general public at the Museum of Science and Industry in Chicago.[5] Thought of as an additional structure layered on Euclidean space, taxicab distance depends on the orientation of the coordinate system and is changed by Euclidean rotation of the space, but is unaffected by translation or axis-aligned reflections.
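The distance formula above translates directly into code; the following short Python sketch (the helper name is chosen here purely for illustration) computes the taxicab distance in any number of dimensions.

```python
# Taxicab (Manhattan, L1) distance: sum of absolute coordinate differences.

def taxicab_distance(p, q):
    return sum(abs(pi - qi) for pi, qi in zip(p, q))

print(taxicab_distance((1, 2), (4, 6)))         # |1-4| + |2-6| = 7
print(taxicab_distance((0, 0, 0), (1, -2, 3)))  # 1 + 2 + 3 = 6
```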
Taxicab geometry satisfies all of Hilbert's axioms (a formalization of Euclidean geometry) except that the congruence of angles cannot be defined to precisely match the Euclidean concept, and under plausible definitions of congruent taxicab angles, the side-angle-side axiom is not satisfied, as in general triangles with two taxicab-congruent sides and a taxicab-congruent angle between them are not congruent triangles. In any metric space, a sphere is a set of points at a fixed distance, the radius, from a specific center point. Whereas a Euclidean sphere is round and rotationally symmetric, under the taxicab distance the shape of a sphere is a cross-polytope, the n-dimensional generalization of a regular octahedron, whose points $\mathbf{p}$ satisfy the equation $d_T(\mathbf{p}, \mathbf{c}) = r$, where $\mathbf{c}$ is the center and $r$ is the radius. Points $\mathbf{p}$ on the unit sphere, a sphere of radius 1 centered at the origin, satisfy the equation $d_T(\mathbf{p}, \mathbf{0}) = \sum_{i=1}^{n} |p_i| = 1$. In two-dimensional taxicab geometry, the sphere (called a circle) is a square oriented diagonally to the coordinate axes. On a square grid, the set of all points at a fixed distance from a given center point becomes more numerous as the grid is made finer, and in the limit tends to a continuous tilted square. Each side has taxicab length 2r, so the circumference is 8r. Thus, in taxicab geometry, the value of the analog of the circle constant π, the ratio of circumference to diameter, is equal to 4. A closed ball (or closed disk in the 2-dimensional case) is a filled-in sphere, the set of points at distance less than or equal to the radius from a specific center. For cellular automata on a square grid, a taxicab disk is the von Neumann neighborhood of range r of its center. A circle of radius r for the Chebyshev distance (L∞ metric) on a plane is also a square, with side length 2r parallel to the coordinate axes, so planar Chebyshev distance can be viewed as equivalent by rotation and scaling to planar taxicab distance. However, this equivalence between the L1 and L∞ metrics does not generalize to higher dimensions. Whenever each pair in a collection of these circles has a nonempty intersection, there exists an intersection point for the whole collection; therefore, the Manhattan distance forms an injective metric space. Let $y = f(x)$ be a continuously differentiable function. Let $s$ be the taxicab arc length of the graph of $f$ on some interval $[a, b]$. Take a partition of the interval into $n$ equal infinitesimal subintervals, and let $\Delta s_i$ be the taxicab length of the $i$th subarc. Then[6] $\Delta s_i = \Delta x_i + \Delta y_i = \Delta x_i + |f(x_i) - f(x_{i-1})|$. By the mean value theorem, there exists some point $x_i^*$ between $x_{i-1}$ and $x_i$ such that $f(x_i) - f(x_{i-1}) = f'(x_i^*)\,\Delta x_i$.[7] Then the previous equation can be written $\Delta s_i = \Delta x_i + |f'(x_i^*)|\,\Delta x_i = \Delta x_i\,(1 + |f'(x_i^*)|)$. Then $s$ is given as the sum of $\Delta s_i$ over every subinterval of the partition of $[a, b]$ as the subintervals get arbitrarily small.
That is, $s = \lim_{n \to \infty} \sum_{i=1}^{n} \Delta x_i\,(1 + |f'(x_i^*)|) = \int_a^b \bigl(1 + |f'(x)|\bigr)\,dx$. To test this, take the taxicab circle of radius $r$ centered at the origin. Its curve in the first quadrant is given by $f(x) = -x + r$, whose length is $s = \int_0^r \bigl(1 + |{-1}|\bigr)\,dx = 2r$. Multiplying this value by 4 to account for the remaining quadrants gives $8r$, which agrees with the circumference of a taxicab circle.[8] Now take the Euclidean circle of radius $r$ centered at the origin, which is given by $f(x) = \sqrt{r^2 - x^2}$. Its arc length in the first quadrant is given by $s = \int_0^r \left(1 + \left|\tfrac{-x}{\sqrt{r^2 - x^2}}\right|\right) dx = \left[x - \sqrt{r^2 - x^2}\right]_0^r = r - (-r) = 2r$. Accounting for the remaining quadrants gives $4 \times 2r = 8r$ again. Therefore, the circumferences of the taxicab circle and of the Euclidean circle in the taxicab metric are equal.[9] In fact, for any function $f$ that is monotonic and differentiable with a continuous derivative over an interval $[a, b]$, the arc length of $f$ over $[a, b]$ is $(b - a) + |f(b) - f(a)|$.[10] Two triangles are congruent if and only if three corresponding sides are equal in distance and three corresponding angles are equal in measure. There are several theorems that guarantee triangle congruence in Euclidean geometry, namely Angle-Angle-Side (AAS), Angle-Side-Angle (ASA), Side-Angle-Side (SAS), and Side-Side-Side (SSS). In taxicab geometry, however, only SASAS guarantees triangle congruence.[11] Take, for example, two right isosceles taxicab triangles whose angles measure 45-90-45. The two legs of both triangles have a taxicab length of 2, but the hypotenuses are not congruent. This counterexample eliminates AAS, ASA, and SAS. It also eliminates AASS, AAAS, and even ASASA. Having three congruent angles and two sides does not guarantee triangle congruence in taxicab geometry. Therefore, the only triangle congruence theorem in taxicab geometry is SASAS, where all three corresponding sides must be congruent and at least two corresponding angles must be congruent.[12] This result is mainly due to the fact that the length of a line segment depends on its orientation in taxicab geometry. In solving an underdetermined system of linear equations, the regularization term for the parameter vector is expressed in terms of the $\ell_1$ norm (taxicab geometry) of the vector.[13] This approach appears in the signal recovery framework called compressed sensing. Taxicab geometry can be used to assess the differences in discrete frequency distributions. For example, in RNA splicing, positional distributions of hexamers, which plot the probability of each hexamer appearing at each given nucleotide near a splice site, can be compared with L1-distance. Each position distribution can be represented as a vector where each entry represents the likelihood of the hexamer starting at a certain nucleotide. A large L1-distance between the two vectors indicates a significant difference in the nature of the distributions, while a small distance denotes similarly shaped distributions.
This is equivalent to measuring the area between the two distribution curves because the area of each segment is the absolute difference between the two curves' likelihoods at that point. When summed together for all segments, it provides the same measure as L1-distance.[14]
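The comparison of positional distributions described above amounts to summing absolute differences entry by entry, as in the short Python sketch below; the example vectors are invented purely for illustration.

```python
# Compare two discrete positional distributions by their L1 (taxicab) distance.

def l1_distance(p, q):
    return sum(abs(pi - qi) for pi, qi in zip(p, q))

dist_a = [0.10, 0.40, 0.30, 0.20]   # hypothetical positional probabilities
dist_b = [0.05, 0.45, 0.35, 0.15]
print(l1_distance(dist_a, dist_b))  # about 0.20 -> similarly shaped distributions
```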
https://en.wikipedia.org/wiki/Taxicab_geometry
Asmear campaign, also referred to as asmear tacticor simply asmear, is an effort to damage or call into question someone'sreputation, by propounding negativepropaganda.[1]It makes use ofdiscrediting tactics. It can be applied to individuals or groups. Common targets are public officials,politicians,heads of state,political candidates,activists, celebrities (especially those who are involved in politics), and ex-spouses. The term also applies in other contexts, such as the workplace.[2]The termsmear campaignbecame popular around the year 1936.[3] A smear campaign is an intentional, premeditated effort to undermine an individual's or group's reputation, credibility, andcharacter.[4]Likenegative campaigning, most often smear campaigns target government officials, politicians, political candidates, and other public figures.[5]However, public relations campaigns might also employ smear tactics in the course of managing an individual or institutional brand to target competitors and potential threats.[6]Discrediting tactics are used to discourage people from believing in the figure or supporting their cause, such as the use ofdamaging quotations. Smear tactics differ from normal discourse or debate in that they do not bear upon the issues or arguments in question. A smear is a simple attempt to malign a group or an individual with the aim of undermining their credibility. Smears often consist ofad hominemattacks in the form of unverifiable rumors anddistortions,half-truths, or even outrightlies; smear campaigns are often propagated bygossip magazines. Even when the facts behind a smear campaign are demonstrated to lack proper foundation, the tactic is often effective because the target's reputation is tarnished before the truth is known. Smear campaigns can also be used as acampaign tacticassociated withtabloid journalism, which is a type of journalism that presents little well-researched news and instead uses eye-catching headlines, scandal-mongering andsensationalism. For example, duringGary Hart's 1988 presidential campaign (see below), theNew York Postreported on its front page big, black block letters: "GARY: I'M NO WOMANIZER."[7][8] Smears are also effective in diverting attention away from the matter in question and onto a specific individual or group. The target of the smear typically must focus on correcting thefalse informationrather than on the original issue. Deflection has been described as awrap-up smear: "You make up something. Then you have the press write about it. And then you say, everybody is writing about this charge".[9] In the U.S. judicial system, discrediting tactics (calledwitness impeachment) are the approved method for attacking the credibility of any witness in court, including aplaintiffordefendant. In cases with significantmass mediaattention or high-stakes outcomes, those tactics often take place in public as well. Logically, an argument is held in discredit if the underlying premise is found, "So severely in error that there is cause to remove the argument from the proceedings because of its prejudicial context and application...".Mistrialproceedings in civil and criminal courts do not always require that an argument brought by defense or prosecution be discredited, however appellate courts must consider the context and may discredit testimony as perjurious or prejudicial, even if the statement is technically true. Smear tactics are commonly used to undermine effective arguments or critiques. During the 1856 presidential election,John C. 
Frémontwas the target of a smear campaign alleging that he was aCatholic, among other accusations. The campaign was designed to undermine support for Fremont from those who weresuspicious of Catholics.[10] Ralph Naderwas the victim of a smear campaign during the 1960s, when he was campaigning for car safety. In order to smear Nader and deflect public attention from his campaign,General Motorsengaged private investigators to search for damaging or embarrassing incidents from his past. In early March 1966, several media outlets, includingThe New RepublicandThe New York Times, reported that GM had tried to discredit Nader, hiring private detectives totap his phonesand investigate his past and hiring prostitutes to trap him in compromising situations.[11][12]Nader sued the company forinvasion of privacyand settled the case for $284,000. Nader's lawsuit against GM was ultimately decided by theNew York Court of Appeals, whose opinion in the case expandedtort lawto cover "overzealous surveillance."[13]Nader used the proceeds from the lawsuit to start the pro-consumer Center for Study of Responsive Law. Gary Hartwas the target of a smear campaign during the 1988 US presidential campaign. TheNew York Postonce reported on its front page big, black block letters: "GARY: I'M NO WOMANIZER."[7][8] In 2011, China launched a smear campaign againstApple, including TV and radio advertisements and articles in state-run papers. The campaign failed to turn the Chinese public against the company and its products.[14] Chris Bryant, a British parliamentarian, accused Russia in 2012 of orchestrating a smear campaign against him because of his criticism ofVladimir Putin.[15]In 2017 he alleged that other British officials are vulnerable to Russian smear campaigns.[16][17] In 2024,The New York Timesreported on an alleged smear campaign conducted against actressBlake Livelyafter she accusedJustin Baldoniof misconduct.[18]The alleged smear campaign allegedly pushed negative stories about Lively and used social media to boost those stories. In January 2025, Baldoni filed a suit in the federal District Court for the Southern District of New York against Blake, her husband Ryan Reynolds, and publicist, for $400 million in damages alleging civil extortion, defamation, and a slew of contract-related claims.[19] In January 2007, it was revealed that an anonymous website that attacked critics ofOverstock.com, including media figures and private citizens on message boards, was operated by an official of Overstock.com.[20][21] In 2023,The New Yorkerreported thatMohamed bin Zayedwas paying millions of euros to a Swiss firm, Alp Services for orchestrating asmear campaignto defame the Emirati targets, including Qatar and the Muslim Brotherhood. Under the ‘dark PR’, Alp posted false and defamatory Wikipedia entries against them. The Emirates also paid the Swiss firm to publish propaganda articles against the targets. Multiple meetings took place between the Alp Services headMario Breroand an Emirati official, Matar Humaid al-Neyadi. However, Alp’s bills were sent directly to MbZ. The defamation campaign also targeted an American, Hazim Nada, and his firm, Lord Energy, because his fatherYoussef Nadahad joined the Muslim Brotherhood as a teenager.[22]
https://en.wikipedia.org/wiki/Smear_campaign
Mādhava of Sangamagrāma(Mādhavan)[4](c.1340– c.1425) was an Indianmathematicianandastronomerwho is considered to be the founder of theKerala school of astronomy and mathematicsin theLate Middle Ages. Madhava made pioneering contributions to the study ofinfinite series,calculus,trigonometry,geometryandalgebra. He was the first to use infinite series approximations for a range of trigonometric functions, which has been called the "decisive step onward from the finite procedures of ancient mathematics to treat theirlimit-passage toinfinity".[1] Little is known about Madhava's life with certainty. However, from scattered references to Madhava found in diverse manuscripts, historians of Kerala school have pieced together information about the mathematician. In a manuscript preserved in the Oriental Institute, Baroda, Madhava has been referred to asMādhavan vēṇvārōhādīnām karttā ... Mādhavan Ilaññippaḷḷi Emprān.[4]It has been noted that the epithet 'Emprān' refers to theEmprāntiricommunity, to which Madhava might have belonged.[5] The term "Ilaññippaḷḷi" has been identified as a reference to the residence of Madhava. This is corroborated by Madhava himself. In his short work on the moon's positions titledVeṇvāroha, Madhava says that he was born in a house namedbakuḷādhiṣṭhita . . . vihāra.[6]This is clearly Sanskrit forIlaññippaḷḷi.Ilaññiis the Malayalam name of the evergreen treeMimusops elengiand the Sanskrit name for the same isBakuḷa. Palli is a term for village. The Sanskrit house namebakuḷādhiṣṭhita . . . vihārahas also been interpreted as a reference to the Malayalam house nameIraññi ninna ppaḷḷiand some historians have tried to identify it with one of two currently existing houses with namesIriññanavaḷḷiandIriññārapaḷḷiboth of which are located nearIrinjalakudatown in central Kerala.[6]This identification is far fetched because both names have neither phonetic similarity nor semantic equivalence to the word "Ilaññippaḷḷi".[5] Most of the writers of astronomical and mathematical works who lived after Madhava's period have referred to Madhava as "Sangamagrama Madhava" and as such it is important that the real import of the word "Sangamagrama" be made clear. The general view among many scholars is that Sangamagrama is the town ofIrinjalakudasome 70 kilometers south of the Nila river and about 70 kilometers south ofCochin.[5]It seems that there is not much concrete ground for this belief except perhaps the fact that the presiding deity of an early medieval temple in the town, theKoodalmanikyam Temple, is worshiped as Sangameswara meaning the Lord of the Samgama and so Samgamagrama can be interpreted as the village of Samgameswara. But there are several places inKarnatakawithsamgamaor its equivalentkūḍalain their names and with a temple dedicated to Samgamḗsvara, the lord of the confluence. (KudalasangamainBagalkot districtis one such place with a celebrated temple dedicated to the Lord of the Samgama.)[5] There is a small town on the southern banks of the Nila river, around 10 kilometers upstream fromTirunavaya, called Kūḍallūr. The exact literal Sanskrit translation of this place name is Samgamagram:kūṭalin Malayalam means a confluence (which in Sanskrit issamgama) andūrmeans a village (which in Sanskrit isgrama). Also the place is at the confluence of the Nila river and its most important tributary, namely, the Kunti river. (There is no confluence of rivers near Irinjalakuada.) 
Incidentally there is still existing aNambudiri(Malayali Brahmin) family by nameKūtallūr Manaa few kilometers away from the Kudallur village. The family has its origins in Kudallur village itself. For many generations this family hosted a greatGurukulamspecialising inVedanga.[5]That the only available manuscript ofSphuṭacandrāpti, a book authored by Madhava, was obtained from the manuscript collection ofKūtallūr Manamight strengthen the conjecture that Madhava might have had some association withKūtallūr Mana.[7]Thus the most plausible possibility is that the forefathers of Madhava migrated from the Tulu land or thereabouts to settle in Kudallur village, which is situated on the southern banks of the Nila river not far from Tirunnavaya, a generation or two before his birth and lived in a house known asIlaññippaḷḷiwhose present identity is unknown.[5] There are also no definite evidences to pinpoint the period during which Madhava flourished. In his Venvaroha, Madhava gives a date in 1400 CE as the epoch. Madhava's pupilParameshvara Nambudiri, the only known direct pupil of Madhava, is known to have completed his seminal workDrigganitain 1430 and the Paramesvara's date has been determined asc.1360-1455. From such circumstantial evidences historians have assigned the datec.1340– c.1425to Madhava. Although there is some evidence of mathematical work in Kerala prior to Madhava (e.g.,Sadratnamala[which?]c. 1300, a set of fragmentary results[8]), it is clear from citations that Madhava provided the creative impulse for the development of a rich mathematical tradition in medieval Kerala. However, except for a couple, most of Madhava's original works have been lost. He is referred to in the work of subsequent Kerala mathematicians, particularly inNilakantha Somayaji'sTantrasangraha(c. 1500), as the source for several infinite series expansions, including sinθand arctanθ. The 16th-century textMahajyānayana prakāra(Method of Computing Great Sines) cites Madhava as the source for several series derivations forπ. InJyeṣṭhadeva'sYuktibhāṣā(c. 1530),[9]written inMalayalam, these series are presented with proofs in terms of theTaylor seriesexpansions for polynomials like 1/(1+x2), withx= tanθ, etc. Thus, what is explicitly Madhava's work is a source of some debate. TheYukti-dipika(also called theTantrasangraha-vyakhya), possibly composed bySankara Variar, a student of Jyeṣṭhadeva, presents several versions of the series expansions for sinθ, cosθ, and arctanθ, as well as some products with radius and arclength, most versions of which appear in Yuktibhāṣā. For those that do not, Rajagopal and Rangachari have argued, quoting extensively from the original Sanskrit,[1]that since some of these have been attributed by Nilakantha to Madhava, some of the other forms might also be the work of Madhava. Others have speculated that the early textKaranapaddhati(c. 1375–1475), or theMahajyānayana prakārawas written by Madhava, but this is unlikely.[3] Karanapaddhati, along with the even earlier Keralite mathematics textSadratnamala, as well as theTantrasangrahaandYuktibhāṣā, were considered in an 1834 article byC. M. 
Whish, which was the first to draw attention to their priority over Newton in discovering theFluxion(Newton's name for differentials).[8]In the mid-20th century, the Russian scholar Jushkevich revisited the legacy of Madhava,[10]and a comprehensive look at the Kerala school was provided by Sarma in 1972.[11] There are several known astronomers who preceded Madhava, including Kǖṭalur Kizhār (2nd century),[12]Vararuci (4th century), andŚaṅkaranārāyaṇa(866 AD). It is possible that other unknown figures preceded him. However, we have a clearer record of the tradition after Madhava.Parameshvarawas a direct disciple. According to apalm leaf manuscriptof a Malayalam commentary on theSurya Siddhanta, Parameswara's son Damodara (c. 1400–1500) had Nilakantha Somayaji as one of his disciples. Jyeshtadeva was a disciple of Nilakantha.Achyutha Pisharadiof Trikkantiyur is mentioned as a disciple of Jyeṣṭhadeva, and the grammarianMelpathur Narayana Bhattathirias his disciple.[9] If we consider mathematics as a progression from finite processes of algebra to considerations of the infinite, then the first steps towards this transition typically come with infinite series expansions. It is this transition to the infinite series that is attributed to Madhava. In Europe, the first such series were developed byJames Gregoryin 1667. Madhava's work is notable for the series, but what is truly remarkable is his estimate of an error term (or correction term).[13]This implies that he understood very well the limit nature of the infinite series. Thus, Madhava may have invented the ideas underlyinginfinite seriesexpansions of functions,power series,trigonometric series, and rational approximations of infinite series.[14] However, as stated above, which results are precisely Madhava's and which are those of his successors is difficult to determine. The following presents a summary of results that have been attributed to Madhava by various scholars. Among his many contributions, he discovered infinite series for thetrigonometric functionsofsine,cosine,arctangent, and many methods for calculating thecircumferenceof acircle. One of Madhava's series is known from the textYuktibhāṣā, which contains the derivation and proof of thepower seriesforinverse tangent, discovered by Madhava.[15]In the text,Jyeṣṭhadevadescribes the series in the following manner: The first term is the product of the given sine and radius of the desired arc divided by the cosine of the arc. The succeeding terms are obtained by a process of iteration when the first term is repeatedly multiplied by the square of the sine and divided by the square of the cosine. All the terms are then divided by the odd numbers 1, 3, 5, .... The arc is obtained by adding and subtracting respectively the terms of odd rank and those of even rank. It is laid down that the sine of the arc or that of its complement whichever is the smaller should be taken here as the given sine. Otherwise the terms obtained by this above iteration will not tend to the vanishing magnitude.[16] This yields: or equivalently: This series isGregory's series(named afterJames Gregory, who rediscovered it three centuries after Madhava). Even if we consider this particular series as the work ofJyeṣṭhadeva, it would pre-date Gregory by a century, and certainly other infinite series of a similar nature had been worked out by Madhava. Today, it is referred to as theMadhava-Gregory-Leibniz series.[16][17] Madhava composed an accurate table of sines. 
Madhava's values are accurate to the seventh decimal place. Marking a quarter circle at twenty-four equal intervals, he gave the lengths of the half-chord (sines) corresponding to each of them. It is believed that he may have computed these values based on the series expansions:[18] Madhava's work on the value of the mathematicalconstant Piis cited in theMahajyānayana prakāra("Methods for the great sines").[citation needed]While some scholars such as Sarma[9]feel that this book may have been composed by Madhava himself, it is more likely the work of a 16th-century successor.[18]This text attributes most of the expansions to Madhava, and gives the followinginfinite seriesexpansion ofπ, now known as theMadhava-Leibniz series:[19][20] which he obtained from the power-series expansion of the arc-tangent function. However, what is most impressive is that he also gave a correction termRnfor the error after computing the sum up tonterms,[18]namely: where the third correction leads to highly accurate computations ofπ. It has long been speculated how Madhava found these correction terms.[21]They are the first three convergents of a finite continued fraction, which, when combined with the original Madhava's series evaluated tonterms, yields about 3n/2 correct digits: The absolute value of the correction term in next higher order is He also gave a more rapidly converging series by transforming the original infinite series ofπ, obtaining the infinite series By using the first 21 terms to compute an approximation ofπ, he obtains a value correct to 11 decimal places (3.14159265359).[22]The value of 3.1415926535898, correct to 13 decimals, is sometimes attributed to Madhava,[23]but may be due to one of his followers. These were the most accurate approximations ofπgiven since the 5th century (seeHistory of numerical approximations ofπ). The textSadratnamalaappears to give the astonishingly accurate value ofπ= 3.14159265358979324 (correct to 17 decimal places). Based on this, R. Gupta has suggested that this text was also composed by Madhava.[3][22] Madhava also carried out investigations into other series for arc lengths and the associated approximations to rational fractions ofπ.[3] Madhava developed thepower seriesexpansion for some trigonometry functions which were further developed by his successors at theKerala school of astronomy and mathematics.[24](Certain ideas of calculus were known toearlier mathematicians.) Madhava also extended some results found in earlier works, including those ofBhāskara II.[24]However, they did not combine many differing ideas under the two unifying themes of the derivative and the integral, show the connection between the two, or turn calculus into the powerful problem-solving tool we have today.[25] K. V. Sarmahas identified Madhava as the author of the following works:[26][27] The Kerala school of astronomy and mathematics, founded by Madhava, flourished between the 14th and 16th centuries, and included among its membersParameshvara,Neelakanta Somayaji,Jyeshtadeva,Achyuta Pisharati,Melpathur Narayana Bhattathiriand Achyuta Panikkar. 
The group is known for the series expansions of the three trigonometric functions sine, cosine and arctangent; proofs of their results were later given in the Yuktibhasa.[8][24][25] The group also did much other work in astronomy: more pages are devoted to astronomical computations than to purely mathematical results.[9] The Kerala school also contributed to linguistics (the relation between language and mathematics is an ancient Indian tradition, see Kātyāyana). The ayurvedic and poetic traditions of Kerala can be traced back to this school. The famous poem, Narayaniyam, was composed by Narayana Bhattathiri. Madhava has been called "the greatest mathematician-astronomer of medieval India",[3] and "some of his discoveries in this field show him to have possessed extraordinary intuition".[29] O'Connor and Robertson state that a fair assessment of Madhava is that he took the decisive step towards modern classical analysis.[18] The Kerala school was well known in the 15th and 16th centuries, in the period of the first contact with European navigators on the Malabar Coast. At the time, the port of Muziris, near Sangamagrama, was a major center for maritime trade, and a number of Jesuit missionaries and traders were active in this region. Given the fame of the Kerala school, and the interest shown by some of the Jesuit groups during this period in local scholarship, some scholars, including G. Joseph of the University of Manchester, have suggested[30] that the writings of the Kerala school may have also been transmitted to Europe around this time, which was still about a century before Newton.[31] However, there is no direct evidence by way of relevant manuscripts that such a transmission actually took place.[31] According to David Bressoud, "there is no evidence that the Indian work of series was known beyond India, or even outside of Kerala, until the nineteenth century."[32]
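The π series and correction terms described above translate directly into a short computation. The following is a minimal sketch in Python, added for illustration; the correction-term formula used, R(n) = (n² + 1)/(4n³ + 5n), is one of the three forms commonly attributed to Madhava in the secondary literature rather than text reproduced from this article.

import math

def madhava_pi(n_terms=21):
    """Madhava-Leibniz series pi/4 = 1 - 1/3 + 1/5 - ... plus a correction term."""
    n = n_terms
    partial = sum((-1) ** k / (2 * k + 1) for k in range(n))
    # Third of the correction terms commonly attributed to Madhava.
    correction = (n ** 2 + 1) / (4 * n ** 3 + 5 * n)
    # The correction carries the sign of the first omitted term of the series.
    return 4 * (partial + (-1) ** n * correction)

raw = 4 * sum((-1) ** k / (2 * k + 1) for k in range(21))
print("21 terms, no correction:  ", raw, "error", abs(raw - math.pi))
print("21 terms with correction: ", madhava_pi(21), "error", abs(madhava_pi(21) - math.pi))

Running this shows how dramatically the correction term accelerates the otherwise slowly converging series, which is the point the article makes about Madhava's estimate of the error term.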
https://en.wikipedia.org/wiki/Madhava_of_Sangamagrama
A tensor product network, in artificial neural networks, is a network that exploits the properties of tensors to model associative concepts such as variable assignment. Orthonormal vectors are chosen to represent the ideas (such as variable names and target assignments), and the tensor product of these vectors constructs a network whose mathematical properties allow the user to easily extract the association from it.
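A minimal numerical sketch of this idea, in Python with NumPy (the role and filler names are invented for the example): each association is the outer (tensor) product of an orthonormal "role" vector with a "filler" vector, the network's weights are the superposition of these products, and a filler is recovered by contracting the weights with its role.

import numpy as np

# Orthonormal "role" vectors (e.g., variable names); standard basis vectors
# are a trivially orthonormal choice.
role_x = np.array([1.0, 0.0, 0.0])
role_y = np.array([0.0, 1.0, 0.0])

filler_a = np.array([0.5, 0.5, 0.7])   # value bound to the first role
filler_b = np.array([0.9, 0.1, 0.3])   # value bound to the second role

# Each association is the tensor (outer) product of a role and a filler;
# the network's weight matrix is the superposition of all associations.
memory = np.outer(role_x, filler_a) + np.outer(role_y, filler_b)

# Retrieval: contracting the memory with a role vector returns its filler.
recovered_a = role_x @ memory
recovered_b = role_y @ memory

assert np.allclose(recovered_a, filler_a)
assert np.allclose(recovered_b, filler_b)
print(recovered_a, recovered_b)

Orthonormality of the role vectors is what makes the retrieval exact; with merely random roles the cross-terms would only approximately cancel.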
https://en.wikipedia.org/wiki/Tensor_product_network
Incomputer scienceandinformation theory,data differencingordifferential compressionis producing a technical description of the difference between two sets of data – a source and a target. Formally, a data differencing algorithm takes as input source data and target data, and produces difference data such that given the source data and the difference data, one can reconstruct the target data ("patching" the source with the difference to produce the target). One of the best-known examples of data differencing is thediffutility, which produces line-by-line differences oftext files(and in some implementations,binary files, thus being a general-purpose differencing tool). Differencing of general binary files goes under the rubric ofdelta encoding, with a widely used example being the algorithm used inrsync. A standardized generic differencing format isVCDIFF, implemented in such utilities asXdeltaversion 3. A high-efficiency (small patch files) differencing program is bsdiff, which usesbzip2as a final compression step on the generated delta.[1] Main concerns for data differencing areusabilityandspace efficiency(patch size). If one simply wishes to reconstruct the target given the source and patch, one may simply include the entire target in the patch and "apply" the patch by discarding the source and outputting the target that has been included in the patch; similarly, if the source and target have the same size one may create a simple patch byXORingsource and target. In both these cases, the patch will be as large as the target. As these examples show, if the only concern is reconstruction of target, this is easily done, at the expense of a large patch, and the main concern for general-purpose binary differencing is reducing the patch size. For structured data especially, one has other concerns, which largely fall under "usability" – for example, if one iscomparingtwo documents, one generally wishes to knowwhichsections have changed, or if some sections have been moved around – one wishes to understandhowthe documents differ. For instance "here 'cat' was changed to 'dog', and paragraph 13 was moved to paragraph 14". One may also wish to haverobustdifferences – for example, if two documents A and B differ in paragraph 13, one may wish to be able to apply this patch even if one has changed paragraph 7 of A. An example of this is in diff, which shows which lines changed, and where the context format allows robustness and improves human readability. Other concerns include computational efficiency, as for data compression – finding a small patch can be very time and memory intensive. Best results occur when one has knowledge of the data being compared and other constraints:diffis designed for line-oriented text files, particularly source code, and works best for these; thersyncalgorithm is used based on source and target being across a network from each other and communication being slow, so it minimizes data that must be transmitted; and the updates forGoogle Chromeuse an algorithm customized to the archive and executable format of the program's data.[2][3] Data compressioncan be seen as a special case of data differencing[4][5]– data differencing consists of producing adifferencegiven asourceand atarget, with patching producing atargetgiven asourceand adifference,while data compression consists of producing a compressed file given a target, and decompression consists of producing a target given only a compressed file. 
Thus, one can consider data compression as data differencing with empty source data, the compressed file corresponding to a "difference from nothing". This is the same as considering absolute entropy (corresponding to data compression) as a special case of relative entropy (corresponding to data differencing) with no initial data. When one wishes to emphasize the connection, one may use the term differential compression to refer to data differencing. The terminology of the two fields translates directly: the source plays the role of the (empty) prior data, the target that of the uncompressed data, the difference that of the compressed file, and patching that of decompression.
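The equal-size XOR patch mentioned above is the simplest scheme to show end to end. A toy sketch in Python follows; the function names are illustrative and do not correspond to any particular tool.

def make_patch(source: bytes, target: bytes) -> bytes:
    """Naive differencing for equal-sized data: the patch is source XOR target."""
    if len(source) != len(target):
        raise ValueError("this toy scheme requires source and target of equal size")
    return bytes(s ^ t for s, t in zip(source, target))

def apply_patch(source: bytes, patch: bytes) -> bytes:
    """Patching: XORing the source with the difference reproduces the target."""
    return bytes(s ^ p for s, p in zip(source, patch))

source = b"the quick brown fox jumps over the lazy dog"
target = b"the quick brown cat jumps over the lazy dog"

patch = make_patch(source, target)
assert apply_patch(source, patch) == target
print(len(patch), "patch bytes,", patch.count(0), "of them zero (unchanged positions)")

As the article notes, such a patch is as large as the target; practical tools such as diff, rsync, xdelta (VCDIFF) and bsdiff instead work to make the difference small, for example by encoding copies of unchanged runs taken from the source.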
https://en.wikipedia.org/wiki/Data_differencing
William H. Inmon (born 1945) is an American computer scientist, recognized by many as the father of the data warehouse.[1][2] Inmon wrote the first book, held the first conference (with Arnie Barnett), wrote the first magazine column and was the first to offer classes in data warehousing. Inmon created the accepted definition of what a data warehouse is: a subject-oriented, non-volatile, integrated, time-variant collection of data in support of management's decisions. Compared with the approach of the other pioneering architect of data warehousing, Ralph Kimball, Inmon's approach is often characterized as a top-down approach. William H. Inmon was born July 20, 1945, in San Diego, California. He received his Bachelor of Science degree in mathematics from Yale University in 1967, and his Master of Science degree in computer science from New Mexico State University. He worked for American Management Systems and Coopers & Lybrand before 1991, when he founded the company Prism Solutions, which he took public. In 1995 he founded Pine Cone Systems, which was later renamed Ambeo. In 1999, he created a corporate information factory web site for his consulting business.[3] Inmon coined terms such as the government information factory, as well as data warehousing 2.0. Inmon promotes the building, usage, and maintenance of data warehouses and writes on related topics. His books include "Building the Data Warehouse" (1992, with later editions) and "DW 2.0: The Architecture for the Next Generation of Data Warehousing" (2008). In July 2007, Inmon was named by Computerworld as one of the ten people that most influenced the first 40 years of the computer industry.[4] Inmon's association with data warehousing stems from the fact that he wrote the first[5] book on data warehousing, held the first conference on data warehousing (with Arnie Barnett), wrote the first magazine column on data warehousing, has written over 1,000 articles on data warehousing in journals and newsletters, created the first fold-out wall chart for data warehousing, and conducted the first classes on data warehousing. In 2012, Inmon developed and made public a technology known as "textual disambiguation". Textual disambiguation applies context to raw text and reformats the raw text and context into a standard database format. Once raw text is passed through textual disambiguation, it can easily and efficiently be accessed and analyzed by standard business intelligence technology. Textual disambiguation is accomplished through the execution of TextualETL. Inmon owns and operates Forest Rim Technology, a company that applies and implements data warehousing solutions executed through textual disambiguation and TextualETL.[6] Bill Inmon has published more than 60 books in nine languages and 2,000 articles on data warehousing and data management.
https://en.wikipedia.org/wiki/Bill_Inmon
SystemVerilog,standardizedasIEEE 1800by theInstitute of Electrical and Electronics Engineers(IEEE), is ahardware descriptionandhardware verification languagecommonly used to model,design,simulate,testandimplementelectronic systems in thesemiconductorandelectronicdesign industry. SystemVerilog is an extension ofVerilog. SystemVerilog started with the donation of theSuperloglanguage toAccellerain 2002 by the startup company Co-Design Automation.[1]The bulk of the verification functionality is based on the OpenVera language donated bySynopsys. In 2005, SystemVerilog was adopted asIEEE Standard1800-2005.[2]In 2009, the standard was merged with the base Verilog (IEEE 1364-2005) standard, creating IEEE Standard 1800-2009. The SystemVerilog standard was subsequently updated in 2012,[3]2017,[4]and most recently in December 2023.[5] The feature-set of SystemVerilog can be divided into two distinct roles: The remainder of this article discusses the features of SystemVerilog not present inVerilog-2005. There are two types of data lifetime specified in SystemVerilog:staticandautomatic. Automatic variables are created the moment program execution comes to the scope of the variable. Static variables are created at the start of the program's execution and keep the same value during the entire program's lifespan, unless assigned a new value during execution. Any variable that is declared inside a task or function without specifying type will be considered automatic. To specify that a variable is static place the "static"keywordin the declaration before the type, e.g., "static int x;". The "automatic" keyword is used in the same way. Enhanced variable typesadd new capability to Verilog's "reg" type: Verilog-1995 and -2001 limit reg variables to behavioral statements such asRTL code. SystemVerilog extends the reg type so it can be driven by a single driver such as gate or module. SystemVerilog names this type "logic" to remind users that it has this extra capability and is not a hardware register. The names "logic" and "reg" are interchangeable. A signal with more than one driver (such as atri-state bufferforgeneral-purpose input/output) needs to be declared a net type such as "wire" so SystemVerilog can resolve the final value. Multidimensionalpacked arraysunify and extend Verilog's notion of "registers" and "memories": Classical Verilog permitted only one dimension to be declared to the left of the variable name. SystemVerilog permits any number of such "packed" dimensions. A variable of packed array type maps 1:1 onto an integer arithmetic quantity. In the example above, each element ofmy_packmay be used in expressions as a six-bit integer. The dimensions to the right of the name (32 in this case) are referred to as "unpacked" dimensions. As inVerilog-2001, any number of unpacked dimensions is permitted. Enumerated data types(enums) allow numeric quantities to be assigned meaningful names. Variables declared to be of enumerated type cannot be assigned to variables of a different enumerated type withoutcasting. This is not true of parameters, which were the preferred implementation technique for enumerated quantities in Verilog-2005: As shown above, the designer can specify an underlying arithmetic type (logic [2:0]in this case) which is used to represent the enumeration value. The meta-values X and Z can be used here, possibly to represent illegal states. The built-in functionname()returns an ASCII string for the current enumerated value, which is useful in validation and testing. 
New integer types: SystemVerilog definesbyte,shortint,intandlongintas two-state signed integral types having 8, 16, 32, and 64 bits respectively. Abittype is a variable-width two-state type that works much likelogic. Two-state types lack theXandZmetavalues of classical Verilog; working with these types may result in faster simulation. Structuresandunionswork much like they do in theC language. SystemVerilog enhancements include thepackedattribute and thetaggedattribute. Thetaggedattribute allows runtime tracking of which member(s) of a union are currently in use. Thepackedattribute causes the structure or union to be mapped 1:1 onto a packed array of bits. The contents ofstructdata types occupy a continuous block of memory with no gaps, similar tobit fieldsin C and C++: As shown in this example, SystemVerilog also supportstypedefs, as in C and C++. SystemVerilog introduces three new procedural blocks intended to modelhardware:always_comb(to modelcombinational logic),always_ff(forflip-flops), andalways_latch(forlatches). Whereas Verilog used a single, general-purposealwaysblock to model different types of hardware structures, each of SystemVerilog's new blocks is intended to model a specific type of hardware, by imposing semantic restrictions to ensure that hardware described by the blocks matches the intended usage of the model. An HDL compiler or verification program can take extra steps to ensure that only the intended type of behavior occurs. Analways_combblock modelscombinational logic. The simulator infers the sensitivity list to be all variables from the contained statements: Analways_latchblock modelslevel-sensitivelatches. Again, the sensitivity list is inferred from the code: Analways_ffblock modelssynchronous logic(especiallyedge-sensitivesequential logic): Electronic design automation(EDA) tools can verify the design's intent by checking that the hardware model does not violate any block usage semantics. For example, the new blocks restrict assignment to a variable by allowing only one source, whereas Verilog'salwaysblock permitted assignment from multiple procedural sources. For small designs, the Verilogportcompactly describes a module's connectivity with the surrounding environment. But major blocks within a large design hierarchy typically possess port counts in the thousands. SystemVerilog introduces concept ofinterfacesto both reduce the redundancy ofport-name declarationsbetween connected modules, as well as group andabstractrelated signals into a user-declared bundle. An additional concept ismodport, which shows the direction of logic connections. Example: The following verification features are typically not synthesizable, meaning they cannot be implemented in hardware based on HDL code. Instead, they assist in the creation of extensible, flexibletest benches. Thestringdata type represents a variable-length textstring. For example: In addition to the static array used in design, SystemVerilog offersdynamic arrays,associative arraysandqueues: A dynamic array works much like an unpacked array, but offers the advantage of beingdynamically allocatedatruntime(as shown above.) Whereas a packed array's size must be known at compile time (from a constant or expression of constants), the dynamic array size can be initialized from another runtime variable, allowing the array to be sized and resize arbitrarily as needed. An associative array can be thought of as abinary search treewith auser-specifiedkey type and data type. 
The key implies anordering; the elements of an associative array can be read out in lexicographic order. Finally, a queue provides much of the functionality of theC++ STLdequetype: elements can be added and removed from either end efficiently. These primitives allow the creation of complex data structures required forscoreboardinga large design. SystemVerilog provides anobject-oriented programmingmodel. In SystemVerilog, classes support asingle-inheritancemodel, but may implement functionality similar to multiple-inheritance through the use of so-called "interface classes" (identical in concept to theinterfacefeature of Java). Classescan be parameterized by type, providing the basic function ofC++ templates. However,template specializationandfunction templatesare not supported. SystemVerilog'spolymorphismfeatures are similar to those of C++: the programmer may specifically write avirtualfunction to have a derived class gain control of the function. Seevirtual functionfor further information. Encapsulationanddata hidingis accomplished using thelocalandprotectedkeywords, which must be applied to any item that is to be hidden. By default, all class properties arepublic. Class instances are dynamically created with thenewkeyword. Aconstructordenoted byfunction newcan be defined. SystemVerilog has automaticgarbage collection, so there is no language facility to explicitly destroy instances created by thenew operator. Example: Integer quantities, defined either in a class definition or as stand-alone variables in some lexical scope, can beassigned random valuesbased on a set of constraints. This feature is useful for creatingrandomized scenarios for verification. Within class definitions, therandandrandcmodifiers signal variables that are to undergo randomization.randcspecifiespermutation-based randomization, where a variable will take on all possible values once before any value is repeated. Variables without modifiers are not randomized. In this example, thefcsfield is not randomized; in practice it will be computed with a CRC generator, and thefcs_corruptfield used to corrupt it to inject FCS errors. The two constraints shown are applicable to conformingEthernet frames. Constraints may be selectively enabled; this feature would be required in the example above to generate corrupt frames. Constraints may be arbitrarily complex, involving interrelationships among variables, implications, and iteration. The SystemVerilogconstraint solveris required to find a solution if one exists, but makes no guarantees as to the time it will require to do so as this is in general anNP-hardproblem (boolean satisfiability). In each SystemVerilog class there are 3 predefined methods for randomization: pre_randomize, randomize and post_randomize. The randomize method is called by the user for randomization of the class variables. The pre_randomize method is called by the randomize method before the randomization and the post_randomize method is called by the randomize method after randomization. The constraint_mode() and the random_mode() methods are used to control the randomization. constraint_mode() is used to turn a specific constraint on and off and the random_mode is used to turn a randomization of a specific variable on or off. The below code describes and procedurally tests anEthernet frame: Assertionsare useful for verifying properties of a design that manifest themselves after a specific condition or state is reached. 
SystemVerilog has its own assertion specification language, similar toProperty Specification Language. The subset of SystemVerilog language constructs that serves assertion is commonly called SystemVerilog Assertion or SVA.[6] SystemVerilog assertions are built fromsequencesandproperties. Properties are a superset of sequences; any sequence may be used as if it were a property, although this is not typically useful. Sequences consist ofboolean expressionsaugmented withtemporal operators. The simplest temporal operator is the##operator which performs a concatenation:[clarification needed] This sequence matches if thegntsignal goes high one clock cycle afterreqgoes high. Note that all sequence operations are synchronous to a clock. Other sequential operators include repetition operators, as well as various conjunctions. These operators allow the designer to express complex relationships among design components. An assertion works by continually attempting to evaluate a sequence or property. An assertion fails if the property fails. The sequence above will fail wheneverreqis low. To accurately express the requirement thatgntfollowreqa property is required: This example shows animplicationoperator|=>. The clause to the left of the implication is called theantecedentand the clause to the right is called theconsequent.Evaluationof an implication starts through repeated attempts to evaluate the antecedent.When the antecedent succeeds, the consequent is attempted, and the success of the assertion depends on the success of the consequent. In this example, the consequent won't be attempted untilreqgoes high, after which the property will fail ifgntis not high on the following clock. In addition to assertions, SystemVerilog supportsassumptionsand coverage of properties. An assumption establishes a condition that aformal logicproving toolmust assume to be true. An assertion specifies a property that must be proven true. Insimulation, both assertions and assumptions are verified against test stimuli. Property coverage allows the verification engineer to verify that assertions are accurately monitoring the design.[vague] Coverageas applied to hardware verification languages refers to the collection of statistics based on sampling events within the simulation. Coverage is used to determine when thedevice under test(DUT) has been exposed to a sufficient variety of stimuli that there is a high confidence that the DUT is functioning correctly. Note that this differs fromcode coveragewhich instruments the design code to ensure that all lines of code in the design have been executed. Functional coverage ensures that all desiredcornerandedge casesin thedesign spacehave beenexplored. A SystemVerilog coverage group creates a database of "bins" that store ahistogramof values of an associated variable. Cross-coverage can also be defined, which creates a histogram representing theCartesian productof multiple variables. Asamplingevent controls when a sample is taken. Thesamplingevent can be a Verilog event, the entry or exit of a block of code, or a call to thesamplemethod of the coverage group. Care is required to ensure that data are sampled only when meaningful. For example: In this example, the verification engineer is interested in the distribution of broadcast and unicast frames, the size/f_type field and the payload size. The ranges in the payload size coverpoint reflect the interesting corner cases, including minimum and maximum size frames. 
A complex test environment consists of reusable verification components that must communicate with one another. Verilog's 'event' primitive allowed different blocks of procedural statements to trigger each other, but enforcing threadsynchronizationwas up to the programmer's (clever) usage. SystemVerilog offers twoprimitivesspecifically for interthread synchronization:mailboxandsemaphore. The mailbox is modeled as aFIFOmessage queue. Optionally, the FIFO can betype-parameterizedso thatonly objects of the specified typemay be passed through it. Typically, objects areclass instancesrepresentingtransactions: elementary operations (for example, sending a frame) that are executed by the verification components. The semaphore is modeled as acounting semaphore. In addition to the new features above, SystemVerilog enhances the usability of Verilog's existing language features. The following are some of these enhancements: Besides this, SystemVerilog allows convenientinterface to foreign languages(like C/C++), bySystemVerilog DPI(Direct Programming Interface). In the design verification role, SystemVerilog is widely used in the chip-design industry. The three largest EDA vendors (Cadence Design Systems,Mentor Graphics,Synopsys) have incorporated SystemVerilog into their mixed-languageHDL simulators. Although no simulator can yet claim support for the entire SystemVerilog Language Reference Manual, making testbenchinteroperabilitya challenge, efforts to promote cross-vendor compatibility are underway.[when?]In 2008, Cadence and Mentor released the Open Verification Methodology, an open-source class-library and usage-framework to facilitate the development of re-usable testbenches and canned verification-IP. Synopsys, which had been the first to publish a SystemVerilog class-library (VMM), subsequently responded by opening its proprietary VMM to the general public. Many third-party providers have announced or already released SystemVerilog verification IP. In thedesign synthesisrole (transformation of a hardware-design description into a gate-netlist), SystemVerilog adoption has been slow. Many design teams use design flows which involve multiple tools from different vendors. Most design teams cannot migrate to SystemVerilog RTL-design until their entire front-end tool suite (linters,formal verificationandautomated test structure generators) support a common language subset.[needs update?]
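The constrained-random and functional-coverage ideas described earlier in this article are not tied to SystemVerilog syntax. The following rough Python sketch (the frame fields, constraints, and coverage bins are invented for the example and are not the article's SystemVerilog listings) shows the same bookkeeping: draw random stimulus, reject values that violate the constraints, and bin what was actually exercised.

import random
from collections import Counter

def randomize_frame():
    """Draw a random 'Ethernet-like' frame, retrying until the constraints hold
    (a crude stand-in for a real constraint solver)."""
    while True:
        frame = {
            "dst_broadcast": random.choice([True, False]),
            "payload_size": random.randint(0, 2000),
        }
        # Constraint (invented for illustration): legal payload sizes only.
        if 46 <= frame["payload_size"] <= 1500:
            return frame

def payload_bin(size):
    """Coverage bins: corner cases (min/max) plus coarse ranges."""
    if size == 46:
        return "min"
    if size == 1500:
        return "max"
    return "small" if size < 512 else "large"

coverage = Counter()
for _ in range(10_000):
    f = randomize_frame()
    # Cross coverage: a histogram over the Cartesian product of sampled quantities.
    coverage[(f["dst_broadcast"], payload_bin(f["payload_size"]))] += 1

for bin_key, hits in sorted(coverage.items()):
    print(bin_key, hits)

A real constraint solver does far better than rejection sampling, and SystemVerilog covergroups automate the binning and cross-coverage, but the intent, measuring whether randomized stimulus has hit the interesting corner cases, is the same.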
https://en.wikipedia.org/wiki/SystemVerilog#Constrained_random_generation
Combinatory logicis a notation to eliminate the need forquantifiedvariables inmathematical logic. It was introduced byMoses Schönfinkel[1]andHaskell Curry,[2]and has more recently been used incomputer scienceas a theoreticalmodel of computationand also as a basis for the design offunctional programming languages. It is based oncombinators, which were introduced bySchönfinkelin 1920 with the idea of providing an analogous way to build up functions—and to remove any mention of variables—particularly inpredicate logic. A combinator is ahigher-order functionthat uses onlyfunction applicationand earlier defined combinators to define a result from its arguments. Combinatory logic was originally intended as a 'pre-logic' that would clarify the role ofquantified variablesin logic, essentially by eliminating them. Another way of eliminating quantified variables isQuine'spredicate functor logic. While theexpressive powerof combinatory logic typically exceeds that offirst-order logic, the expressive power ofpredicate functor logicis identical to that of first order logic (Quine 1960, 1966, 1976). The original inventor of combinatory logic,Moses Schönfinkel, published nothing on combinatory logic after his original 1924 paper.Haskell Curryrediscovered the combinators while working as an instructor atPrinceton Universityin late 1927.[3]In the late 1930s,Alonzo Churchand his students at Princeton invented a rival formalism for functional abstraction, thelambda calculus, which proved more popular than combinatory logic. The upshot of these historical contingencies was that until theoretical computer science began taking an interest in combinatory logic in the 1960s and 1970s, nearly all work on the subject was byHaskell Curryand his students, or byRobert FeysinBelgium. Curry and Feys (1958), and Curryet al.(1972) survey the early history of combinatory logic. For a more modern treatment of combinatory logic and the lambda calculus together, see the book byBarendregt,[4]which reviews themodelsDana Scottdevised for combinatory logic in the 1960s and 1970s. Incomputer science, combinatory logic is used as a simplified model ofcomputation, used incomputability theoryandproof theory. Despite its simplicity, combinatory logic captures many essential features of computation. Combinatory logic can be viewed as a variant of thelambda calculus, in which lambda expressions (representing functional abstraction) are replaced by a limited set ofcombinators, primitive functions withoutfree variables. It is easy to transform lambda expressions into combinator expressions, and combinator reduction is much simpler than lambda reduction. Hence combinatory logic has been used to model somenon-strictfunctional programminglanguages andhardware. The purest form of this view is the programming languageUnlambda, whose sole primitives are the S and K combinators augmented with character input/output. Although not a practical programming language, Unlambda is of some theoretical interest. Combinatory logic can be given a variety of interpretations. Many early papers by Curry showed how to translate axiom sets for conventional logic into combinatory logic equations.[5]Dana Scottin the 1960s and 1970s showed how to marrymodel theoryand combinatory logic. 
Lambda calculus is concerned with objects called lambda-terms, which can be represented by the following three forms of strings: a variable v, an abstraction λv.E₁, or an application (E₁ E₂), where v is a variable name drawn from a predefined infinite set of variable names, and E₁ and E₂ are lambda-terms. Terms of the form λv.E₁ are called abstractions. The variable v is called the formal parameter of the abstraction, and E₁ is the body of the abstraction. The term λv.E₁ represents the function which, applied to an argument, binds the formal parameter v to the argument and then computes the resulting value of E₁; that is, it returns E₁, with every occurrence of v replaced by the argument. Terms of the form (E₁ E₂) are called applications. Applications model function invocation or execution: the function represented by E₁ is to be invoked, with E₂ as its argument, and the result is computed. If E₁ (sometimes called the applicand) is an abstraction, the term may be reduced: E₂, the argument, may be substituted into the body of E₁ in place of the formal parameter of E₁, and the result is a new lambda term which is equivalent to the old one. If a lambda term contains no subterms of the form ((λv.E₁) E₂) then it cannot be reduced, and is said to be in normal form. The expression E[v := a] represents the result of taking the term E and replacing all free occurrences of v in it with a. Thus we write ((λv.E) a) = E[v := a]. By convention, we take (a b c) as shorthand for ((a b) c) (i.e., application is left-associative). The motivation for this definition of reduction is that it captures the essential behavior of all mathematical functions. For example, consider the function that computes the square of a number. We might write square = λx.(x ∗ x) (using "∗" to indicate multiplication). Here x is the formal parameter of the function. To evaluate the square for a particular argument, say 3, we insert it into the definition in place of the formal parameter: (square 3) = ((λx.(x ∗ x)) 3) = (x ∗ x)[x := 3] = 3 ∗ 3. To evaluate the resulting expression 3 ∗ 3, we would have to resort to our knowledge of multiplication and the number 3. Since any computation is simply a composition of the evaluation of suitable functions on suitable primitive arguments, this simple substitution principle suffices to capture the essential mechanism of computation. Moreover, in lambda calculus, notions such as '3' and '∗' can be represented without any need for externally defined primitive operators or constants. It is possible to identify terms in lambda calculus which, when suitably interpreted, behave like the number 3 and like the multiplication operator (see Church encoding). Lambda calculus is known to be computationally equivalent in power to many other plausible models of computation (including Turing machines); that is, any calculation that can be accomplished in any of these other models can be expressed in lambda calculus, and vice versa. According to the Church–Turing thesis, both models can express any possible computation.
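The substitution step in the squaring example corresponds directly to how any language with first-class functions evaluates it; a throwaway Python sketch (illustrative only, not part of the article):

square = lambda x: x * x          # the abstraction  λx.(x ∗ x)
assert square(3) == 9             # the application (square 3) reduces to 3 ∗ 3 = 9

# Currying: a two-argument function as nested one-argument abstractions,
# mirroring the term λx.λy.(y x) used as a worked example further below.
flip_apply = lambda x: (lambda y: y(x))
assert flip_apply(3)(square) == 9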
It is perhaps surprising that lambda-calculus can represent any conceivable computation using only the simple notions of function abstraction and application based on simple textual substitution of terms for variables. But even more remarkable is that abstraction is not even required. Combinatory logic is a model of computation equivalent to lambda calculus, but without abstraction. The advantage of this is that evaluating expressions in lambda calculus is quite complicated because the semantics of substitution must be specified with great care to avoid variable capture problems. In contrast, evaluating expressions in combinatory logic is much simpler, because there is no notion of substitution. Since abstraction is the only way to manufacture functions in the lambda calculus, something must replace it in the combinatory calculus. Instead of abstraction, combinatory calculus provides a limited set of primitive functions out of which other functions may be built. A combinatory term has one of the following forms: a variable, a primitive function (combinator), or an application (E₁ E₂) of one combinatory term to another. The primitive functions are combinators, or functions that, when seen as lambda terms, contain no free variables. To shorten the notation, a general convention is that (E₁ E₂ E₃ ... Eₙ), or even E₁ E₂ E₃ ... Eₙ, denotes the term (...((E₁ E₂) E₃) ... Eₙ). This is the same general convention (left-associativity) as for multiple application in lambda calculus. In combinatory logic, each primitive combinator comes with a reduction rule of the form (P x₁ ... xₙ) = E, where E is a term mentioning only variables from the set {x₁ ... xₙ}. It is in this way that primitive combinators behave as functions. The simplest example of a combinator is I, the identity combinator, defined by (I x) = x for all terms x. Another simple combinator is K, which manufactures constant functions: (K x) is the function which, for any argument, returns x, so we say ((K x) y) = x for all terms x and y, or, following the convention for multiple application, (K x y) = x. A third combinator is S, which is a generalized version of application: (S x y z) = (x z (y z)). S applies x to y after first substituting z into each of them. Or put another way, x is applied to y inside the environment z. Given S and K, I itself is unnecessary, since it can be built from the other two: ((S K K) x) = (K x (K x)) = x for any term x. Note that although ((S K K) x) = (I x) for any x, (S K K) itself is not equal to I. We say the terms are extensionally equal. Extensional equality captures the mathematical notion of the equality of functions: that two functions are equal if they always produce the same results for the same arguments. In contrast, the terms themselves, together with the reduction of primitive combinators, capture the notion of intensional equality of functions: that two functions are equal only if they have identical implementations up to the expansion of primitive combinators. There are many ways to implement an identity function; (S K K) and I are among these ways. (S K S) is yet another. We will use the word equivalent to indicate extensional equality, reserving equal for identical combinatorial terms. A more interesting combinator is the fixed point combinator or Y combinator, which can be used to implement recursion. S and K can be composed to produce combinators that are extensionally equal to any lambda term, and therefore, by Church's thesis, to any computable function whatsoever. The proof is to present a transformation, T[ ], which converts an arbitrary lambda term into an equivalent combinator.
T[ ] may be defined as follows: (1) T[x] = x; (2) T[(E₁ E₂)] = (T[E₁] T[E₂]); (3) T[λx.x] = I; (4) T[λx.E] = (K T[E]) if x is not free in E; (5) T[λx.λy.E] = T[λx.T[λy.E]] if x is free in E; (6) T[λx.(E₁ E₂)] = (S T[λx.E₁] T[λx.E₂]) if x is free in E₁ or E₂. Note that T[ ] as given is not a well-typed mathematical function, but rather a term rewriter: although it eventually yields a combinator, the transformation may generate intermediary expressions that are neither lambda terms nor combinators, via rule (5). This process is also known as abstraction elimination. This definition is exhaustive: any lambda expression will be subject to exactly one of these rules (see Summary of lambda calculus above). It is related to the process of bracket abstraction, which takes an expression E built from variables and application and produces a combinator expression [x]E in which the variable x is not free, such that ([x]E x) = E holds. A very simple algorithm for bracket abstraction is defined by induction on the structure of expressions as follows:[6] [x]x = I; [x]E = (K E) when E is a variable or primitive combinator other than x; and [x](E₁ E₂) = (S [x]E₁ [x]E₂). Bracket abstraction induces a translation from lambda terms to combinator expressions, by interpreting lambda-abstractions using the bracket abstraction algorithm. For example, we will convert the lambda term λx.λy.(y x) to a combinatorial term: T[λx.λy.(y x)] = T[λx.T[λy.(y x)]] = T[λx.(S T[λy.y] T[λy.x])] = T[λx.(S I (K x))] = (S T[λx.(S I)] T[λx.(K x)]) = (S (K (S I)) (S T[λx.K] T[λx.x])) = (S (K (S I)) (S (K K) I)). If we apply this combinatorial term to any two terms x and y (by feeding them in a queue-like fashion into the combinator 'from the right'), it reduces as follows: (S (K (S I)) (S (K K) I) x y) = ((K (S I) x) ((S (K K) I) x) y) = ((S I) ((K K x) (I x)) y) = ((S I) (K x) y) = ((I y) ((K x) y)) = (y x). The combinatory representation, (S (K (S I)) (S (K K) I)), is much longer than the representation as a lambda term, λx.λy.(y x). This is typical. In general, the T[ ] construction may expand a lambda term of length n to a combinatorial term of length Θ(n³).[7] The T[ ] transformation is motivated by a desire to eliminate abstraction. Two special cases, rules 3 and 4, are trivial: λx.x is clearly equivalent to I, and λx.E is clearly equivalent to (K T[E]) if x does not appear free in E. The first two rules are also simple: variables convert to themselves, and applications, which are allowed in combinatory terms, are converted to combinators simply by converting the applicand and the argument to combinators. It is rules 5 and 6 that are of interest. Rule 5 simply says that to convert a complex abstraction to a combinator, we must first convert its body to a combinator, and then eliminate the abstraction. Rule 6 actually eliminates the abstraction. λx.(E₁ E₂) is a function which takes an argument, say a, and substitutes it into the lambda term (E₁ E₂) in place of x, yielding (E₁ E₂)[x := a]. But substituting a into (E₁ E₂) in place of x is just the same as substituting it into both E₁ and E₂, so (λx.(E₁ E₂)) a = ((λx.E₁) a) ((λx.E₂) a). By extensional equality, λx.(E₁ E₂) = S (λx.E₁) (λx.E₂). Therefore, to find a combinator equivalent to λx.(E₁ E₂), it is sufficient to find a combinator equivalent to (S λx.E₁ λx.E₂), and (S T[λx.E₁] T[λx.E₂]) evidently fits the bill. E₁ and E₂ each contain strictly fewer applications than (E₁ E₂), so the recursion must terminate in a lambda term with no applications at all: either a variable, or a term of the form λx.E. The combinators generated by the T[ ] transformation can be made smaller if we take into account the η-reduction rule T[λx.(E x)] = T[E] (provided x is not free in E). λx.(E x) is the function which takes an argument, x, and applies the function E to it; this is extensionally equal to the function E itself. It is therefore sufficient to convert E to combinatorial form. Taking this simplification into account, the example above becomes (S (K (S I)) K). This combinator is equivalent to the earlier, longer one: both behave as λx.λy.(y x). Similarly, the original version of the T[ ] transformation transformed the identity function λf.λx.(f x) into (S (S (K S) (S (K K) I)) (K I)). With the η-reduction rule, λf.λx.(f x) is transformed into I. There are one-point bases from which every combinator can be composed extensionally equal to any lambda term.
A simple example of such a basis is {X} where: It is not difficult to verify that: Since {K,S} is a basis, it follows that {X} is a basis too. TheIotaprogramming language usesXas its sole combinator. Another simple example of a one-point basis is: The simplest known one-point basis is a slight modification ofS: In fact, there exist infinitely many such bases.[8] In addition toSandK,Schönfinkel (1924)included two combinators which are now calledBandC, with the following reductions: He also explains how they in turn can be expressed using onlySandK: These combinators are extremely useful when translating predicate logic or lambda calculus into combinator expressions. They were also used byCurry, and much later byDavid Turner, whose name has been associated with their computational use. Using them, we can extend the rules for the transformation as follows: UsingBandCcombinators, the transformation ofλx.λy.(yx) looks like this: And indeed, (CIxy) does reduce to (yx): The motivation here is thatBandCare limited versions ofS. WhereasStakes a value and substitutes it into both the applicand and its argument before performing the application,Cperforms the substitution only in the applicand, andBonly in the argument. The modern names for the combinators come fromHaskell Curry's doctoral thesis of 1930 (seeB, C, K, W System). InSchönfinkel's original paper, what we now callS,K,I,BandCwere calledS,C,I,Z, andTrespectively. The reduction in combinator size that results from the new transformation rules can also be achieved without introducingBandC, as demonstrated in Section 3.2 ofTromp (2008). A distinction must be made between theCLKas described in this article and theCLIcalculus. The distinction corresponds to that between the λKand the λIcalculus. Unlike the λKcalculus, the λIcalculus restricts abstractions to: As a consequence, combinatorKis not present in the λIcalculus nor in theCLIcalculus. The constants ofCLIare:I,B,CandS, which form a basis from which allCLIterms can be composed (modulo equality). Every λIterm can be converted into an equalCLIcombinator according to rules similar to those presented above for the conversion of λKterms intoCLKcombinators. See chapter 9 in Barendregt (1984). The conversionL[ ] from combinatorial terms to lambda terms is trivial: Note, however, that this transformation is not the inverse transformation of any of the versions ofT[ ] that we have seen. Anormal formis any combinatory term in which the primitive combinators that occur, if any, are not applied to enough arguments to be simplified. It is undecidable whether a general combinatory term has a normal form; whether two combinatory terms are equivalent, etc. This can be shown in a similar way as for the corresponding problems for lambda terms. The undecidable problems above (equivalence, existence of normal form, etc.) take as input syntactic representations of terms under a suitable encoding (e.g.,Church encoding). One may also consider a toy trivial computation model where we "compute" properties of terms by means of combinators applied directly to the terms themselves as arguments, rather than to their syntactic representations. More precisely, let apredicatebe a combinator that, when applied, returns eitherTorF(whereTandFrepresent the conventionalChurch encodings of true and false,λx.λy.xandλx.λy.y, transformed into combinatory logic; the combinatory versions haveT=KandF= (KI)). A predicateNisnontrivialif there are two argumentsAandBsuch thatNA=TandNB=F. 
A combinatorNiscompleteifNMhas a normal form for every argumentM. An analogue of Rice's theorem for this toy model then says that every complete predicate is trivial. The proof of this theorem is rather simple.[9] By reductio ad absurdum. Suppose there is a complete non trivial predicate, sayN. BecauseNis supposed to be non trivial there are combinatorsAandBsuch that Fixed point theorem gives: ABSURDUM = (NEGATION ABSURDUM), for BecauseNis supposed to be complete either: Hence (NABSURDUM) is neitherTnorF, which contradicts the presupposition thatNwould be a complete non trivial predicate.Q.E.D. From this undefinability theorem it immediately follows that there is no complete predicate that can discriminate between terms that have a normal form and terms that do not have a normal form. It also follows that there isnocomplete predicate, say EQUAL, such that: If EQUAL would exist, then for allA,λx.(EQUALx A) would have to be a complete non trivial predicate. However, note that it also immediately follows from this undefinability theorem that many properties of terms that are obviously decidable are not definable by complete predicates either: e.g., there is no predicate that could tell whether the first primitive function letter occurring in a term is aK. This shows that definability by predicates is a not a reasonable model of decidability. David Turner used his combinators to implement theSASL programming language. Kenneth E. Iversonused primitives based on Curry's combinators in hisJ programming language, a successor toAPL. This enabled what Iverson calledtacit programming, that is, programming in functional expressions containing no variables, along with powerful tools for working with such programs. It turns out that tacit programming is possible in any APL-like language with user-defined operators.[10] TheCurry–Howard isomorphismimplies a connection between logic and programming: every proof of a theorem ofintuitionistic logiccorresponds to a reduction of a typed lambda term, and conversely. Moreover, theorems can be identified with function type signatures. Specifically, a typed combinatory logic corresponds to aHilbert systeminproof theory. TheKandScombinators correspond to the axioms and function application corresponds to the detachment (modus ponens) rule The calculus consisting ofAK,AS, andMPis complete for the implicational fragment of the intuitionistic logic, which can be seen as follows. Consider the setWof all deductively closed sets of formulas, ordered byinclusion. Then⟨W,⊆⟩{\displaystyle \langle W,\subseteq \rangle }is an intuitionisticKripke frame, and we define a model⊩{\displaystyle \Vdash }in this frame by This definition obeys the conditions on satisfaction of →: on one hand, ifX⊩A→B{\displaystyle X\Vdash A\to B}, andY∈W{\displaystyle Y\in W}is such thatY⊇X{\displaystyle Y\supseteq X}andY⊩A{\displaystyle Y\Vdash A}, thenY⊩B{\displaystyle Y\Vdash B}by modus ponens. On the other hand, ifX⊮A→B{\displaystyle X\not \Vdash A\to B}, thenX,A⊬B{\displaystyle X,A\not \vdash B}by thededuction theorem, thus the deductive closure ofX∪{A}{\displaystyle X\cup \{A\}}is an elementY∈W{\displaystyle Y\in W}such thatY⊇X{\displaystyle Y\supseteq X},Y⊩A{\displaystyle Y\Vdash A}, andY⊮B{\displaystyle Y\not \Vdash B}. LetAbe any formula which is not provable in the calculus. ThenAdoes not belong to the deductive closureXof the empty set, thusX⊮A{\displaystyle X\not \Vdash A}, andAis not intuitionistically valid.
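The worked conversions referred to earlier in this article, namely the translation of λx.λy.(y x) into S and K combinators, its reduction when applied to two arguments, and the shorter form obtained with the B, C and η rules, are mentioned but not shown above. The following derivations are a reconstruction of the standard steps (the rule numbers refer to the definition of T[ ] discussed above):

```latex
\begin{aligned}
T[\lambda x.\lambda y.(y\,x)]
  &= T[\lambda x.\,T[\lambda y.(y\,x)]]                      &&\text{(rule 5)}\\
  &= T[\lambda x.(S\,T[\lambda y.y]\,T[\lambda y.x])]        &&\text{(rule 6)}\\
  &= T[\lambda x.(S\,I\,(K\,x))]                             &&\text{(rules 3, 4, 1)}\\
  &= (S\,T[\lambda x.(S\,I)]\,T[\lambda x.(K\,x)])           &&\text{(rule 6)}\\
  &= (S\,(K\,(S\,I))\,(S\,T[\lambda x.K]\,T[\lambda x.x]))   &&\text{(rules 4, 6)}\\
  &= (S\,(K\,(S\,I))\,(S\,(K\,K)\,I))                        &&\text{(rules 4, 3)}
\end{aligned}
```

Applying this combinator to two arguments x and y (fed in from the right) then reduces as expected:

```latex
\begin{aligned}
(S\,(K\,(S\,I))\,(S\,(K\,K)\,I))\,x\,y
  &= (K\,(S\,I)\,x)\,((S\,(K\,K)\,I)\,x)\,y\\
  &= (S\,I)\,((K\,K\,x)\,(I\,x))\,y\\
  &= (S\,I)\,(K\,x)\,y\\
  &= (I\,y)\,(K\,x\,y)\\
  &= y\,x
\end{aligned}
```

With the additional rules for B and C (use C when the abstracted variable is free only in the applicand, B when it is free only in the argument) together with η-reduction, the same term translates far more compactly:

```latex
\begin{aligned}
T[\lambda x.\lambda y.(y\,x)]
  &= T[\lambda x.\,T[\lambda y.(y\,x)]]\\
  &= T[\lambda x.(C\,T[\lambda y.y]\,x)]  &&\text{($y$ is free only in the applicand)}\\
  &= T[\lambda x.(C\,I\,x)]\\
  &= (C\,I)                               &&\text{($\eta$-reduction)}\\[1ex]
(C\,I\,x\,y) &= (I\,y\,x) = (y\,x)
\end{aligned}
```

which matches the results quoted above and confirms that (C I x y) reduces to (y x).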
https://en.wikipedia.org/wiki/Combinatory_logic
Finitismis aphilosophy of mathematicsthat accepts the existence only offinitemathematical objects. It is best understood in comparison to the mainstream philosophy of mathematics where infinite mathematical objects (e.g.,infinite sets) are accepted as existing. The main idea of finitistic mathematics is not accepting the existence of infinite objects such as infinite sets. While allnatural numbersare accepted as existing, thesetof all natural numbers is not considered to exist as a mathematical object. Thereforequantificationover infinite domains is not considered meaningful. The mathematical theory often associated with finitism isThoralf Skolem'sprimitive recursive arithmetic. The introduction of infinite mathematical objects occurred a few centuries ago when the use of infinite objects was already a controversial topic among mathematicians. The issue entered a new phase whenGeorg Cantorin 1874 introduced what is now callednaive set theoryand used it as a base for his work ontransfinite numbers. When paradoxes such asRussell's paradox,Berry's paradoxand theBurali-Forti paradoxwere discovered in Cantor's naive set theory, the issue became a heated topic among mathematicians. There were various positions taken by mathematicians. All agreed about finite mathematical objects such as natural numbers. However there were disagreements regarding infinite mathematical objects. One position was theintuitionistic mathematicsthat was advocated byL. E. J. Brouwer, which rejected the existence of infinite objects until they are constructed. Another position was endorsed byDavid Hilbert: finite mathematical objects are concrete objects, infinite mathematical objects are ideal objects, and accepting ideal mathematical objects does not cause a problem regarding finite mathematical objects. More formally, Hilbert believed that it is possible to show that any theorem about finite mathematical objects that can be obtained using ideal infinite objects can be also obtained without them. Therefore allowing infinite mathematical objects would not cause a problem regarding finite objects. This led toHilbert's programof proving bothconsistencyandcompletenessof set theory using finitistic means as this would imply that adding ideal mathematical objects isconservativeover the finitistic part. Hilbert's views are also associated with theformalist philosophy of mathematics. Hilbert's goal of proving the consistency and completeness of set theory or even arithmetic through finitistic means turned out to be an impossible task due toKurt Gödel'sincompleteness theorems. However,Harvey Friedman'sgrand conjecturewould imply that most mathematical results are provable using finitistic means. Hilbert did not give a rigorous explanation of what he considered finitistic and referred to as elementary. However, based on his work withPaul Bernayssome experts such asTait (1981)have argued thatprimitive recursive arithmeticcan be considered an upper bound on what Hilbert considered finitistic mathematics.[1] As a result of Gödel's theorems, as it became clear that there is no hope of proving both the consistency and completeness of mathematics, and with the development of seemingly consistentaxiomatic set theoriessuch asZermelo–Fraenkel set theory, most modern mathematicians do not focus on this topic. 
In her bookThe Philosophy of Set Theory,Mary Tilescharacterized those who allowpotentially infiniteobjects asclassical finitists, and those who do not allow potentially infinite objects asstrict finitists: for example, a classical finitist would allow statements such as "every natural number has asuccessor" and would accept the meaningfulness ofinfinite seriesin the sense oflimitsof finite partial sums, while a strict finitist would not. Historically, the written history of mathematics was thus classically finitist until Cantor created the hierarchy oftransfinitecardinalsat the end of the 19th century. Leopold Kroneckerremained a strident opponent to Cantor's set theory:[2] Die ganzen Zahlen hat der liebe Gott gemacht, alles andere ist Menschenwerk.God created the integers; all else is the work of man. Reuben Goodsteinwas another proponent of finitism. Some of his work involved building up toanalysisfrom finitist foundations. Although he denied it, much ofLudwig Wittgenstein's writing on mathematics has a strong affinity with finitism.[4] If finitists are contrasted withtransfinitists(proponents of e.g.Georg Cantor's hierarchy of infinities), then alsoAristotlemay be characterized as a finitist. Aristotle especially promoted thepotential infinityas a middle option between strict finitism andactual infinity(the latter being an actualization of something never-ending in nature, in contrast with the Cantorist actual infinity consisting of the transfinitecardinalandordinalnumbers, which have nothing to do with the things in nature): But on the other hand to suppose that the infinite does not exist in any way leads obviously to many impossible consequences: there will be a beginning and end of time, a magnitude will not be divisible into magnitudes, number will not be infinite. If, then, in view of the above considerations, neither alternative seems possible, an arbiter must be called in. Ultrafinitism(also known as ultraintuitionism) has an even more conservative attitude towards mathematical objects than finitism, and has objections to the existence of finite mathematical objects when they are too large. Towards the end of the 20th centuryJohn Penn Mayberrydeveloped a system of finitary mathematics which he called "Euclidean Arithmetic". The most striking tenet of his system is a complete and rigorous rejection of the special foundational status normally accorded to iterative processes, including in particular the construction of the natural numbers by the iteration "+1". Consequently Mayberry is in sharp dissent from those who would seek to equate finitary mathematics withPeano arithmeticor any of its fragments such asprimitive recursive arithmetic.
https://en.wikipedia.org/wiki/Finitism
A peripheral DMA controller (PDC) is a feature found in modern microcontrollers. It is typically a FIFO with automated control features for driving peripheral modules included in the microcontroller, such as UARTs. This takes a large burden off the operating system and reduces the number of interrupts required to service and control these types of functions.
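As an illustration of why such a controller reduces interrupt load, the following toy Python sketch (not real driver code; the message, the 64-byte buffer size and the one-interrupt-per-byte assumption are invented for the example) compares interrupt-driven byte-at-a-time UART output with a PDC that interrupts only once per completed buffer:

```python
# Toy illustration only: compare the number of CPU interrupts needed to
# transmit a message over a UART with and without a peripheral DMA
# controller (PDC).  Message and buffer size are arbitrary assumptions.

MESSAGE = b"Hello, peripheral DMA controller!"

def interrupts_without_pdc(message: bytes) -> int:
    # Classic interrupt-driven I/O: the UART raises one
    # "transmit register empty" interrupt per byte written.
    return len(message)

def interrupts_with_pdc(message: bytes, buffer_size: int = 64) -> int:
    # With a PDC, the CPU hands the controller a buffer pointer and a byte
    # count; the PDC feeds the UART FIFO by itself and interrupts the CPU
    # only once per completed buffer.
    full, remainder = divmod(len(message), buffer_size)
    return full + (1 if remainder else 0)

if __name__ == "__main__":
    print("without PDC:", interrupts_without_pdc(MESSAGE), "interrupts")
    print("with PDC:   ", interrupts_with_pdc(MESSAGE), "interrupt(s)")
```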
https://en.wikipedia.org/wiki/Peripheral_DMA_controller
ThePackrat parseris a type ofparserthat shares similarities with therecursive descent parserin its construction. However, it differs because it takesparsing expression grammars (PEGs)as input rather thanLL grammars.[1] In 1970, Alexander Birman laid the groundwork for packrat parsing by introducing the "TMG recognition scheme" (TS), and "generalized TS" (gTS). TS was based upon Robert M. McClure'sTMGcompiler-compiler, and gTS was based upon Dewey Val Schorre'sMETAcompiler-compiler. Birman's work was later refined by Aho and Ullman; and renamed as Top-Down Parsing Language (TDPL), and Generalized TDPL (GTDPL), respectively. These algorithms were the first of their kind to employ deterministic top-down parsing with backtracking.[2][3] Bryan Ford developed PEGs as an expansion of GTDPL and TS. UnlikeCFGs, PEGs are unambiguous and can match well with machine-oriented languages. PEGs, similar to GTDPL and TS, can also express allLL(k)andLR(k). Bryan also introduced Packrat as a parser that usesmemoizationtechniques on top of a simple PEG parser. This was done because PEGs have an unlimitedlookaheadcapability resulting in a parser withexponential timeperformance in the worst case.[2][3] Packrat keeps track of the intermediate results for all mutually recursive parsing functions. Each parsing function is only called once at a specific input position. In some instances of packrat implementation, if there is insufficient memory, certain parsing functions may need to be called multiple times at the same input position, causing the parser to take longer than linear time.[4] The packrat parser takes in input the same syntax as PEGs: a simple PEG is composed of terminal and nonterminal symbols, possibly interleaved with operators that compose one or several derivation rules.[2] αβ{\displaystyle \alpha \beta } Failure:Ifα{\displaystyle \alpha }orβ{\displaystyle \beta }are not recognized Consumed:α{\displaystyle \alpha }andβ{\displaystyle \beta }in case of success α/β/γ{\displaystyle \alpha /\beta /\gamma } Failure:All of{α,β,γ}{\displaystyle \{\alpha ,\beta ,\gamma \}}do not match Consumed:The atomic expression that has generated a success so if multiple succeed the first one is always returned &α{\displaystyle \&\alpha } Failure:Ifα{\displaystyle \alpha }is not recognized Consumed:No input is consumed !α{\displaystyle !\alpha } Failure:Ifα{\displaystyle \alpha }is recognized Consumed:No input is consumed α+{\displaystyle \alpha +} Failure:Ifα{\displaystyle \alpha }is not recognized Consumed:The maximum number thatα{\displaystyle \alpha }is recognized α∗{\displaystyle \alpha *} Failure:Cannot fail Consumed:The maximum number thatα{\displaystyle \alpha }is recognized α?{\displaystyle \alpha ?} Failure:Cannot fail Consumed:α{\displaystyle \alpha }if it is recognized [a−b{\displaystyle a-b}] Failure:If no terminal inside of[a−b]{\displaystyle [a-b]}can be recognized Consumed:c{\displaystyle c}if it is recognized .{\displaystyle .} Failure:If no character in the input Consumed:any character in the input A derivation rule is composed by a nonterminal symbol and an expressionS→α{\displaystyle S\rightarrow \alpha }. A special expressionαs{\displaystyle \alpha _{s}}is the starting point of the grammar.[2]In case noαs{\displaystyle \alpha _{s}}is specified, the first expression of the first rule is used. An input string is considered accepted by the parser if theαs{\displaystyle \alpha _{s}}is recognized. 
As a side-effect, a stringx{\displaystyle x}can be recognized by the parser even if it was not fully consumed.[2] An extreme case of this rule is that the grammarS→x∗{\displaystyle S\rightarrow x*}matches any string. This can be avoided by rewriting the grammar asS→x∗!.{\displaystyle S\rightarrow x*!.} {S→A/B/DA→'a'S'a'B→'b'S'b'D→('0'−'9')?{\displaystyle {\begin{cases}S\rightarrow A/B/D\\A\rightarrow {\texttt {'a'}}\ S\ {\texttt {'a'}}\\B\rightarrow {\texttt {'b'}}\ S\ {\texttt {'b'}}\\D\rightarrow ({\texttt {'0'}}-{\texttt {'9'}})?\end{cases}}} This grammar recognizes apalindromeover the alphabet{a,b}{\displaystyle \{a,b\}}, with an optional digit in the middle. Example strings accepted by the grammar include:'aa'{\displaystyle {\texttt {'aa'}}}and'aba3aba'{\displaystyle {\texttt {'aba3aba'}}}. Left recursion happens when a grammar production refers to itself as its left-most element, either directly or indirectly. Since Packrat is a recursive descent parser, it cannot handle left recursion directly.[5]During the early stages of development, it was found that a production that is left-recursive can be transformed into a right-recursive production.[6]This modification significantly simplifies the task of a Packrat parser. Nonetheless, if there is an indirect left recursion involved, the process of rewriting can be quite complex and challenging. If the time complexity requirements are loosened from linear tosuperlinear, it is possible to modify the memoization table of a Packrat parser to permit left recursion, without altering the input grammar.[5] The iterative combinatorα+{\displaystyle \alpha +},α∗{\displaystyle \alpha *}, needs special attention when used in a Packrat parser. As a matter of fact, the use of iterative combinators introduces asecretrecursion that does not record intermediate results in the outcome matrix. This can lead to the parser operating with a superlinear behaviour. This problem can be resolved apply the following transformation:[1] With this transformation, the intermediate results can be properly memoized. Memoization is anoptimizationtechnique in computing that aims to speed up programs by storing the results of expensive function calls. This technique essentially works bycachingthe results so that when the same inputs occur again, the cached result is simply returned, thus avoiding the time-consuming process of re-computing.[7]When using packrat parsing and memoization, it's noteworthy that the parsing function for each nonterminal is solely based on the input string. It does not depend on any information gathered during the parsing process. Essentially, memoization table entries do not affect or rely on the parser's specific state at any given time.[8] Packrat parsing stores results in a matrix or similar data structure that allows for quick look-ups and insertions. When a production is encountered, the matrix is checked to see if it has already occurred. If it has, the result is retrieved from the matrix. If not, the production is evaluated, the result is inserted into the matrix, and then returned.[9]When evaluating the entirem∗n{\displaystyle m*n}matrix in a tabular approach, it would requireΘ(mn){\displaystyle \Theta (mn)}space.[9]Here,m{\displaystyle m}represents the number of nonterminals, andn{\displaystyle n}represents the input string size. In a naïve implementation, the entire table can be derived from the input string starting from the end of the string. 
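As a concrete illustration of this memoization, the following Python sketch (an illustrative recognizer, not taken from the cited sources) implements the palindrome grammar above in packrat style, caching each (nonterminal, position) result so that no parsing function is evaluated twice at the same position; unlike the bare grammar it also requires the whole input to be consumed, in the spirit of the S → x* !. rewriting mentioned earlier:

```python
from functools import lru_cache

def accepts(text: str) -> bool:
    """Packrat-style recognizer for  S -> A / B / D,  A -> 'a' S 'a',
    B -> 'b' S 'b',  D -> ('0'-'9')? .  Each parse function returns the
    position reached on success, or None on failure, and is evaluated at
    most once per input position thanks to memoization."""

    def literal(i, ch):                  # match a single terminal character
        if i is not None and i < len(text) and text[i] == ch:
            return i + 1
        return None

    @lru_cache(maxsize=None)             # the packrat memoization table
    def S(i):                            # S -> A / B / D   (ordered choice)
        for alternative in (A, B, D):
            r = alternative(i)
            if r is not None:
                return r
        return None

    @lru_cache(maxsize=None)
    def A(i):                            # A -> 'a' S 'a'
        j = literal(i, 'a')
        if j is None:
            return None
        return literal(S(j), 'a')

    @lru_cache(maxsize=None)
    def B(i):                            # B -> 'b' S 'b'
        j = literal(i, 'b')
        if j is None:
            return None
        return literal(S(j), 'b')

    @lru_cache(maxsize=None)
    def D(i):                            # D -> ('0'-'9')?   (cannot fail)
        return i + 1 if i < len(text) and text[i].isdigit() else i

    return S(0) == len(text)             # also require full consumption

print(accepts("aa"), accepts("aba3aba"), accepts("ab"))   # True True False
```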
The Packrat parser can be improved to update only the necessary cells in the matrix through a depth-first visit of each subexpression tree. Consequently, using a matrix with dimensions of m×n is often wasteful, as most entries would remain empty.[5] These cells are linked to the input string, not to the nonterminals of the grammar. This means that increasing the input string size will always increase memory consumption, while the number of parsing rules changes only the worst-case space complexity.[1] Another operator, called cut, has been introduced to Packrat to reduce its average space complexity even further. This operator exploits the formal structure of many programming languages to eliminate impossible derivations. For instance, the parsing of control statements in a standard programming language is mutually exclusive from the first recognized token, e.g., {if, do, while, switch}.[10] The cut operator appears in two forms, α↑β/γ and (α↑β)*. In the first case, if α is recognized then γ is not evaluated; the second form can be rewritten as N → α↑β N / ε, to which the same rule applies. When a Packrat parser uses cut operators, it effectively clears its backtracking stack. This is because a cut operator reduces the number of possible alternatives in an ordered choice. By adding cut operators in the right places in a grammar's definition, the resulting Packrat parser needs only a nearly constant amount of space for memoization.[10] A sketch of an implementation of the Packrat algorithm is given in a Lua-like pseudocode in the literature.[5] Consider the following context-free grammar, which recognizes simple arithmetic expressions composed of single digits interleaved by sums, multiplications, and parentheses: S → A; A → M '+' A / M; M → P '*' M / P; P → '(' A ')' / D; D → ('0'-'9'). Denoting the line terminator with ⊣, the packrat algorithm can be applied as follows. Backtrack to the first grammar rule with an unexplored alternative, P → '(' A ')' / D. No update, because no terminal was recognized. Update: D(1) = 1; P(1) = 1. No update, because no nonterminal was fully recognized. Backtrack to the first grammar rule with an unexplored alternative, P → '(' A ')' / D. No update, because no terminal was recognized; but the new input will not match inside M → P '*' M, so an unroll to M → P '*' M / P is necessary. Update: D(4) = 1; P(4) = 1. We do not expand it, since there is a hit in the memoization table: P(4) ≠ 0, so shift the input by P(4).
Shift also the '+' from A → M '+' A. Hit on P(4); update M(4) = 1, as M was recognized. Backtrack to the first grammar rule with an unexplored alternative, P → '(' A ')' / D. No update, because no terminal was recognized; but the new input will not match inside M → P '*' M, so an unroll is necessary. Update: D(6) = 1; P(6) = 1. We do not expand it, since there is a hit in the memoization table: P(6) ≠ 0, so shift the input by P(6). But the new input will not match '+' inside A → M '+' A, so an unroll is necessary. Hit on P(6); update M(6) = 1, as M was recognized. We do not expand it, since there is a hit in the memoization table: M(6) ≠ 0, so shift the input by M(6). Also shift the ')' from P → '(' A ')'. Hit on M(6); update A(4) = 3, as A was recognized, and P(3) = 5, as P was recognized. No update, because no terminal was recognized. Hit on P(3); update M(1) = 7, as M was recognized. No update, because no terminal was recognized. Hit on M(1); update A(1) = 7 and S(1) = 7, as A and S were recognized. S was fully reduced, so the input string is recognized.
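The Lua-like pseudocode cited above is not reproduced here; the following Python sketch only illustrates the same mechanism for the arithmetic grammar of this example, with an explicit memo table playing the role of the matrix, and is demonstrated on an input string chosen for illustration rather than the one used in the walkthrough:

```python
# A compact packrat parser for the arithmetic PEG
#   S -> A,  A -> M '+' A / M,  M -> P '*' M / P,
#   P -> '(' A ')' / D,  D -> ('0'-'9')
# memo[(rule, position)] caches every (nonterminal, position) result,
# so each pair is computed at most once.  "2*(3+4)" is an assumed input.

def parse(text: str) -> bool:
    memo = {}                          # (rule name, position) -> end position or None

    def rule(name):
        def decorate(fn):
            def wrapper(i):
                key = (name, i)
                if key not in memo:
                    memo[key] = fn(i)
                return memo[key]
            return wrapper
        return decorate

    def ch(i, c):                      # match one terminal character
        return i + 1 if i is not None and i < len(text) and text[i] == c else None

    @rule("D")
    def D(i):                          # D -> ('0'-'9')
        return i + 1 if i < len(text) and text[i].isdigit() else None

    @rule("P")
    def P(i):                          # P -> '(' A ')' / D
        j = ch(i, '(')
        if j is not None:
            j = ch(A(j), ')')
            if j is not None:
                return j
        return D(i)

    @rule("M")
    def M(i):                          # M -> P '*' M / P
        j = P(i)
        if j is not None:
            star = ch(j, '*')
            if star is not None:
                k = M(star)
                if k is not None:
                    return k
        return j

    @rule("A")
    def A(i):                          # A -> M '+' A / M
        j = M(i)
        if j is not None:
            plus = ch(j, '+')
            if plus is not None:
                k = A(plus)
                if k is not None:
                    return k
        return j

    return A(0) == len(text)           # S -> A; accept only if all input is consumed

print(parse("2*(3+4)"), parse("2*+3"))    # True False
```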
https://en.wikipedia.org/wiki/Packrat_parser
ABRMSorbusiness rule management systemis asoftwaresystem used to define, deploy, execute, monitor and maintain the variety and complexity of decision logic that is used by operational systems within an organization or enterprise. This logic, also referred to asbusiness rules, includes policies, requirements, and conditional statements that are used to determine the tactical actions that take place in applications and systems. A BRMS includes, at minimum: The top benefits of a BRMS include: Some disadvantages of the BRMS include:[1] Most BRMS vendors have evolved fromrule enginevendors to provide business-usablesoftware development lifecyclesolutions, based on declarative definitions of business rules executed in their own rule engine. BRMSs are increasingly evolving into broader digital decisioning platforms that also incorporate decision intelligence andmachine learningcapabilities.[2] However, some vendors come from a different approach (for example, they map decision trees or graphs to executable code). Rules in the repository are generally mapped to decision services that are naturally fully compliant with the latestSOA,Web Services, or other software architecture trends. In a BRMS, a representation of business rules maps to a software system for execution. A BRMS therefore relates tomodel-driven engineering, such as themodel-driven architecture(MDA) of theObject Management Group(OMG). It is no coincidence that many of the related standards come under the OMG banner. A BRMS is a critical component forEnterprise Decision Managementas it allows for the transparent and agile management of the decision-making logic required in systems developed using this approach. The OMGDecision Model and Notationstandard is designed to standardize elements of business rules development, specially decision table representations. There is also a standard for a Java RuntimeAPIfor rule enginesJSR-94. Many standards, such asdomain-specific languages, define their own representation of rules, requiring translations to generic rule engines or their own custom engines. Other domains, such asPMML, also define rules.
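As a purely illustrative sketch of the separation a BRMS provides, the following Python fragment keeps the decision logic as declarative rule data that can be edited and redeployed independently of the calling application; the rule names, fields and thresholds are invented for the example and do not correspond to any particular product or standard:

```python
# Illustrative sketch only: decision logic ("business rules") kept as data,
# evaluated by a small generic engine, separate from application code.

RULES = [
    # (rule name, condition on the case, action to apply)
    ("reject_minor",      lambda c: c["age"] < 18,         {"decision": "reject"}),
    ("manual_high_value", lambda c: c["amount"] > 10_000,  {"decision": "manual review"}),
    ("approve_good_risk", lambda c: c["risk_score"] < 0.2, {"decision": "approve"}),
]

def decide(case: dict) -> dict:
    """A minimal 'decision service': apply the first matching rule."""
    for name, condition, action in RULES:
        if condition(case):
            return {"rule": name, **action}
    return {"rule": None, "decision": "approve"}      # default outcome

print(decide({"age": 34, "amount": 2_500, "risk_score": 0.1}))
# {'rule': 'approve_good_risk', 'decision': 'approve'}
```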
https://en.wikipedia.org/wiki/Business_rule_management_system
Advanced Vector Extensions(AVX, also known asGesher New Instructionsand thenSandy Bridge New Instructions) areSIMDextensions to thex86instruction set architectureformicroprocessorsfromIntelandAdvanced Micro Devices(AMD). They were proposed by Intel in March 2008 and first supported by Intel with theSandy Bridge[1]microarchitecture shipping in Q1 2011 and later by AMD with theBulldozer[2]microarchitecture shipping in Q4 2011. AVX provides new features, new instructions, and a new coding scheme. AVX2(also known asHaswell New Instructions) expands most integer commands to 256 bits and introduces new instructions. They were first supported by Intel with theHaswellmicroarchitecture, which shipped in 2013. AVX-512expands AVX to 512-bit support using a newEVEX prefixencoding proposed by Intel in July 2013 and first supported by Intel with theKnights Landingco-processor, which shipped in 2016.[3][4]In conventional processors, AVX-512 was introduced withSkylakeserver and HEDT processors in 2017. AVX uses sixteen YMM registers to perform a single instruction on multiple pieces of data (seeSIMD). Each YMM register can hold and do simultaneous operations (math) on: The width of the SIMD registers is increased from 128 bits to 256 bits, and renamed from XMM0–XMM7 to YMM0–YMM7 (inx86-64mode, from XMM0–XMM15 to YMM0–YMM15). The legacySSEinstructions can still be utilized via theVEX prefixto operate on the lower 128 bits of the YMM registers. AVX introduces a three-operand SIMD instruction format calledVEX coding scheme, where the destination register is distinct from the two source operands. For example, anSSEinstruction using the conventional two-operand forma←a+bcan now use a non-destructive three-operand formc←a+b, preserving both source operands. Originally, AVX's three-operand format was limited to the instructions with SIMD operands (YMM), and did not include instructions with general purpose registers (e.g. EAX). It was later used for coding new instructions on general purpose registers in later extensions, such asBMI. VEX coding is also used for instructions operating on the k0-k7 mask registers that were introduced withAVX-512. Thealignmentrequirement of SIMD memory operands is relaxed.[5]Unlike their non-VEX coded counterparts, most VEX coded vector instructions no longer require their memory operands to be aligned to the vector size. Notably, theVMOVDQAinstruction still requires its memory operand to be aligned. The newVEX coding schemeintroduces a new set of code prefixes that extends theopcodespace, allows instructions to have more than two operands, and allows SIMD vector registers to be longer than 128 bits. The VEX prefix can also be used on the legacy SSE instructions giving them a three-operand form, and making them interact more efficiently with AVX instructions without the need forVZEROUPPERandVZEROALL. The AVX instructions support both 128-bit and 256-bit SIMD. The 128-bit versions can be useful to improve old code without needing to widen the vectorization, and avoid the penalty of going from SSE to AVX, they are also faster on some early AMD implementations of AVX. This mode is sometimes known as AVX-128.[6] These AVX instructions are in addition to the ones that are 256-bit extensions of the legacy 128-bit SSE instructions; most are usable on both 128-bit and 256-bit operands. Issues regarding compatibility between future Intel and AMD processors are discussed underXOP instruction set. 
AVX adds new register-state through the 256-bit wide YMM register file, so explicitoperating systemsupport is required to properly save and restore AVX's expanded registers betweencontext switches. The following operating system versions support AVX: Advanced Vector Extensions 2 (AVX2), also known asHaswell New Instructions,[24]is an expansion of the AVX instruction set introduced in Intel'sHaswell microarchitecture. AVX2 makes the following additions: Sometimes three-operandfused multiply-accumulate(FMA3) extension is considered part of AVX2, as it was introduced by Intel in the same processor microarchitecture. This is a separate extension using its ownCPUIDflag and is described onits own pageand not below. AVX-512are 512-bit extensions to the 256-bit Advanced Vector Extensions SIMD instructions for x86 instruction set architecture proposed byIntelin July 2013.[3] AVX-512 instructions are encoded with the newEVEX prefix. It allows 4 operands, 8 new 64-bitopmask registers, scalar memory mode with automatic broadcast, explicit rounding control, and compressed displacement memoryaddressing mode. The width of the register file is increased to 512 bits and total register count increased to 32 (registers ZMM0-ZMM31) in x86-64 mode. AVX-512 consists of multiple instruction subsets, not all of which are meant to be supported by all processors implementing them. The instruction set consists of the following: Only the core extension AVX-512F (AVX-512 Foundation) is required by all implementations, though all current implementations also support CD (conflict detection). All central processors with AVX-512 also support VL, DQ and BW. The ER, PF, 4VNNIW and 4FMAPS instruction set extensions are currently only implemented in Intel computing coprocessors. The updated SSE/AVX instructions in AVX-512F use the same mnemonics as AVX versions; they can operate on 512-bit ZMM registers, and will also support 128/256 bit XMM/YMM registers (with AVX-512VL) and byte, word, doubleword and quadword integer operands (with AVX-512BW/DQ and VBMI).[26]: 23 [28] ^Note 1: Intel does not officially support AVX-512 family of instructions on theAlder Lakemicroprocessors. In early 2022, Intel began disabling in silicon (fusing off) AVX-512 in Alder Lake microprocessors to prevent customers from enabling AVX-512.[29]In older Alder Lake family CPUs with some legacy combinations of BIOS and microcode revisions, it was possible to execute AVX-512 family instructions when disabling all the efficiency cores which do not contain the silicon for AVX-512.[30][31][32] AVX-VNNI is aVEX-coded variant of theAVX512-VNNIinstruction set extension. Similarly, AVX-IFMA is aVEX-coded variant ofAVX512-IFMA. These extensions provide the same sets of operations as their AVX-512 counterparts, but are limited to 256-bit vectors and do not support any additional features ofEVEXencoding, such as broadcasting, opmask registers or accessing more than 16 vector registers. These extensions allow support of VNNI and IFMA operations even when fullAVX-512support is not implemented in the processor. AVX10, announced in July 2023,[38]is a new, "converged" AVX instruction set. It addresses several issues of AVX-512, in particular that it is split into too many parts[39](20 feature flags). 
The initial technical paper also made 512-bit vectors optional to support, but as of revision 3.0 vector length enumeration is removed and 512-bit vectors are mandatory.[40] AVX10 presents a simplified CPUID interface to test for instruction support, consisting of the AVX10 version number (indicating the set of instructions supported, with later versions always being a superset of an earlier one).[41]For example, AVX10.2 indicates that a CPU is capable of the second version of AVX10.[42]Initial revisions of the AVX10 technical specifications also included maximum supported vector length as part of the ISA extension name, e.g. AVX10.2/256 would mean a second version of AVX10 with vector length up to 256 bits, but later revisions made that unnecessary. The first version of AVX10, notated AVX10.1, doesnotintroduce any instructions or encoding features beyond what is already in AVX-512 (specifically, in IntelSapphire Rapids: AVX-512F, CD, VL, DQ, BW, IFMA, VBMI, VBMI2, BITALG, VNNI, GFNI, VPOPCNTDQ, VPCLMULQDQ, VAES, BF16, FP16). For CPUs supporting AVX10 and 512-bit vectors, all legacy AVX-512 feature flags will remain set to facilitate applications supporting AVX-512 to continue using AVX-512 instructions.[42] AVX10.1 was first released in IntelGranite Rapids[42](Q3 2024) and AVX10.2 will be available inDiamond Rapids.[43] APX is a new extension. It is not focused on vector computation, but provides RISC-like extensions to the x86-64 architecture by doubling the number of general-purpose registers to 32 and introducing three-operand instruction formats. AVX is only tangentially affected as APX introduces extended operands.[44][45] Since AVX instructions are wider, they consume more power and generate more heat. Executing heavy AVX instructions at high CPU clock frequencies may affect CPU stability due to excessivevoltage droopduring load transients. Some Intel processors have provisions to reduce theTurbo Boostfrequency limit when such instructions are being executed. This reduction happens even if the CPU hasn't reached its thermal and power consumption limits. OnSkylakeand its derivatives, the throttling is divided into three levels:[66][67] The frequency transition can be soft or hard. Hard transition means the frequency is reduced as soon as such an instruction is spotted; soft transition means that the frequency is reduced only after reaching a threshold number of matching instructions. The limit is per-thread.[66] InIce Lake, only two levels persist:[68] Rocket Lakeprocessors do not trigger frequency reduction upon executing any kind of vector instructions regardless of the vector size.[68]However, downclocking can still happen due to other reasons, such as reaching thermal and power limits. Downclocking means that using AVX in a mixed workload with an Intel processor can incur a frequency penalty. Avoiding the use of wide and heavy instructions help minimize the impact in these cases. AVX-512VL allows for using 256-bit or 128-bit operands in AVX-512 instructions, making it a sensible default for mixed loads.[69] On supported and unlocked variants of processors that down-clock, the clock ratio reduction offsets (typically called AVX and AVX-512 offsets) are adjustable and may be turned off entirely (set to 0x) via Intel's Overclocking / Tuning utility or in BIOS if supported there.[70]
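Because the available extensions vary between processors, software typically selects a code path at run time from the CPU's feature flags. A minimal, Linux-specific Python sketch (it assumes /proc/cpuinfo exists and uses the kernel's flag names; native code would normally query CPUID or a compiler builtin instead) might look like this:

```python
# Linux-specific sketch: pick a vector code path from the CPU feature
# flags exposed by the kernel in /proc/cpuinfo ("avx", "avx2", "avx512f").

def cpu_flags() -> set:
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
if "avx512f" in flags:
    print("use the AVX-512 code path")
elif "avx2" in flags:
    print("use the AVX2 code path")
elif "avx" in flags:
    print("use the 128/256-bit AVX code path")
else:
    print("fall back to SSE or scalar code")
```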
https://en.wikipedia.org/wiki/Advanced_Vector_Extensions_2
J.D. Edwards World Solution CompanyorJD Edwards, abbreviatedJDE, was anenterprise resource planning(ERP)softwarecompany, whose namesake ERP system is still sold under ownership byOracle Corporation. JDE's products includedWorldforIBMAS/400minicomputers(the users using acomputer terminalorterminal emulator),OneWorldfor their proprietaryConfigurable Network Computingarchitecture (aclient–serverfat client), and JD EdwardsEnterpriseOne(aweb-basedthin client). The company was founded March 1977 inDenver, by Jack Thompson, C.T.P. "Chuck" Hintze, Dan Gregory, andC. Edward "Ed" McVaney. In June 2003, JD Edwards agreed to sell itself toPeopleSoftfor $1.8 billion. Within days, Oracle launched a hostile takeover bid for PeopleSoft sans JD Edwards.[1][2]PeopleSoft went ahead with the JD Edwards acquisition anyway, and in 2005,Oracle Corporationfinally took ownership of the combined JD Edwards-PeopleSoft organization. As of 2020, Oracle continues to sell and actively support both ERP packages, branded now as JD Edwards EnterpriseOne[3]and JD Edwards World.[4] Ed McVaneyoriginally trained as anengineerat theUniversity of Nebraska, and in 1964 was employed by Western Electric, then byPeat Marwick, and moved toDenver, in 1968, and later became a partner at Alexander Grant where he hired Jack Thompson and Dan Gregory. Around that time he was realizing that, in his words, "The culture of a public accounting firm is the antithesis of developing software. The idea of spending time on something that you’re not getting paid for—software development—they just could not stomach that."[5]McVaney felt that accounting clients did not understand what was required for software development, and decided to start his own firm. "JD Edwards" was founded in 1977 by Jack Thompson, Dan Gregory, and Ed McVaney; the company's name is drawn from the initials "J" for Jack, "D" for Dan, and "Edwards" for Ed. McVaney took a salary cut from $44,000 to $36,000 to ensure initial funding. Start-up clients included McCoy Sales, a wholesale distribution company in Denver, and Cincinnati Milacron, a maker of machine tools. The business received a $75,000 contract to develop wholesale distribution system software and a $50,000 contract with the Colorado Highway Department to develop governmental and construction cost accounting systems. The first international client wasShell Oil Company. Shell Oil implemented JD Edwards inCanadaand then inCameroon. Gregory flew to Shell Oil inDouala, Cameroon to install the company's first international, multi-national, multi-currency client software system. As the majority of JD Edwards's customers weremedium-sized companies, clients did not have large-scale software implementations. There was a basic business need for all accounting to be tightly integrated. As McVaney would explain in 2002, integrated systems were created precisely because "you can’t go into a moderate-sized company and just put in a payroll. You have to put in a payroll and job cost, general ledger, inventory, fixed assets and the whole thing.SAPhad the same advantage that JD Edwards had because we worked on smaller companies, we were forced to see the whole broad picture."[5]This requirement was relevant to both JDE clients in the US and Europe and their European competitor SAP, whose typical clients were much smaller than the AmericanFortune 500firms. McVaney and his company developed what would be calledEnterprise Resource Planning(ERP) software in response to that business requirement. 
The software ultimately sold was namedJD Edwards WorldSoftware, popularly calledWorld. Development began usingSystem/34and/36minicomputers, changing course in the mid-1980s to theSystem/38, later switching to theAS/400platform when it became available. The company's initial focus was on developing theaccountingsoftware needed for their clients. World was server-centric as well as multi-user; the users would access the system using one of severalIBMcomputer terminalsor "green-screens". (Later on, users would runterminal emulatorsoftware on their personal computers). As anERP system, World comprised the three basic areas of expertise:functional/business analyst,programmer/software developer, andCNC/system administration. By late 1996, JD Edwards delivered to its customers the result of a major corporate initiative: the software was now ported to platform-independentclient–serversystems. It was brandedJD Edwards OneWorld, an entirely new product with agraphical user interfaceand adistributed computingmodel replacing the old server-centric model. The architecture JD Edwards had developed for this newer technology, calledConfigurable Network Computingor CNC, transparently shielded business applications from the servers that ran those same applications, the databases in which the data were stored, and the underlying operating system and hardware. By first quarter 1998, JD Edwards had 26 OneWorld customers and was moving its medium-sized customers to the new client–server flavor of ERP. By second quarter 1998, JDE had 48 customers,[6]and by 2001, the company had more than 600 customers using OneWorld, a fourfold increase over 2000.[7] The company became publicly listed on September 24, 1997, with vice-president Doug Massingill being promoted tochief executive officer, at an initial price of $23 per share, trading onNASDAQunder the symbol JDEC. By 1998, JD Edwards' revenue was more than $934 million and McVaney decided to retire. Within a year of the release of OneWorld, customers and industry analysts were discussing serious reliability, unpredictability and other bug-related issues. In user group meetings, these issues were raised with JDE management. So serious were these major quality issues with OneWorld that customers began to raise the possibility of class-action lawsuits, leading to McVaney's return from retirement as CEO. At an internal meeting in 2000, McVaney said he had decided to "wait however long it took to have OneWorld 100% reliable" and had thus delayed the release of a new version of OneWorld because he "wasn't going to let it go out on the street until it was ready for prime time." McVaney also encouraged customer feedback by supporting an independent JD Edwards user group calledQuest International. After delaying the upgrade for one year and refusing all requests by marketing for what he felt was a premature release, in the fall of 2000 JD Edwards released version B7333, now rebranded as OneWorld Xe. Despite press skepticism, Xe proved to be the most stable release to date and went a long way toward restoring customer confidence. McVaney retired again in January 2002, although remaining a director, and Robert Dutkowsky fromTeradynewas appointed as the new president and CEO. After the release of Xe, the product began to go through more broad change and several new versions. A newweb-based client, in which the user accesses the JD Edwards software through their web browser, was introduced in 2001. 
This web-based client was robust enough for customer use and was given application version number 8.10 in 2005. Initial issues with release 8.11 in 2005 lead to a quick service pack to version 8.11 SP1, salvaging the reputation of that product. By 2006, version 8.12 was announced. Throughout the application releases, new releases of system/foundation code called Tools Releases were announced, moving from Tools Release versions 8.94 to 8.95. Tools Release 8.96, along with the application's upgrade to version 8.12, saw the replacement of the older, often unstable proprietary object specifications (also called "specs") with a new XML-based system, proving to be much more reliable. Tools Release 8.97 shipped a newweb servicelayer allowing the JD Edwards software to communicate with third-party systems. In June 2003, the JD Edwards board agreed to an offer in whichPeopleSoft, a former competitor of JD Edwards, would acquire JD Edwards.[1]The takeover was completed in July. OneWorld was added to PeopleSoft's software line, along with PeopleSoft's flagship product Enterprise, and was renamedJD Edwards EnterpriseOne.[2] Within days of the PeopleSoft announcement,Oracle Corporationmounted a hostile takeover bid of PeopleSoft. Although the first attempts to purchase the company were rebuffed by the PeopleSoft board of directors, by December 2004 the board decided to accept Oracle's offer. The final purchase went through in January 2005; Oracle now owned both PeopleSoft and JD Edwards. Most JD Edwards customers, employees, and industry analysts predicted Oracle would kill the JD Edwards products. However, Oracle saw a position for JDE in the medium-sized company space that was not filled with either its e-Business Suite or its newly acquired PeopleSoft Enterprise product. Oracle's JD Edwards products are known as JD Edwards EnterpriseOne and JD Edwards World. Oracle announced that JD Edwards support would continue until at least 2033.[8] Support for the older releases such as the Xe product were to expire by 2013, spurring the acceptance of upgrades to newer application releases. By 2015, the latest offering of EnterpriseOne was application version 9.2, released October 2015.[9]The latest version of World (now with a web-based interface) was version A9.4, released in April 2015.[10] Shortly after Oracle acquired PeopleSoft and JD Edwards in 2005, Oracle announced the development of a new product calledOracle Fusion Applications.[11]Fusion was designed to co-exist or replace JD Edwards EnterpriseOne and World, as well as Oracle E-Business Applications Suite and other products acquired by Oracle, and was finally released in September 2010.[12]Despite the release of Fusion apps, JD Edwards EnterpriseOne and World is still sold and supported by Oracle and runs numerous businesses worldwide. System
https://en.wikipedia.org/wiki/JD_Edwards
The nearest referent is a grammatical term sometimes used when two or more possible referents of a pronoun, or other part of speech, cause ambiguity in a text. However, "nearness", that is, proximity, may not be the most meaningful criterion for a decision, particularly where word order, inflection and other aspects of syntax are more relevant. The concept of nearest referent is found in the analysis of various languages, including the classical languages Greek,[1] Latin[2] and Arabic.[3][4] It may create or resolve variant views in the interpretation of a text. There are other models than nearest referent for deciding what a pronoun, or other part of speech, refers to, and reference order distinguishes pronoun–referent structures in which the referent precedes the pronoun from those in which it follows. These are also described as anaphoric reference (anaphor, previous referent) and cataphoric reference (cataphor, following referent).[6]
https://en.wikipedia.org/wiki/Nearest_referent
TheViterbi algorithmis adynamic programmingalgorithmfor obtaining themaximum a posteriori probability estimateof the mostlikelysequence of hidden states—called theViterbi path—that results in a sequence of observed events. This is done especially in the context ofMarkov information sourcesandhidden Markov models(HMM). The algorithm has found universal application in decoding theconvolutional codesused in bothCDMAandGSMdigital cellular,dial-upmodems, satellite, deep-space communications, and802.11wireless LANs. It is now also commonly used inspeech recognition,speech synthesis,diarization,[1]keyword spotting,computational linguistics, andbioinformatics. For example, inspeech-to-text(speech recognition), the acoustic signal is treated as the observed sequence of events, and a string of text is considered to be the "hidden cause" of the acoustic signal. The Viterbi algorithm finds the most likely string of text given the acoustic signal. The Viterbi algorithm is named afterAndrew Viterbi, who proposed it in 1967 as a decoding algorithm forconvolutional codesover noisy digital communication links.[2]It has, however, a history ofmultiple invention, with at least seven independent discoveries, including those by Viterbi,Needleman and Wunsch, andWagner and Fischer.[3]It was introduced tonatural language processingas a method ofpart-of-speech taggingas early as 1987. Viterbi pathandViterbi algorithmhave become standard terms for the application of dynamic programming algorithms to maximization problems involving probabilities.[3]For example, in statistical parsing a dynamic programming algorithm can be used to discover the single most likely context-free derivation (parse) of a string, which is commonly called the "Viterbi parse".[4][5][6]Another application is intarget tracking, where the track is computed that assigns a maximum likelihood to a sequence of observations.[7] Given a hidden Markov model with a set of hidden statesS{\displaystyle S}and a sequence ofT{\displaystyle T}observationso0,o1,…,oT−1{\displaystyle o_{0},o_{1},\dots ,o_{T-1}}, the Viterbi algorithm finds the most likely sequence of states that could have produced those observations. At each time stept{\displaystyle t}, the algorithm solves the subproblem where only the observations up toot{\displaystyle o_{t}}are considered. Two matrices of sizeT×|S|{\displaystyle T\times \left|{S}\right|}are constructed: Letπs{\displaystyle \pi _{s}}andar,s{\displaystyle a_{r,s}}be the initial and transition probabilities respectively, and letbs,o{\displaystyle b_{s,o}}be the probability of observingo{\displaystyle o}at states{\displaystyle s}. Then the values ofP{\displaystyle P}are given by the recurrence relation[8]Pt,s={πs⋅bs,otift=0,maxr∈S(Pt−1,r⋅ar,s⋅bs,ot)ift>0.{\displaystyle P_{t,s}={\begin{cases}\pi _{s}\cdot b_{s,o_{t}}&{\text{if }}t=0,\\\max _{r\in S}\left(P_{t-1,r}\cdot a_{r,s}\cdot b_{s,o_{t}}\right)&{\text{if }}t>0.\end{cases}}}The formula forQt,s{\displaystyle Q_{t,s}}is identical fort>0{\displaystyle t>0}, except thatmax{\displaystyle \max }is replaced witharg⁡max{\displaystyle \arg \max }, andQ0,s=0{\displaystyle Q_{0,s}=0}. The Viterbi path can be found by selecting the maximum ofP{\displaystyle P}at the final timestep, and followingQ{\displaystyle Q}in reverse. The time complexity of the algorithm isO(T×|S|2){\displaystyle O(T\times \left|{S}\right|^{2})}. 
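A minimal Python sketch of this recurrence, assuming the model is supplied as plain dictionaries (init for the initial probabilities π, trans for the transition probabilities a, emit for the emission probabilities b), fills the matrices P and Q and then follows Q backwards from the best final state:

```python
# Direct transcription of the recurrence above.  P[t][s] holds the best
# probability of a path ending in state s at time t; Q[t][s] holds the
# predecessor state on that best path.

def viterbi(observations, states, init, trans, emit):
    P = [{} for _ in observations]
    Q = [{} for _ in observations]

    for s in states:                               # t = 0
        P[0][s] = init[s] * emit[s][observations[0]]
        Q[0][s] = None

    for t in range(1, len(observations)):          # t > 0
        for s in states:
            best_prev = max(states, key=lambda r: P[t - 1][r] * trans[r][s])
            P[t][s] = P[t - 1][best_prev] * trans[best_prev][s] * emit[s][observations[t]]
            Q[t][s] = best_prev

    # select the maximum at the final time step and follow Q in reverse
    last = max(states, key=lambda s: P[-1][s])
    path = [last]
    for t in range(len(observations) - 1, 0, -1):
        path.append(Q[t][path[-1]])
    path.reverse()
    return path, P[-1][last]
```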
If it is known which state transitions have non-zero probability, an improved bound can be found by iterating over only thoser{\displaystyle r}which link tos{\displaystyle s}in the inner loop. Then usingamortized analysisone can show that the complexity isO(T×(|S|+|E|)){\displaystyle O(T\times (\left|{S}\right|+\left|{E}\right|))}, whereE{\displaystyle E}is the number of edges in the graph, i.e. the number of non-zero entries in the transition matrix. A doctor wishes to determine whether patients are healthy or have a fever. The only information the doctor can obtain is by asking patients how they feel. The patients may report that they either feel normal, dizzy, or cold. It is believed that the health condition of the patients operates as a discreteMarkov chain. There are two states, "healthy" and "fever", but the doctor cannot observe them directly; they arehiddenfrom the doctor. On each day, the chance that a patient tells the doctor "I feel normal", "I feel cold", or "I feel dizzy", depends only on the patient's health condition on that day. Theobservations(normal, cold, dizzy) along with thehiddenstates (healthy, fever) form a hidden Markov model (HMM). From past experience, the probabilities of this model have been estimated as: In this code,initrepresents the doctor's belief about how likely the patient is to be healthy initially. Note that the particular probability distribution used here is not the equilibrium one, which would be{'Healthy': 0.57, 'Fever': 0.43}according to the transition probabilities. The transition probabilitiestransrepresent the change of health condition in the underlying Markov chain. In this example, a patient who is healthy today has only a 30% chance of having a fever tomorrow. The emission probabilitiesemitrepresent how likely each possible observation (normal, cold, or dizzy) is, given the underlying condition (healthy or fever). A patient who is healthy has a 50% chance of feeling normal; one who has a fever has a 60% chance of feeling dizzy. A particular patient visits three days in a row, and reports feeling normal on the first day, cold on the second day, and dizzy on the third day. Firstly, the probabilities of being healthy or having a fever on the first day are calculated. The probability that a patient will be healthy on the first day and report feeling normal is0.6×0.5=0.3{\displaystyle 0.6\times 0.5=0.3}. Similarly, the probability that a patient will have a fever on the first day and report feeling normal is0.4×0.1=0.04{\displaystyle 0.4\times 0.1=0.04}. The probabilities for each of the following days can be calculated from the previous day directly. For example, the highest chance of being healthy on the second day and reporting to be cold, following reporting being normal on the first day, is the maximum of0.3×0.7×0.4=0.084{\displaystyle 0.3\times 0.7\times 0.4=0.084}and0.04×0.4×0.4=0.0064{\displaystyle 0.04\times 0.4\times 0.4=0.0064}. This suggests it is more likely that the patient was healthy for both of those days, rather than having a fever and recovering. The rest of the probabilities are summarised in the following table: From the table, it can be seen that the patient most likely had a fever on the third day. Furthermore, there exists a sequence of states ending on "fever", of which the probability of producing the given observations is 0.01512. 
This sequence is precisely (healthy, healthy, fever), which can be found be tracing back which states were used when calculating the maxima (which happens to be the best guess from each day but will not always be). In other words, given the observed activities, the patient was most likely to have been healthy on the first day and also on the second day (despite feeling cold that day), and only to have contracted a fever on the third day. The operation of Viterbi's algorithm can be visualized by means of atrellis diagram. The Viterbi path is essentially the shortest path through this trellis. A generalization of the Viterbi algorithm, termed themax-sum algorithm(ormax-product algorithm) can be used to find the most likely assignment of all or some subset oflatent variablesin a large number ofgraphical models, e.g.Bayesian networks,Markov random fieldsandconditional random fields. The latent variables need, in general, to be connected in a way somewhat similar to ahidden Markov model(HMM), with a limited number of connections between variables and some type of linear structure among the variables. The general algorithm involvesmessage passingand is substantially similar to thebelief propagationalgorithm (which is the generalization of theforward-backward algorithm). With an algorithm callediterative Viterbi decoding, one can find the subsequence of an observation that matches best (on average) to a given hidden Markov model. This algorithm is proposed by Qi Wang et al. to deal withturbo code.[9]Iterative Viterbi decoding works by iteratively invoking a modified Viterbi algorithm, reestimating the score for a filler until convergence. An alternative algorithm, theLazy Viterbi algorithm, has been proposed.[10]For many applications of practical interest, under reasonable noise conditions, the lazy decoder (using Lazy Viterbi algorithm) is much faster than the originalViterbi decoder(using Viterbi algorithm). While the original Viterbi algorithm calculates every node in thetrellisof possible outcomes, the Lazy Viterbi algorithm maintains a prioritized list of nodes to evaluate in order, and the number of calculations required is typically fewer (and never more) than the ordinary Viterbi algorithm for the same result. However, it is not so easy[clarification needed]to parallelize in hardware. Thesoft output Viterbi algorithm(SOVA) is a variant of the classical Viterbi algorithm. SOVA differs from the classical Viterbi algorithm in that it uses a modified path metric which takes into account thea priori probabilitiesof the input symbols, and produces asoftoutput indicating thereliabilityof the decision. The first step in the SOVA is the selection of the survivor path, passing through one unique node at each time instant,t. Since each node has 2 branches converging at it (with one branch being chosen to form theSurvivor Path, and the other being discarded), the difference in the branch metrics (orcost) between the chosen and discarded branches indicate theamount of errorin the choice. Thiscostis accumulated over the entire sliding window (usually equalsat leastfive constraint lengths), to indicate thesoft outputmeasure of reliability of thehard bit decisionof the Viterbi algorithm.
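The init, trans and emit tables used in the clinic example above are not reproduced in the text; the values below are reconstructed from the probabilities quoted there (a 60% initial belief in health, a 30% chance that a healthy patient develops a fever the next day, and so on). Feeding them to the viterbi sketch given earlier reproduces the example's result:

```python
# Model reconstructed from the probabilities quoted in the example;
# reuses the viterbi function from the sketch above.
obs    = ("normal", "cold", "dizzy")
states = ("Healthy", "Fever")
init   = {"Healthy": 0.6, "Fever": 0.4}
trans  = {"Healthy": {"Healthy": 0.7, "Fever": 0.3},
          "Fever":   {"Healthy": 0.4, "Fever": 0.6}}
emit   = {"Healthy": {"normal": 0.5, "cold": 0.4, "dizzy": 0.1},
          "Fever":   {"normal": 0.1, "cold": 0.3, "dizzy": 0.6}}

path, probability = viterbi(obs, states, init, trans, emit)
print(path, probability)
# ['Healthy', 'Healthy', 'Fever'] and a probability of about 0.01512
```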
https://en.wikipedia.org/wiki/Viterbi_algorithm
Indiscrete geometry, anopaque setis a system of curves or other set in theplanethat blocks alllines of sightacross apolygon, circle, or other shape. Opaque sets have also been calledbarriers,beam detectors,opaque covers, or (in cases where they have the form of aforestofline segmentsor other curves)opaque forests. Opaque sets were introduced byStefan Mazurkiewiczin 1916,[1]and the problem of minimizing their total length was posed byFrederick Bagemihlin 1959.[2] For instance, visibility through aunit squarecan be blocked by its four boundary edges, with length 4, but a shorter opaque forest blocks visibility across the square with length2+126≈2.639{\displaystyle {\sqrt {2}}+{\tfrac {1}{2}}{\sqrt {6}}\approx 2.639}. It is unproven whether this is the shortest possible opaque set for the square, and for most other shapes this problem similarly remains unsolved. The shortest opaque set for any boundedconvex setin the plane has length at most theperimeterof the set, and at least half the perimeter. For the square, a slightly stronger lower bound than half the perimeter is known. Another convex set whose opaque sets are commonly studied is theunit circle, for which the shortestconnectedopaque set has length2+π{\displaystyle 2+\pi }. Without the assumption of connectivity, the shortest opaque set for the circle has length at leastπ{\displaystyle \pi }and at most4.7998{\displaystyle 4.7998}. Several publishedalgorithmsclaiming to find the shortest opaque set for aconvex polygonwere later shown to be incorrect. Nevertheless, it is possible to find an opaque set with a guaranteedapproximation ratioinlinear time, or to compute the subset of the plane whose visibility is blocked by a given system of line segments inpolynomial time. Every setS{\displaystyle S}in the plane blocks the visibility through a superset ofS{\displaystyle S}, itscoverageC{\displaystyle C}.C{\displaystyle C}consists of points for which all lines through the point intersectS{\displaystyle S}. If a given setK{\displaystyle K}forms a subset of the coverage ofS{\displaystyle S}, thenS{\displaystyle S}is said to be anopaque set,barrier,beam detector, oropaque coverforK{\displaystyle K}. If, additionally,S{\displaystyle S}has a special form, consisting of finitely manyline segmentswhose union forms aforest, it is called anopaque forest. There are many possible opaque sets for any given setK{\displaystyle K}, includingK{\displaystyle K}itself, and many possible opaque forests. For opaque forests, or more generally for systems ofrectifiable curves, their length can be measured in the standard way. For more general point sets, one-dimensionalHausdorff measurecan be used, and agrees with the standard length in the cases of line segments and rectifiable curves.[3] Most research on this problem assumes that the given setK{\displaystyle K}is aconvex set. When it is not convex but merely aconnected set, it can be replaced by itsconvex hullwithout changing its opaque sets. Some variants of the problem restrict the opaque set to lie entirely inside or entirely outsideK{\displaystyle K}. In this case, it is called aninterior barrieror anexterior barrier, respectively. When this is not specified, the barrier is assumed to have no constraints on its location. Versions of the problem in which the opaque set must be connected or form a single curve have also been considered. 
It is not known whether every convex set P has a shortest opaque set, or whether instead the lengths of its opaque sets might approach an infimum without ever reaching it.[3] Every opaque set for P can be approximated arbitrarily closely in length by an opaque forest,[4] and it has been conjectured that every convex polygon has an opaque forest as its shortest opaque set, but this has not been proven.[3]

When the region to be covered is a convex set, the length of its shortest opaque set must be at least half its perimeter and at most its perimeter. For some regions, additional improvements to these bounds can be made. If K is a bounded convex set to be covered, then its boundary ∂K forms an opaque set whose length is the perimeter |∂K|. Therefore, the shortest possible length of an opaque set is at most the perimeter. For sets K that are strictly convex, meaning that there are no line segments on the boundary, and for interior barriers, this bound is tight. Every point on the boundary must be contained in the opaque set, because every boundary point has a tangent line through it that cannot be blocked by any other points.[5] The same reasoning shows that for interior barriers of convex polygons, all vertices must be included. Therefore, the minimum Steiner tree of the vertices is the shortest connected opaque set, and the traveling salesperson path of the vertices is the shortest single-curve opaque set.[4] However, for interior barriers of non-polygonal convex sets that are not strictly convex, or for barriers that are not required to be connected, other opaque sets may be shorter; for instance, it is always possible to omit the longest line segment of the boundary. In these cases, the perimeter or Steiner tree length provides an upper bound on the length of an opaque set.[3][4]

There are several proofs that an opaque set for any convex set K must have total length at least |∂K|/2, half the perimeter. One of the simplest involves the Crofton formula, according to which the length of any curve is proportional to its expected number of intersection points with a random line from an appropriate probability distribution on lines. It is convenient to simplify the problem by approximating K by a strictly convex superset, which can be chosen to have perimeter arbitrarily close to the original set. Then, except for the tangent lines to K (which form a vanishing fraction of all lines), a line that intersects K crosses its boundary twice. Therefore, if a random line intersects K with probability p, the expected number of boundary crossings is 2p. But each line that intersects K intersects its opaque set, so the expected number of intersections with the opaque set is at least p, which is at least half that for K. By the Crofton formula, the lengths of the boundary and barrier have the same proportion as these expected numbers.[6]

This lower bound of |∂K|/2 on the length of an opaque set cannot be improved to have a larger constant factor than 1/2, because there exist examples of convex sets that have opaque sets whose length is close to this lower bound.
In particular, for very long thin rectangles, one long side and two short sides form a barrier, with total length that can be made arbitrarily close to half the perimeter. Therefore, among lower bounds that consider only the perimeter of the coverage region, the bound of |∂K|/2 is best possible.[6] However, getting closer to |∂K|/2 in this way involves considering a sequence of shapes rather than just a single shape, because for any convex set K that is not a triangle, there exists a δ such that all opaque sets have length at least |∂K|/2 + δ.[7]

For a triangle, as for any convex polygon, the shortest connected opaque set is its minimum Steiner tree.[8] In the case of a triangle, this tree can be described explicitly: if the widest angle of the triangle is 2π/3 (120°) or more, it uses the two shortest edges of the triangle, and otherwise it consists of three line segments from the vertices to the Fermat point of the triangle.[9] However, without assuming connectivity, the optimality of the Steiner tree has not been demonstrated. Izumi has proven a small improvement to the perimeter-halving lower bound for the equilateral triangle.[10]

For a unit square, the perimeter is 4, the perimeter minus the longest edge is 3, and the length of the minimum Steiner tree is 1 + √3 ≈ 2.732. However, a shorter, disconnected opaque forest is known, with length √2 + √6/2 ≈ 2.639. It consists of the minimum Steiner tree of three of the square's vertices, together with a line segment connecting the fourth vertex to the center. Ross Honsberger credits its discovery to Maurice Poirier, a Canadian schoolteacher,[11] but it was already described in 1962 and 1964 by Jones.[12][13] It is known to be optimal among forests with only two components,[5][14] and has been conjectured to be the best possible more generally, but this remains unproven.[7] The perimeter-halving lower bound of 2 for the square, already proven by Jones,[12][13] can be improved slightly, to 2.00002, for any barrier that consists of at most countably many rectifiable curves,[7] improving similar previous bounds that constrained the barrier to be placed only near to the given square.[6]

The case of the unit circle was described in a 1995 Scientific American column by Ian Stewart, with a solution of length 2 + π,[15] optimal for a single curve or connected barrier[8][16][17] but not for an opaque forest with multiple curves. Vance Faber and Jan Mycielski credit this single-curve solution to Menachem Magidor in 1974.[8] By 1980, E. Makai had already provided a better three-component solution, with length approximately 4.7998,[18] rediscovered by John Day in a followup to Stewart's column.[19] The unknown length of the optimal solution has been called the beam detection constant.[20]

Two published algorithms claim to generate the optimal opaque forest for arbitrary polygons, based on the idea that the optimal solution has a special structure: a Steiner tree for one triangle in a triangulation of the polygon, and a segment in each remaining triangle from one vertex to the opposite side, of length equal to the height of the triangle. This structure matches the conjectured structure of the optimal solution for a square.
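As a quick numerical check of the lengths quoted for the unit square, the following sketch (plain Python with no external libraries, written for this comparison rather than drawn from the cited sources) evaluates the candidate barriers mentioned above and compares them with the perimeter-halving lower bound of 2.

```python
# Lengths of the candidate opaque sets for the unit square discussed above.
from math import sqrt

perimeter = 4.0                      # the four boundary edges
three_sides = 3.0                    # perimeter minus the longest edge
steiner_tree = 1 + sqrt(3)           # minimum Steiner tree of all four vertices
# Conjectured optimum: Steiner tree of three vertices plus a segment
# from the fourth vertex to the centre of the square.
two_component = sqrt(2) + 0.5 * sqrt(6)

lower_bound = perimeter / 2          # half-perimeter lower bound

for name, length in [("perimeter", perimeter),
                     ("three sides", three_sides),
                     ("Steiner tree", steiner_tree),
                     ("two-component forest", two_component)]:
    print(f"{name:22s} {length:.4f}  (lower bound {lower_bound})")
# two-component forest   2.6390 ... the shortest known, as stated above
```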
Although the optimal triangulation for a solution of this form is not part of the input to these algorithms, it can be found by the algorithms in polynomial time using dynamic programming.[21][22] However, these algorithms do not correctly solve the problem for all polygons, because some polygons have shorter solutions with a different structure than the ones they find. In particular, for a long thin rectangle, the minimum Steiner tree of all four vertices is shorter than the triangulation-based solution that these algorithms find.[23] No known algorithm has been guaranteed to find a correct solution to the problem, regardless of its running time.[3]

Despite this setback, the shortest single-curve barrier of a convex polygon, which is the traveling salesperson path of its vertices, can be computed exactly in polynomial time by a dynamic programming algorithm, in models of computation for which sums of radicals can be computed exactly.[4] There has also been more successful study of approximation algorithms for the problem, and of determining the coverage of a given barrier. By the general bounds for opaque forest length in terms of perimeter, the perimeter of a convex set approximates its shortest opaque forest to within a factor of two in length. In two papers, Dumitrescu, Jiang, Pach, and Tóth provide several linear-time approximation algorithms for the shortest opaque set for convex polygons, with better approximation ratios than two. Additionally, because the shortest connected interior barrier of a convex polygon is given by the minimum Steiner tree, it has a polynomial-time approximation scheme.[4]

The region covered by a given forest can be determined as follows. If the input consists of n line segments forming m connected components, then each of the n sets C_p consists of at most 2m wedges. It follows that the combinatorial complexity of the coverage region, and the time to construct it, is O(m²n²), as expressed in big O notation.[25] Although optimal in the worst case for inputs whose coverage region has combinatorial complexity matching this bound, this algorithm can be improved heuristically in practice by a preprocessing phase that merges overlapping pairs of hulls until all remaining hulls are disjoint, in time O(n log² n). If this reduces the input to a single hull, the more expensive sweeping and intersecting algorithm need not be run: in this case the hull is the coverage region.[26]

Mazurkiewicz (1916) showed that it is possible for an opaque set to avoid containing any nontrivial curves and still have finite total length.[1] A simplified construction of Bagemihl (1959), shown in the figure, produces an example for the unit square. The construction begins with line segments that form an opaque set with an additional property: the segments of negative slope block all lines of non-negative slope, while the segments of positive slope block all lines of non-positive slope. In the figure, the initial segments with this property are four disjoint segments along the diagonals of the square. Then, it repeatedly subdivides these segments while maintaining this property. At each level of the construction, each line segment is split by a small gap near its midpoint into two line segments, with slope of the same sign, that together block all lines of the opposite sign that were blocked by the original line segment.
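The exact coverage-region computation described above is more involved, but a randomized spot check of whether a candidate forest blocks every line of sight across a shape is easy to write. The sketch below is an illustrative simplification (not the polynomial-time algorithm from the literature): it samples lines through pairs of random boundary points of the unit square and tests whether each line meets some barrier segment.

```python
# Monte Carlo sketch: does a candidate barrier meet every line of sight
# across the unit square?  A randomized spot check only, not the exact
# O(m^2 n^2) coverage computation mentioned in the text.
import random

def orient(p, q, c):
    """Signed area test: which side of line pq the point c lies on."""
    return (q[0] - p[0]) * (c[1] - p[1]) - (q[1] - p[1]) * (c[0] - p[0])

def line_blocked(a, b, barrier):
    """True if the line through a and b meets at least one barrier segment."""
    return any(orient(a, b, r) * orient(a, b, s) <= 0 for r, s in barrier)

def random_boundary_point():
    """A uniformly random point on the boundary of the unit square."""
    t, side = random.random(), random.randrange(4)
    return [(t, 0.0), (1.0, t), (t, 1.0), (0.0, t)][side]

three_sides = [((0, 0), (1, 0)), ((1, 0), (1, 1)), ((1, 1), (0, 1))]  # length 3
two_sides   = [((0, 0), (1, 0)), ((0, 1), (1, 1))]                    # length 2

for name, barrier in [("three sides", three_sides), ("two sides", two_sides)]:
    misses = sum(
        not line_blocked(random_boundary_point(), random_boundary_point(), barrier)
        for _ in range(100_000)
    )
    print(f"{name}: {misses} unblocked lines out of 100000")
# "three sides" blocks every sampled line; "two sides" leaves many unblocked,
# even though its length equals the half-perimeter lower bound.
```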
The limit set of this construction is a Cantor space that, like all intermediate stages of the construction, is an opaque set for the square. With quickly decreasing gap sizes, the construction produces a set whose Hausdorff dimension is one, and whose one-dimensional Hausdorff measure (a notion of length suitable for such sets) is finite.[2]

The distance sets of the boundary of a square, or of the four-segment shortest known opaque set for the square, both contain all distances in the interval from 0 to √2. However, by using similar fractal constructions, it is also possible to find fractal opaque sets whose distance sets omit infinitely many of the distances in this interval, or that (assuming the continuum hypothesis) form a set of measure zero.[2]

Opaque sets were originally studied by Stefan Mazurkiewicz in 1916.[1] Other early works on opaque sets include the papers of H. M. Sen Gupta and N. C. Basu Mazumdar in 1955,[27] and by Frederick Bagemihl in 1959,[2] but these are primarily about the distance sets and topological properties of barriers rather than about minimizing their length. In a postscript to his paper, Bagemihl asked for the minimum length of an interior barrier for the square,[2] and subsequent work has largely focused on versions of the problem involving length minimization. These problems have been repeatedly posed, with multiple colorful formulations: digging a trench of as short a length as possible to find a straight buried telephone cable,[8] trying to find a nearby straight road while lost in a forest,[17] swimming to a straight shoreline while lost at sea,[4] efficiently painting walls to render a glass house opaque,[28] etc.

The problem has also been generalized to sets that block all geodesics on a Riemannian manifold,[29][30] or that block lines through sets in higher dimensions. In three dimensions, the corresponding question asks for a collection of surfaces of minimum total area that blocks all visibility across a solid. However, for some solids, such as a ball, it is not clear whether such a collection exists, or whether instead the area has an infimum that cannot be attained.[8][31]
https://en.wikipedia.org/wiki/Opaque_forest_problem
This is a glossary of commutative algebra. See also list of algebraic geometry topics, glossary of classical algebraic geometry, glossary of algebraic geometry, glossary of ring theory and glossary of module theory. In this article, all rings are assumed to be commutative with identity 1.
https://en.wikipedia.org/wiki/Glossary_of_commutative_algebra
Linguistic anthropologyis theinterdisciplinarystudy of how language influences social life. It is a branch ofanthropologythat originated from the endeavor to documentendangered languagesand has grown over the past century to encompass most aspects oflanguage structureand use.[1] Linguistic anthropology explores how language shapes communication, forms social identity and group membership, organizes large-scale cultural beliefs andideologies, and develops a common cultural representation of natural andsocial worlds.[2] Linguistic anthropology emerged from the development of three distinctparadigmsthat have set the standard for approaching linguistic anthropology. The first, now known as "anthropological linguistics," focuses on the documentation of languages. The second, known as "linguistic anthropology," engages in theoretical studies of language use. The third, developed over the past two or three decades, studies issues from other subfields of anthropology with linguistic considerations. Though they developed sequentially, all three paradigms are still practiced today.[3] The first paradigm, anthropological linguistics, is devoted to themes unique to the sub-discipline. This area includes documentation oflanguagesthat have been seen as at-risk forextinction, with a particular focus on indigenous languages of native North American tribes. It is also the paradigm most focused on linguistics.[3]Linguistic themes include the following: The second paradigm can be marked by reversing the words. Going fromanthropological linguisticstolinguistic anthropology, signals a more anthropological focus on the study. This term was preferred byDell Hymes, who was also responsible, withJohn Gumperz, for the idea ofethnographyofcommunication. The termlinguistic anthropologyreflected Hymes' vision of a future where language would be studied in the context of the situation and relative to the community speaking it.[3]This new era would involve many new technological developments, such as mechanical recording. This paradigm developed in critical dialogue with the fields offolkloreon the one hand andlinguisticson the other. Hymes criticized folklorists' fixation on oral texts rather than the verbal artistry of performance.[4]At the same time, he criticized the cognitivist shift in linguistics heralded by the pioneering work ofNoam Chomsky, arguing for an ethnographic focus on language in use. Hymes had many revolutionary contributions to linguistic anthropology, the first of which was a newunit of analysis. Unlike the first paradigm, which focused on linguistic tools like measuring ofphonemesandmorphemes, the second paradigm's unit of analysis was the "speech event". A speech event is defined as one with speech presented for a significant duration throughout its occurrence (ex., a lecture or debate). This is different from a speech situation, where speech could possibly occur (ex., dinner). Hymes also pioneered a linguistic anthropological approach toethnopoetics. Hymes had hoped that this paradigm would link linguistic anthropology more to anthropology. However, Hymes' ambition backfired as the second paradigm marked a distancing of the sub-discipline from the rest of anthropology.[5][6] The third paradigm, which began in the late 1980s, redirected the primary focus on anthropology by providing a linguistic approach to anthropological issues. Rather than prioritizing the technical components of language, third paradigm anthropologists focus on studying culture through the use of linguistic tools. 
Themes include: Furthermore, similar to how the second paradigm used new technology in its studies, the third paradigm heavily includes use of video documentation to support research.[3] Contemporary linguistic anthropology continues research in all three paradigms described above: The third paradigm, the study of anthropological issues through linguistic means, is an affluent area of study for current linguistic anthropologists. A great deal of work in linguistic anthropology investigates questions of socioculturalidentitylinguistically and discursively. Linguistic anthropologistDon Kulickhas done so in relation to identity, for example, in a series of settings, first in a village calledGapunin northernPapua New Guinea.[7]He explored how the use of two languages with and around children in Gapun village: the traditional language (Taiap), not spoken anywhere but in their own village and thus primordially "indexical" of Gapuner identity, andTok Pisin, the widely circulating official language of New Guinea. ("indexical" points to meanings beyond the immediate context.)[8]To speak theTaiap languageis associated with one identity: not only local but "Backward" and also an identity based on the display of hed (personal autonomy). To speak Tok Pisin is toindexa modern, Catholic identity, based not on hed but on save, an identity linked with the will and the skill to cooperate. In later work, Kulick demonstrates that certain loud speech performances in Brazil called um escândalo, Braziliantravesti(roughly, 'transvestite') sex workers shame clients. The travesti community, the argument goes, ends up at least making a powerful attempt to transcend the shame the larger Brazilian public might try to foist off on them, again by loud public discourse and other modes ofperformance.[9] In addition, scholars such asÉmile Benveniste,[10]Mary BucholtzandKira Hall[11]Benjamin Lee,[12]Paul Kockelman,[13]andStanton Wortham[14](among many others) have contributed to understandings of identity as "intersubjectivity" by examining the ways it is discursively constructed. In a series of studies, linguistic anthropologistsElinor OchsandBambi Schieffelinaddressed the anthropological topic ofsocialization(the process by which infants, children, and foreigners become members of a community, learning to participate in its culture), using linguistic and other ethnographic methods.[15]They discovered that the processes ofenculturationand socialization do not occur apart from the process oflanguage acquisition, but that children acquire language and culture together in what amounts to an integrated process. Ochs and Schieffelin demonstrated thatbaby talkis notuniversal, that the direction of adaptation (whether the child is made to adapt to the ongoing situation of speech around it or vice versa) was a variable that correlated, for example, with the direction it was held vis-à-vis a caregiver's body. In many societies caregivers hold a child facing outward so as to orient it to a network of kin whom it must learn to recognize early in life. Ochs and Schieffelin demonstrated that members of all societies socialize children bothtoandthroughthe use of language. 
Ochs and Schieffelin uncovered how, through naturally occurring stories told during dinners in whitemiddle classhouseholds inSouthern California, both mothers and fathers participated in replicatingmale dominance(the "father knows best" syndrome) by the distribution of participant roles such as protagonist (often a child but sometimes mother and almost never the father) and "problematizer" (often the father, who raised uncomfortable questions or challenged the competence of the protagonist). When mothers collaborated with children to get their stories told, they unwittingly set themselves up to be subject to this process. Schieffelin's more recent research has uncovered the socializing role ofpastorsand other fairly new Bosavi converts in theSouthern Highlands, Papua New Guineacommunity she studies.[16][17][18][19]Pastors have introduced new ways of conveying knowledge, new linguisticepistemicmarkers[16]—and new ways of speaking about time.[18]And they have struggled with and largely resisted those parts of the Bible that speak of being able to know the inner states of others (e.g. thegospel of Mark, chapter 2, verses 6–8).[19] In a third example of the current (third) paradigm, sinceRoman Jakobson's studentMichael Silversteinopened the way, there has been an increase in the work done by linguistic anthropologists on the major anthropological theme ofideologies,[20]—in this case "language ideologies", sometimes defined as "shared bodies ofcommonsensenotions about the nature of language in the world."[21]Silverstein has demonstrated that these ideologies are not merefalse consciousnessbut actually influence the evolution of linguistic structures, including the dropping of "thee" and "thou" from everydayEnglishusage.[22]Woolard, in her overview of "code switching", or the systematic practice of alternating linguistic varieties within a conversation or even a single utterance, finds the underlying question anthropologists ask of the practice—Why do they do that?—reflects a dominant linguistic ideology. It is the ideology that people should "really" be monoglot and efficiently targeted toward referential clarity rather than diverting themselves with the messiness of multiple varieties in play at a single time.[23] Much research on linguistic ideologies probes subtler influences on language, such as the pull exerted on Tewa, a Kiowa-Tanoan language spoken in certain New Mexican pueblos and on the Hopi Reservation in Arizona, by "kiva speech", discussed in the next section.[24] Other linguists have carried out research in the areas oflanguage contact,language endangerment, and 'English as a global language'. For instance, Indian linguistBraj Kachruinvestigated local varieties of English in South Asia, the ways in whichEnglish functions as a lingua francaamong multicultural groups in India.[25]British linguist David Crystal has contributed to investigations oflanguage deathattention to the effects of cultural assimilation resulting in the spread of one dominant language in situations of colonialism.[26] More recently, a new line of ideology work is beginning to enter the field oflinguisticsin relation toheritage languages. 
Specifically, applied linguistMartin Guardadohas posited that heritage language ideologies are "somewhat fluid sets of understandings, justifications, beliefs, and judgments that linguistic minorities hold about their languages."[27]Guardadogoes on to argue that ideologies of heritage languages also contain the expectations and desires of linguistic minority families "regarding the relevance of these languages in their children’s lives as well as when, where, how, and to what ends these languages should be used." Although this is arguably a fledgling line of language ideology research, this work is poised to contribute to the understanding of how ideologies of language operate in a variety of settings. In a final example of this third paradigm, a group of linguistic anthropologists have done very creative work on the idea of social space. Duranti published a groundbreaking article onSamoangreetingsand their use and transformation of social space.[28]Before that, Indonesianist Joseph Errington, making use of earlier work by Indonesianists not necessarily concerned with language issues per se, brought linguistic anthropological methods (andsemiotictheory) to bear on the notion of the exemplary center, the center of political and ritual power from which emanated exemplary behavior.[29]Errington demonstrated how theJavanesepriyayi, whose ancestors served at the Javanese royal courts, became emissaries, so to speak, long after those courts had ceased to exist, representing throughout Java the highest example of "refined speech." The work of Joel Kuipers develops this theme vis-a-vis the island ofSumba,Indonesia. And, even though it pertains toTewaIndians inArizonarather than Indonesians,Paul Kroskrity's argument that speech forms originating in the Tewakiva(or underground ceremonial space) forms the dominant model for all Tewa speech can be seen as a direct parallel. Silverstein tries to find the maximum theoretical significance and applicability in this idea of exemplary centers. He feels, in fact, that the exemplary center idea is one of linguistic anthropology's three most important findings. He generalizes the notion thus, arguing "there are wider-scale institutional 'orders of interactionality,' historically contingent yet structured. Within such large-scale, macrosocial orders, in-effectritualcenters ofsemiosiscome to exert a structuring,value-conferring influence on any particular event of discursiveinteractionwith respect to the meanings and significance of the verbal and other semiotic forms used in it."[30]Current approaches to such classic anthropological topics as ritual by linguistic anthropologists emphasize not static linguistic structures but the unfolding in realtime of a"'hypertrophic' set of parallel orders oficonicityand indexicality that seem to cause the ritual to create its own sacred space through what appears, often, to be themagicof textual and nontextual metricalizations, synchronized."[30][31] Addressing the broad central concerns of the subfield and drawing from its core theories, many scholars focus on the intersections of language and the particularly salient social constructs of race (and ethnicity), class, and gender (and sexuality). These works generally consider the roles of social structures (e.g., ideologies and institutions) related to race, class, and gender (e.g., marriage, labor, pop culture, education) in terms of their constructions and in terms of individuals' lived experiences. 
A short list of linguistic anthropological texts that address these topics follows: Ethnopoetics is a method of recording text versions of oral poetry or narrative performances (i.e. verbal lore) that uses poetic lines, verses, and stanzas (instead of prose paragraphs) to capture the formal, poetic performance elements which would otherwise be lost in the written texts. The goal of any ethnopoetic text is to show how the techniques of unique oral performers enhance the aesthetic value of their performances within their specific cultural contexts. Major contributors to ethnopoetic theory include Jerome Rothenberg, Dennis Tedlock, and Dell Hymes. Ethnopoetics is considered a subfield of ethnology, anthropology, folkloristics, stylistics, linguistics, and literature and translation studies. Endangered languagesare languages that are not being passed down to children as their mother tongue or that have declining numbers of speakers for a variety of reasons. Therefore, after a couple generations these languages may no longer be spoken.[32]Anthropologists have been involved with endangered language communities through their involvement in language documentation and revitalization projects. In alanguage documentationproject, researchers work to develop records of the language - these records could be field notes and audio or video recordings. To follow best practices of documentation, these records should be clearly annotated and kept safe within an archive of some kind.Franz Boaswas one of the first anthropologists involved in language documentation within North America and he supported the development of three key materials: 1) grammars, 2) texts, and 3) dictionaries. This is now known as the Boasian Trilogy.[33] Language revitalization is the practice of bringing a language back into common use. The revitalization efforts can take the form of teaching the language to new speakers or encouraging the continued use within the community.[34]One example of a language revitalization project is the Lenape language course taught at Swathmore College, Pennsylvania. The course aims to educate indigenous and non-indigenous students about the Lenape language and culture.[35] Language reclamation, as a subset of revitalization, implies that a language has been taken away from a community and addresses their concern in taking back the agency to revitalize their language on their own terms. Language reclamation addresses the power dynamics associated with language loss. Encouraging those who already know the language to use it, increasing the domains of usage, and increasing the overall prestige of the language are all components of reclamation. One example of this is the Miami language being brought back from 'extinct' status through extensive archives.[36] While the field of linguistics has also been focused on the study of the linguistic structures of endangered languages, anthropologists also contribute to this field through their emphasize onethnographicunderstandings of the socio-historical context of language endangerment, but also of language revitalization and reclamation projects.[37] The Jurgen Trabant Wilhelm von Humboldt Lectures (7hrs)
https://en.wikipedia.org/wiki/Linguistic_anthropology
Backporting is the action of taking parts from a newer version of a software system or software component and porting them to an older version of the same software. It forms part of the maintenance step in a software development process, and it is commonly used for fixing security issues in older versions of the software and also for providing new features to older versions.

The simplest and probably most common situation of backporting is a fixed security hole in a newer version of a piece of software. Consider this simplified example: by taking the modification that fixes Software v2.0 and changing it so that it applies to Software v1.0, one has effectively backported the fix.[1] In real-life situations, the modifications that a single aspect of the software has undergone may be simple (only a few lines of code have changed) up to heavy and massive (many modifications spread across multiple files of the code). In the latter case, backporting may become tedious and inefficient and should only be undertaken if the older version of the software is really needed in favour of the newer (if, for example, the newer version still suffers stability problems that prevent its use in mission-critical situations).[2]

The process of backporting can be roughly divided into several steps.[1] Usually, multiple such modifications are bundled in a patch set. Backports can be provided by the core developer group of the software. Since backporting needs access to the source code of a piece of software, this is the only way that backporting is done for closed source software – the backports will usually be incorporated in binary upgrades along the old version line of the software. With open-source software, backports are sometimes created by software distributors and later sent upstream (that is, submitted to the core developers of the affected software).[2]
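In a version-control workflow, the core of such a backport is often a single cherry-pick of the fixing commit onto the maintenance branch. The sketch below is a minimal illustration driving Git from Python; the branch name release-1.0 and the commit id are hypothetical placeholders, and real backports frequently require manual conflict resolution where the old and new code bases have diverged.

```python
# Minimal sketch of backporting a fix with "git cherry-pick".
# Branch and commit names are hypothetical examples.
import subprocess

def run(*cmd):
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

fix_commit = "abc1234"                       # commit that fixes v2.0 (assumed)
run("git", "switch", "release-1.0")          # the older, still-supported line
run("git", "cherry-pick", "-x", fix_commit)  # apply and record the fix
# If the cherry-pick stops on conflicts, the conflicting hunks must be
# adapted to the 1.0 code base by hand before committing.
```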
https://en.wikipedia.org/wiki/Backporting
Acontactless smart cardis a contactless credential whose dimensions arecredit cardsize. Its embedded integrated circuits can store (and sometimes process) data and communicate with a terminal viaNFC. Commonplace uses include transit tickets, bank cards and passports. There are two broad categories of contactless smart cards. Memory cards contain non-volatile memory storage components, and perhaps some specific security logic. Contactless smart cards contain read-onlyRFIDcalled CSN (Card Serial Number) or UID, and a re-writeable smart cardmicrochipthat can be transcribed via radio waves. A contactless smart card is characterized as follows: Contactless smart cards can be used for identification, authentication, and data storage.[2]They also provide a means of effecting business transactions in a flexible, secure, standard way with minimal human intervention. Contactless smart cards were first used for electronic ticketing in 1995 in Seoul, South Korea.[3][4] Since then, smart cards with contactless interfaces have been increasingly popular for payment and ticketing applications such as mass transit. Globally, contactless fare collection is being employed for efficiencies in public transit. The various standards emerging are local in focus and are not compatible, though theMIFAREClassic card from Philips has a large market share in the United States and Europe. In more recent times,VisaandMasterCardhave agreed to standards for general "open loop" payments on their networks, with millions of cards deployed in the U.S.,[5]in Europe and around the world. Smart cards are being introduced in personal identification and entitlement schemes at regional, national, and international levels. Citizen cards, drivers’ licenses, and patient card schemes are becoming more prevalent. In Malaysia, the compulsory national ID schemeMyKadincludes 8 different applications and is rolled out for 18 million users. Contactless smart cards are being integrated intoICAObiometric passportsto enhance security for international travel. With theCOVID-19 pandemic, demand for and usage of contactless credit and debit cards has increased, although coins and banknotes are generally safe and this technology will thus not reduce the spread of the virus. Contactless smart card readers use radio waves to communicate with, and both read and write data on a smart card. When used for electronic payment, they are commonly located nearPIN pads, cash registers and other places of payment. When the readers are used for public transit they are commonly located on fare boxes, ticket machines, turnstiles, and station platforms as a standalone unit. When used for security, readers are usually located to the side of an entry door. A contactless smart card is a card in which the chip communicates with the card reader through an induction technology similar to that of anRFID(at data rates of 106 to 848 kbit/s). These cards require only close proximity to an antenna to complete a transaction. They are often used when transactions must be processed quickly or hands-free, such as on mass transit systems, where a smart card can be used without even removing it from awallet. The standard for contactless smart card communications isISO/IEC 14443. It defines two types of contactless cards ("A" and "B")[6]and allows for communications at distances up to 10 cm (3.9 in)[citation needed]. There had been proposals for ISO/IEC 14443 types C, D, E, F and G that have been rejected by the International Organization for Standardization. 
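As an illustration of how a host application exchanges commands with such a reader, the sketch below assumes the third-party pyscard library and a PC/SC-compliant contactless reader (neither is mentioned in this article), and uses the commonly supported FF CA 00 00 00 pseudo-APDU to request the card's UID/CSN; exact behaviour varies between readers and cards.

```python
# Sketch: read the UID/CSN of a contactless card through a PC/SC reader.
# Assumes the pyscard library and a reader that supports the common
# "GET DATA" pseudo-APDU FF CA 00 00 00; details vary between readers.
from smartcard.System import readers

GET_UID = [0xFF, 0xCA, 0x00, 0x00, 0x00]

reader = readers()[0]                 # first available PC/SC reader
connection = reader.createConnection()
connection.connect()                  # card must be within a few centimetres

data, sw1, sw2 = connection.transmit(GET_UID)
if (sw1, sw2) == (0x90, 0x00):        # ISO 7816 "success" status word
    print("UID:", " ".join(f"{b:02X}" for b in data))
else:
    print(f"Reader returned status {sw1:02X} {sw2:02X}")
```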
An alternative standard for contactless smart cards isISO/IEC 15693, which allows communications at distances up to 50 cm (1.6 ft). Examples of widely used contactless smart cards areSeoul'sUpass(1996),MalaysiaTouch 'n Gocard (1997),Hong Kong'sOctopus card,Shanghai'sPublic Transportation Card(1999),Paris'sNavigo card,Japan Rail'sSuicaCard (2001),Singapore'sEZ-Link,Taiwan'sEasyCard,San Francisco Bay Area'sClipper Card(2002),London'sOyster card,Beijing'sMunicipal Administration and Communications Card(2003),South Korea'sT-money,Southern Ontario'sPresto card,India'sMore Card,Israel'sRav-Kav Card(2008),Melbourne'sMyki cardandSydney'sOpal cardwhich predate the ISO/IEC 14443 standard. The following tables list smart cards used forpublic transportationand otherelectronic purseapplications. A related contactless technology isRFID(radio frequency identification). In certain cases, it can be used for applications similar to those of contactless smart cards, such as forelectronic toll collection. RFID devices usually do not include writeable memory or microcontroller processing capability as contactless smart cards often do.[dubious–discuss] There are dual-interface cards that implement contactless and contact interfaces on a single card with some shared storage and processing. An example isPorto's multi-application transport card, calledAndante, that uses a chip in contact and contactless (ISO/IEC 14443 type B) mode. Like smart cards with contacts, contactless cards do not have a battery. Instead, they use a built-ininductor, using the principle ofresonant inductive coupling, to capture some of the incident electromagnetic signal,rectifyit, and use it to power the card's electronics. Since the start of using theSeoul Transportation Card, numerous cities have moved to the introduction of contactless smart cards as the fare media in anautomated fare collectionsystem.[citation needed] In a number of cases these cards carry anelectronic walletas well as fare products, and can be used for low-value payments. Starting around 2005, a major application of the technology has beencontactless paymentcredit and debit cards. Some major examples include: Roll-outs started in 2005 in the United States, and in 2006 in some parts of Europe and Asia (Singapore).[9]In the U.S., contactless (nonPIN) transactions cover a payment range of ~$5–$100. In general there are two classes of contactless bank cards: magnetic stripe data (MSD) and contactlessEMV. Contactless MSD cards are similar to magnetic stripe cards in terms of the data they share across the contactless interface. They are only distributed in the U.S. Payment occurs in a similar fashion to mag-stripe, without a PIN and often in off-line mode (depending on parameters of the terminal). The security level of such a transaction is better than a mag-stripe card, as the chip cryptographically generates a code which can be verified by the card issuer's systems. Contactless EMV cards have two interfaces (contact and contactless) and work as a normal EMV card via their contact interface. The contactless interface provides similar data to a contact EMV transaction, but usually a subset of the capabilities (e.g. usually issuers will not allow balances to be increased via the contactless interface, instead requiring the card to be inserted into a device which uses the contact interface). EMV cards may carry an "offline balance" stored in their chip, similar to theelectronic walletor "purse" that users of transit smart cards are used to. 
A quickly growing application is in digital identification cards. In this application, the cards are used forauthenticationof identity. The most common example is in conjunction with aPKI. The smart card will store an encrypted digital certificate issued from the PKI along with any other relevant or needed information about the card holder. Examples include theU.S. Department of Defense(DoD)Common Access Card(CAC), and the use of various smart cards by many governments as identification cards for their citizens. When combined with biometrics, smart cards can provide two- or three-factor authentication. Smart cards are not always a privacy-enhancing technology, for the subject carries possibly incriminating information about him all the time. By employing contactless smart cards, that can be read without having to remove the card from the wallet or even the garment it is in, one can add even more authentication value to the human carrier of the cards. The Malaysian government uses smart card technology in theidentity cardscarried by all Malaysian citizens and resident non-citizens. The personal information inside the smart card (calledMyKad) can be read using special APDU commands.[10] Smart cards have been advertised as suitable for personal identification tasks, because they are engineered to betamper resistant. The embedded chip of a smart card usually implements somecryptographic algorithm. However, there are several methods of recovering some of the algorithm's internal state. Differential power analysis[11]involves measuring the precise time andelectric current[dubious–discuss]required for certain encryption or decryption operations. This is most often used against public key algorithms such asRSAin order to deduce the on-chip private key, although some implementations of symmetric ciphers can be vulnerable to timing or power attacks as well. Smart cards can be physically disassembled by using acid, abrasives, or some other technique to obtain direct, unrestricted access to the on-board microprocessor. Although such techniques obviously involve a fairly high risk of permanent damage to the chip, they permit much more detailed information (e.g. photomicrographs of encryption hardware) to be extracted. Short distance (≈10 cm. or 4″) is required for supplying power. The radio frequency, however, can be eavesdropped within several meters once powered-up.[12]
https://en.wikipedia.org/wiki/Contactless_smart_card
Consumer protectionis the practice of safeguarding buyers of goods and services, and the public, against unfair practices in themarketplace.Consumerprotection measures are often established by law. Such laws are intended to prevent businesses from engaging infraudor specifiedunfair practicesto gain an advantage over competitors or to mislead consumers. They may also provide additional protection for the general public which may be impacted by a product (or its production) even when they are not the direct purchaser or consumer of that product. For example, government regulations may require businesses to disclose detailed information about their products—particularly in areas where public health or safety is an issue, such as with food or automobiles. Consumer protection is linked to the idea ofconsumer rightsand to the formation ofconsumer organizations, which help consumers make better choices in the marketplace and pursue complaints against businesses. Entities that promote consumer protection include government organizations (such as theFederal Trade Commissionin theUnited States), self-regulating business organizations (such as theBetter Business Bureausin the US,Canada,England, etc.), andnon-governmental organizationsthat advocate for consumer protection laws and help to ensure their enforcement (such as consumer protection agencies and watchdog groups).[citation needed] A consumer is defined as someone who acquires goods or services for direct use or ownership rather than for resale or use in production and manufacturing. Consumer interests can also serve consumers, consistent with economic efficiency, but this topic is treated in competition law. Consumer protection can also be asserted vianon-governmentorganizations and individuals as consumer activism. Efforts made for the protection of consumer's rights and interests are: Consumer protection law or consumer law is considered as an area of law that regulatesprivate lawrelationships between individual consumers and the businesses that sell those goods and services. Consumer protection covers a wide range of topics, including but not necessarily limited toproduct liability,privacy rights,unfair business practices,fraud,misrepresentation, and other consumer/business interactions. It is a way of preventing frauds and scams from service and sales contracts, eligible fraud, bill collector regulation, pricing, utility turnoffs, consolidation,personal loansthat may lead tobankruptcy. There have been some arguments that consumer law is also a better way to engage in large-scale redistribution thantax lawbecause it does not necessitate legislation and can be more efficient, given the complexities of tax law.[1] InAustralia, the corresponding agency is theAustralian Competition and Consumer Commissionor the individual State Consumer Affairs agencies. TheAustralian Securities and Investments Commissionhas responsibility for consumer protection regulation of financial services and products. However, in practice, it does so through privately run EDR schemes such as theAustralian Financial Complaints Authority. In Brazil, consumer protection is regulated by the Consumer's Defense Code (Código de Defesa do Consumidor),[2]as mandated by the1988 Constitution of Brazil. 
Brazilian law mandates "The offer and presentation of products or services must ensure correct, clear, accurate and conspicuous information in the Portuguese language about their characteristics, qualities, quantity, composition, price, guarantee, validity and origin, among other data, as well as the risks they pose to the health and safety of consumers."[3]In Brazil, the consumer does not have to bring forward evidence that the defender is guilty. Instead, the defense has to bring forward evidence that they are innocent.[2]In the case of Brazil, they narrowlydefine what a consumer, supplier, product, and services[pt]are, so that they can protect consumers from international channels trade laws and protect them from negligence and misconduct from international suppliers. Several regulations in theEuropean Unionare concerned with consumer protection, including theRegulation on general product safety(GPSR) and theDirective (EU) 2024/2853on liability for defective products. Germany, as a member state of theEuropean Union, is bound by the consumer protectiondirectivesof the European Union; residents may be directly bound by EU regulations. A minister of the federal cabinet is responsible for consumer rights and protection (Verbraucherschutzminister). In thecurrent cabinetofFriedrich Merz, this isCarsten Schneider. When issuing public warnings about products and services, the issuing authority has to take into account that this affects the supplier's constitutionally protected economic liberty, seeBundesverwaltungsgericht(Federal Administrative Court) Case 3 C 34.84, 71 BVerwGE 183.[4] InIndia, consumer protection is specified in TheConsumer Protection Act, 2019. Under this law, Separate Consumer Dispute Redress Forums have been set up throughout India in every district in which a consumer can file their complaint on a simple paper with nominal court fees and their complaint will be decided by the Presiding Officer of the District Level. The complaint can be filed by both the consumer of a goods as well as of the services. An appeal could be filed to the State Consumer Disputes Redress Commissions and after that to the National Consumer Disputes RedresaRedressalsion (NCDRC).[5]The procedures in thesetribunalsare relatively less formal and more people-friendly and they also take less time to decide upon a consumer dispute[6]when compared to the years-long time taken by the traditionalIndian judiciary. In recent years, many effective judgments have been passed by some state and National Consumer Forums. Indian Contract Act, 1872lays down the conditions in which promises made by parties to a contract will be legally binding on each other. It also lays down the remedies available to the aggregate party if the other party fails to honor their promise. The Sale of Goods Act of 1930 provides some safeguards to buyers of goods if goods purchased do not fulfill the express or implied conditions and warranties. The Agriculture Produce Act of 1937 act provides grade standards for agricultural commodities and livestock products. It specifies the conditions which govern the use of standards and lays down the procedure for grading, marking, and packaging of agricultural produce. The quality mark provided under the act is known asAGMARK-Agriculture Marketing. The Nigerian government must protect its people from any form of harm to human health through the use and purchase of items to meet daily needs. 
In light of this, theFederal Competition and Consumer Protection Commission (FCCPC), whose aim is to protect and enhance consumers' interest through information, education, and enforcement of the rights of consumers was established by an Act of Parliament o promote and protect the interest of consumers over all products and services. In a nutshell, it is empowered to eliminate hazardous & substandard goods from the market. Provide speedy redress to consumer complaints and petition arisen from fraud, unfair practice, and exploitation of the consumer. On 5 February 2019, the President of Nigeria, Muhammadu Buhari, assented to the new Federal Competition and Consumer Protection Commission Bill, 2018. Thus, the bill became a law of the Federal Republic of Nigeria and binding on entities and organizations so specified in the Act. The long title of the Act reads: "This Act establishes the Federal Competition and Consumer Protection Commission and the Competition and Consumer Protection Tribunal for the promotion of competition in the Nigerian market at all levels by eliminating monopolies, prohibiting abuse of dominant market position and penalizing other restrictive trade and business practices." The Act further repealed the hitherto Nigerian Consumer Protection Council Act and transferred its core mandate to the new Commission. Modern Taiwanese law has been heavily influenced by the European civil law systems, particularly German and Swiss law. The Civil Code in Taiwan contains five books: General Principles, Obligations, Rights over Things, Family, and Succession. The second book of the Code, the Book of Obligations, provided the basis from which consumers could bring product liability actions prior to the enactment of the CPL.[7][8] The Consumer Protection Law (CPL) inTaiwan, as promulgated on 11 January 1994, and effective on 13 January 1993, specifically protects the interests and safety of customers using the products or services provided by business operators. The Consumer Protection Commission of Executive Yuan serves as an ombudsman supervising, coordinating, reporting any unsafe products/services, and periodically reviewing the legislation. According to the Pacific Rim Law & Policy Association and the American Chamber of Commerce, in a 1997 critical study, the law has been criticized by stating that "although many agree that the intent of the CPL is fair, the CPL's various problems, such as ambiguous terminology, favoritism towards consumer protection groups, and the compensation liability defense, must be addressed before the CPL becomes a truly effective piece of legislation that will protect consumers"[9] The main consumer protection laws in the UK are theConsumer Protection Act 1987and theConsumer Rights Act 2015. TheUnited Kingdomhas left theEuropean Union, but during the transition period (until end of 2020) the UK was still bound bydirectivesof the European Union. Specifics of the division of roles between the EU and the UK are detailed here.[10]Domestic (UK) laws originated within the ambit ofcontractandtortbut, with the influence ofEU law, it is emerging as an independent area of law. In many circumstances, where domestic law is in question, the matter is judicially treated astort,contract,restitutionor evencriminal law.[citation needed] Consumer protection issues were dealt with by theOffice of Fair Tradingbefore 2014. 
Since then, theCompetition and Markets Authorityhas taken on this role.[11] In theUnited Statesa variety of laws at both the federal and state levels regulate consumer affairs. Among them are theFederal Food, Drug, and Cosmetic Act,Fair Debt Collection Practices Act, theFair Credit Reporting Act,Truth in Lending Act,Fair Credit Billing Act, and theGramm–Leach–Bliley Act. Federal consumer protection laws are mainly enforced by theFederal Trade Commission, theConsumer Financial Protection Bureau, theFood and Drug Administration, and theU.S. Department of Justice. At the state level, many states have adopted the Uniform Deceptive Trade Practices Act[12]including, but not limited to, Delaware,[13]Illinois,[14]Maine,[15]and Nebraska.[16]The deceptive trade practices prohibited by the Uniform Act can be roughly subdivided into conduct involving either a) unfair or fraudulent business practices and b) untrue or misleading advertising. The Uniform Act contains a private remedy with attorneys fees for prevailing parties where the losing party "willfully engaged in the trade practice knowing it to be deceptive". Uniform Act §3(b). Missouri has a similar statute called the Merchandising Practices Act.[17]This statute allows local prosecutors or the Attorney General to press charges against people who knowingly use deceptive business practices in a consumer transaction and authorizes consumers to hire a private attorney to bring an action seeking their actual damages, punitive damages, and attorney's fees. Also, the majority of states have a Department of Consumer Affairs devoted to regulating certain industries and protecting consumers who use goods and services from those industries. For example, in California, theCalifornia Department of Consumer Affairsregulates about 2.3 million professionals in over 230 different professions, through its forty regulatory entities. In addition, California encourages its consumers to act asprivate attorneys generalthrough the liberal provisions of itsConsumers Legal Remedies Act. State and federal laws provide for "cooling off" periods giving consumers the right to cancel contracts within a certain time period for several specified types of transactions, potentially including transactions entered into at home, and warranty and repair services contracts.[18][19] Other states have been the leaders in specific aspects of consumer protection. For example, Florida, Delaware, and Minnesota have legislated requirements that contracts be written at reasonable readability levels as a large proportion of contracts cannot be understood by most consumers who sign them.[20] Considering the state of Massachusetts, the Massachusetts Consumer Protection Law, MGL 93A, clearly highlights the rights and violations of consumer protection law in the state. The chapter explains what actions are considered illegal under the law for which a party can seek monetary damages from the other party at fault.[21]Some examples of practices that constitute a Chapter 93A violation would be when: The laws under MGL 93A prohibit activities that relate to overpricing to a consumer and the use of "Bait and Switch" techniques. 
A court will award the plaintiff damages if they can prove that (1) the defendant knowingly and intentionally violated MGL 93A, or (2) the defendant refused to grant relief in bad faith, knowing that the actions violated MGL 93A.[22] Additionally, failure to disclose a refund/return policy, warranties, and critical information about the product or service is a violation of the legislation, and can result in triple damages and lawyer fees.[22]
https://en.wikipedia.org/wiki/Consumer_protection
Algorithmic art or algorithm art is art, mostly visual art, in which the design is generated by an algorithm. Algorithmic artists are sometimes called algorists. Algorithmic art is created in the form of digital paintings and sculptures, interactive installations and music compositions.[2]

Algorithmic art is not a new concept. Islamic art is a good example of the tradition of following a set of rules to create patterns. The even older practice of weaving includes elements of algorithmic art.[3] As computers developed, so did the art created with them. Algorithmic art encourages experimentation, allowing artists to push their creativity in the digital age. Algorithmic art allows creators to devise intricate patterns and designs that would be nearly impossible to achieve by hand.[4] Creators have a say on what the input criteria are, but not on the outcome.[5]

Algorithmic art, also known as computer-generated art, is a subset of generative art (generated by an autonomous system) and is related to systems art (influenced by systems theory). Fractal art is an example of algorithmic art.[6] Fractal art is both abstract and mesmerizing.[2] For an image of reasonable size, even the simplest algorithms require too much calculation for manual execution to be practical, and they are thus executed on either a single computer or on a cluster of computers. The final output is typically displayed on a computer monitor, printed with a raster-type printer, or drawn using a plotter. Variability can be introduced by using pseudo-random numbers. There is no consensus as to whether the product of an algorithm that operates on an existing image (or on any input other than pseudo-random numbers) can still be considered computer-generated art, as opposed to computer-assisted art.[6]

Roman Verostko argues that Islamic geometric patterns are constructed using algorithms, as are Italian Renaissance paintings which make use of mathematical techniques, in particular linear perspective and proportion.[7] Some of the earliest known examples of computer-generated algorithmic art were created by Georg Nees, Frieder Nake, A. Michael Noll, Manfred Mohr and Vera Molnár in the early 1960s. These artworks were executed by a plotter controlled by a computer, and were therefore computer-generated art but not digital art. The act of creation lay in writing the program, which specified the sequence of actions to be performed by the plotter. Sonia Landy Sheridan established Generative Systems as a program at the School of the Art Institute of Chicago in 1970 in response to social change brought about in part by the computer-robot communications revolution.[8] Her early work with copier and telematic art focused on the differences between the human hand and the algorithm.[9]

Aside from the ongoing work of Roman Verostko and his fellow algorists, the next known examples are fractal artworks created in the mid to late 1980s. These are important here because they use a different means of execution. Whereas the earliest algorithmic art was "drawn" by a plotter, fractal art simply creates an image in computer memory; it is therefore digital art. The native form of a fractal artwork is an image stored on a computer – this is also true of very nearly all equation art and of most recent algorithmic art in general.
However, in a stricter sense "fractal art" is not considered algorithmic art, because the algorithm is not devised by the artist.[6] In light of such ongoing developments, pioneer algorithmic artistErnest Edmondshas documented the continuing prophetic role of art in human affairs by tracing the early 1960s association between art and the computer up to a present time in which the algorithm is now widely recognized as a key concept for society as a whole.[10] While art has strong emotional and psychological ties, it also depends heavily on rational approaches. Artists have to learn how to use various tools, theories and techniques to be able to create impressive artwork. Thus, throughout history, many art techniques were introduced to create various visual effects. For example,Georges-Pierre Seuratinventedpointillism, a painting technique that involves placing dots of complementary colors adjacent to each other.[11]CubismandColor Theoryalso helped revolutionize visual arts.Cubisminvolved taking various reference points for the object and creating a 2-Dimensional rendering.Color Theory, stating that all colors are a combination of the three primary colors (Red, Green and Blue), also helped facilitate the use of colors in visual arts and in the creation of distinct colorful effects.[11]In other words, humans have always found algorithmic ways and discovered patterns to create art. Such tools allowed humans to create more visually appealing artworks efficiently. In such ways, art adapted to become more methodological. Another important aspect that allowed art to evolve into its current form isperspective. Perspective allows the artist to create a 2-Dimensional projection of a 3-Dimensional object. Muslim artists during theIslamic Golden Ageemployedlinear perspectivein most of their designs. The notion of perspective was rediscovered by Italian artists during the Renaissance. TheGolden Ratio, a famous mathematical ratio, was utilized by manyRenaissanceartists in their drawings.[11]Most famously,Leonardo DaVinciemployed that technique in hisMona Lisa, and many other paintings, such asSalvator Mundi.[12]This is a form of using algorithms in art. By examining the works of artists in the past, from the Renaissance and Islamic Golden Age, a pattern of mathematical patterns, geometric principles and natural numbers emerges. From one point of view, for a work of art to be considered algorithmic art, its creation must include a process based on analgorithmdevised by the artist. An artists may also select parameters and interact as the composition is generated. Here, an algorithm is simply a detailed recipe for the design and possibly execution of an artwork, which may includecomputer code,functions,expressions, or other input which ultimately determines the form the art will take.[7]This input may bemathematical,computational, or generative in nature. Inasmuch as algorithms tend to bedeterministic, meaning that their repeated execution would always result in the production of identical artworks, some external factor is usually introduced. This can either be a random number generator of some sort, or an external body of data (which can range from recorded heartbeats to frames of a movie.) Some artists also work with organically based gestural input which is then modified by an algorithm. By this definition,fractalsmade by a fractal program are not art, as humans are not involved. 
However, defined differently, algorithmic art can be seen to include fractal art, as well as other varieties such as those usinggenetic algorithms. The artistKerry Mitchellstated in his 1999Fractal Art Manifesto:[13][6][14] Fractal Art is not..Computer(ized) Art, in the sense that the computer does all the work. The work is executed on a computer, but only at the direction of the artist. Turn a computer on and leave it alone for an hour. When you come back, no art will have been generated.[13] "Algorist" is a term used fordigital artistswho create algorithmic art.[7]Pioneering algorists includeVera Molnár,Dóra MaurerandGizella Rákóczy.[15] Algorists formally began correspondence and establishing their identity as artists following a panel titled "Art and Algorithms" atSIGGRAPHin 1995. The co-founders wereJean-Pierre HébertandRoman Verostko. Hébert is credited with coining the term and its definition, which is in the form of his own algorithm:[7] Artists can write code that createscomplexand dynamic visual compositions.[2] Cellular automatacan be used to generate artistic patterns with an appearance of randomness, or to modify images such as photographs by applying a transformation such as the stepping stone rule (to give an impressionist style) repeatedly until the desired artistic effect is achieved.[16]Their use has also been explored in music.[17] Fractal art consists of varieties of computer-generatedfractalswith colouring chosen to give an attractive effect.[18]Especially in the western world, it is not drawn or painted by hand. It is usually created indirectly with the assistance offractal-generating software,iteratingthrough three phases: settingparametersof appropriate fractal software; executing the possibly lengthy calculation; and evaluating the product. In some cases, othergraphics programsare used to further modify the images produced. This is called post-processing. Non-fractal imagery may also be integrated into the artwork.[19] Genetic or evolutionary art makes use ofgenetic algorithmsto develop images iteratively, selecting at each "generation" according to a rule defined by the artist.[20][21] Algorithmic art is not only produced by computers. Wendy Chun explains:[22] Software is unique in its status as metaphor for metaphor itself. As A universal imitator/machine, it encapsulates a logic of general substitutability; a logic of ordering and creative, animating disordering. Joseph Weizenbaum has argued that computers have become metaphors for "effective procedures," that is, for anything that can be solved in a prescribed number of steps, such as gene expression and clerical work.[22] The American artist,Jack Ox, has used algorithms to produce paintings that arevisualizations of musicwithout using a computer. Two examples arevisual performancesof extant scores, such asAnton Bruckner'sEighth Symphony[23][24]andKurt Schwitters'Ursonate.[25][26]Later, she and her collaborator, Dave Britton, created the 21st Century Virtual Color Organ that does use computer coding and algorithms.[27] Since 1996 there have beenambigram generatorsthat auto generate ambigrams.[28][29][30] In modern times, humans have witnessed a drastic change in their lives. One such glaring difference is the need for more comfortable andaestheticenvironment. People have started to show particular interest towards decorating their environment with paintings. 
While it is not uncommon to see renowned oil paintings in certain environments, it is still unusual to find such paintings in an ordinary family house. Oil paintings can be costly, even if they are copies. Thus, many people prefer simulating such paintings.[31] With the emergence of artificial intelligence, such simulations have become possible. Artificial intelligence image processors use an algorithm and machine learning to produce the images for the user.[31] Recent studies and experiments have shown that artificial intelligence, using algorithms and machine learning, is able to replicate oil paintings; the resulting images look relatively accurate and close to the original.[31] Such improvements in algorithmic art and artificial intelligence could make it possible for many people to own reproductions of renowned paintings at little to no cost. This could prove revolutionary for various environments, especially given the rapid rise in demand for improved aesthetics. Using the algorithm, the simulator can create images with an accuracy of 48.13% to 64.21%, a deviation that would be imperceptible to most humans. However, the simulations are not perfect and are prone to error: they can sometimes produce inaccurate or extraneous images, and at other times they can malfunction entirely and produce a physically impossible image. With the emergence of newer technologies and finer algorithms, researchers are confident that such simulations will improve substantially.[31] Other contemporary approaches to art have focused heavily on making art more interactive. Based on environmental or audience feedback, the algorithm is fine-tuned to create a more appropriate and appealing output. Such approaches have been criticized, however, because the artist is not responsible for every detail in the painting; rather, the artist facilitates the interaction between the algorithm and its environment and adjusts it toward the desired outcome.[32]
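To make the earlier description of escape-time fractal generation concrete, here is a minimal, self-contained sketch in Python. It is written purely for illustration and is not code from any artist or tool named above: it renders a coarse ASCII view of the Mandelbrot set, showing how every mark in the image is produced by repeatedly applying a formula rather than by hand, and how parameters (grid size, iteration limit) play the role of the input criteria the artist chooses.

```python
# Minimal escape-time fractal sketch (Mandelbrot set), illustrating how an
# algorithm, rather than a hand gesture, determines every point of an image.
# Parameter names and the ASCII output format are illustrative only.

def escape_time(c: complex, max_iter: int = 60) -> int:
    """Return how many iterations of z -> z*z + c it takes to escape |z| > 2."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n
    return max_iter

def render(width: int = 78, height: int = 24) -> str:
    """Map each character cell to a point of the complex plane and shade it."""
    shades = " .:-=+*#%@"
    rows = []
    for j in range(height):
        row = []
        for i in range(width):
            c = complex(-2.2 + 3.0 * i / width, -1.2 + 2.4 * j / height)
            n = escape_time(c)
            row.append(shades[min(n * len(shades) // 60, len(shades) - 1)])
        rows.append("".join(row))
    return "\n".join(rows)

if __name__ == "__main__":
    print(render())  # the "artwork" exists only as the output of the algorithm
```

Swapping the formula or the shading rule is exactly the kind of parameter choice the text describes, and introducing pseudo-random numbers at any point would add the variability mentioned above.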
https://en.wikipedia.org/wiki/Algorithmic_art
Spectral musicuses theacousticproperties of sound – orsound spectra– as a basis forcomposition.[1] Defined in technical language, spectral music is an acoustic musical practice wherecompositionaldecisions are often informed bysonographicrepresentations andmathematicalanalysis of sound spectra, or by mathematically generated spectra. The spectral approach focuses on manipulating the spectral features, interconnecting them, and transforming them. In this formulation, computer-based sound analysis and representations of audio signals are treated as being analogous to atimbralrepresentation of sound. The (acoustic-composition) spectral approach originated in France in the early 1970s, and techniques were developed, and later refined, primarily atIRCAM, Paris, with theEnsemble l'Itinéraire, by composers such asGérard GriseyandTristan Murail.Hugues Dufourtis commonly credited for introducing the termmusique spectrale(spectral music) in an article published in 1979.[1][2]Murail has described spectral music as anaestheticrather than a style, not so much a set of techniques as an attitude; asJoshua Finebergputs it, a recognition that "music is ultimately sound evolving in time".[3]Julian Andersonindicates that a number of major composers associated with spectralism consider the term inappropriate, misleading, and reductive.[4]The Istanbul Spectral Music Conference of 2003 suggested a redefinition of the term "spectral music" to encompass any music that foregrounds timbre as an important element of structure or language.[5] While spectralism as a historical movement is generally considered to have begun in France and Germany in the 1970s, precursors to the philosophy and techniques of spectralism, as prizing the nature and properties of sound above all else as an organizing principle for music, go back at least to the early twentieth century. Proto-spectral composers includeClaude Debussy,Edgard Varèse,Giacinto Scelsi,Olivier Messiaen,György Ligeti,Iannis Xenakis,La Monte Young, andKarlheinz Stockhausen.[6][7][8]Other composers who anticipated spectralist ideas in their theoretical writings includeHarry Partch,Henry Cowell, andPaul Hindemith.[9]Also crucial to the origins of spectralism was the development of techniques of sound analysis and synthesis incomputer musicand acoustics during this period, especially focused around IRCAM in France and Darmstadt in Germany.[10] Julian Anderson considers Danish composerPer Nørgård'sVoyage into the Golden Screenfor chamber orchestra (1968) to be the first "properly instrumental piece of spectral composition".[11]Spectralism as a recognizable and unified movement, however, arose during the early 1970s, in part as a reaction against and alternative to the primarily pitch focused aesthetics of theserialismand post-serialism which was ascendant at the time.[a]Early spectral composers were centered in the cities of Paris and Cologne and associated with the composers of theEnsemble l'Itinéraireand the Feedback group, respectively. In Paris,Gérard GriseyandTristan Murailwere the most prominent pioneers of spectral techniques; Grisey'sEspaces Acoustiquesand Murail'sGondwanawere two influential works of this period. 
Their early work emphasized the use of the overtone series, techniques ofspectral analysisand ring and frequency modulation, and slowly unfolding processes to create music which gave a new attention to timbre and texture.[12] The German Feedback group, includingJohannes Fritsch,Mesías Maiguashca,Péter Eötvös,Claude Vivier, andClarence Barlow, was primarily associated with students and disciples of Karlheinz Stockhausen, and began to pioneer spectral techniques around the same time. Their work generally placed more emphasis on linear and melodic writing within a spectral context as compared to that of their French contemporaries, though with significant variations.[13]Another important group of early spectral composers was centered in Romania, where a unique form of spectralism arose, in part inspired by Romanian folk music.[14]This folk tradition, as collected byBéla Bartók(1904–1918), with its acoustic scales derived directly from resonance and natural wind instruments of thealphornfamily, like thebuciumeandtulnice, as well as thecimpoibagpipe, inspired several spectral composers, includingCorneliu Cezar,Anatol Vieru,Aurel Stroe,Ștefan Niculescu,Horațiu Rădulescu,Iancu Dumitrescu, andOctavian Nemescu.[15] Towards the end of the twentieth century, techniques associated with spectralist composers began to be adopted more widely and the original pioneers of spectralism began to integrate their techniques more fully with those of other traditions. For example, in their works from the later 1980s and into the 1990s, both Grisey and Murail began to shift their emphasis away from the more gradual and regular process which characterized their early work to include more sudden dramatic contrasts as more well linear and contrapuntal writing.[16]Likewise, spectral techniques were adopted by composers from a wider variety of traditions and countries, including the UK (with composers likeJulian AndersonandJonathan Harvey), Finland (composers likeMagnus LindbergandKaija Saariaho), and the United States.[17]A further development is the emergence of "hyper-spectralism"[clarification needed]in the works of Iancu Dumitrescu and Ana-Maria Avram.[18][19] The spectral adventure has allowed the renovation, without imitation of the foundations of occidental music, because it is not a closed technique but an attitude.—Gérard Grisey[20] The "panoply of methods and techniques" used are secondary, being only "the means of achieving a sonic end".[3] Spectral music focuses on the phenomenon andacousticsof sound as well as its potential semantic qualities. Pitch material and intervallic content are often derived from theharmonic series, including the use ofmicrotones. Spectrographic analysis of acoustic sources is used as inspiration fororchestration. The reconstruction of electroacoustic source materials by using acoustic instruments is another common approach to spectral orchestration. In "additive instrumental synthesis", instruments are assigned to play discrete components of a sound, such as an individualpartial.Amplitude modulation,frequency modulation,difference tones, harmonic fusion, residue pitch,Shepard-tonephenomena, and other psychoacoustic concepts are applied to music materials.[21] Formal concepts important in spectral music includeprocessand the stretching of time.[further explanation needed]Though development is "significantly different from those ofminimalist music" in that all musical parameters may be affected, it similarly draws attention to very subtle aspects of the music. 
These processes most often achieve a smooth transition throughinterpolation.[22]Any or all of these techniques may be operating in a particular work, though this list is not exhaustive. TheRomanianspectral tradition focuses more on the study of how sound itself behaves in a "live" environment. Sound work is not restricted to harmonic spectra but includes transitory aspects oftimbreand non-harmonicmusical components(e.g.,rhythm,tempo,dynamics). Furthermore, sound is treatedphenomenologicallyas a dynamic presence to be encountered in listening (rather than as an object of scientific study). This approach results in a transformational musical language in which continuous change of the material displaces the central role accorded to structure in spectralism of the "French school".[23] Spectral music was initially associated with composers of the FrenchEnsemble l'Itinéraire, includingHugues Dufourt,Gérard Grisey,Tristan Murail, andMichaël Lévinas. For these composers, musical sound (or natural sound) is taken as a model for composition, leading to an interest in the exploration of the interior of sounds.[24]Giacinto Scelsiwas an important influence on Grisey, Murail, and Lévinas; his approach with exploring a single sound in his works and a "smooth" conception of time (such as in hisQuattro pezzi su una nota sola) greatly influenced these composers to include new instrumental techniques and variations of timbre in their works.[25] Other spectral music composers include those from the German Feedback group, principallyJohannes Fritsch,Mesías Maiguashca,Péter Eötvös,Claude Vivier, andClarence Barlow. Features of spectralism are also seen independently in the contemporary work of Romanian composersCorneliu Cezar,Ștefan Niculescu,Horațiu Rădulescu, andIancu Dumitrescu.[1] Independent of spectral music developments in Europe, American composerJames Tenney's output included more than fifty significant works that feature spectralist traits.[26]His influences came from encounters with a scientific culture which pervaded during the postwar era, and a "quasi-empiricist musical aesthetic" fromJohn Cage.[27]His works, although having similarities with European spectral music, are distinctive in some ways, for example in his interest in "post-Cageian indeterminacy". The spectralist movement inspired more recent composers such asJulian Anderson,Ana-Maria Avram,Joshua Fineberg,Georg Friedrich Haas,Jonathan Harvey,Fabien Lévy,Magnus Lindberg, andKaija Saariaho. Some of the "post-spectralist" French composers includeÉric Tanguy[fr],Philippe Hurel,François Paris,Philippe Leroux, andThierry Blondeau.[28] In the United States, composers such asAlvin Lucier,La Monte Young,Terry Riley,Maryanne Amacher,Phill Niblock, andGlenn Brancarelate some of the influences of spectral music into their own work. Tenney's work has also influenced a number of composers such asLarry PolanskyandJohn Luther Adams.[29] In the US, jazz saxophonist and composerSteve Lehman, and in Europe, French composerFrédéric Maurin[fr;de], have both introduced spectral techniques into the domain of jazz.[30][31] Characteristic spectral pieces include: Other pieces that utilise spectral ideas or techniques include:[11][27][32] Post-spectral pieces include:[33][34] StriaandMortuos Plango, Vivos Vocoare examples ofelectronic musicthat embrace spectral techniques.[35][36]
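Because spectral pitch material is so often derived from the harmonic series, a short calculation helps make the practice concrete. The sketch below is illustrative only and corresponds to no particular piece; assuming a fundamental of 41.2 Hz, it lists the first sixteen partials and how far each lies, in cents, from the nearest equal-tempered pitch, which is the kind of microtonal deviation spectral scores ask instruments to realise.

```python
# Sketch: deriving pitch material from the harmonic series. For a hypothetical
# fundamental of 41.2 Hz, print each partial's frequency and its deviation in
# cents from the nearest equal-tempered (12-TET) pitch. Illustrative only.
import math

A4 = 440.0
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def nearest_tempered(freq: float):
    """Return (note name, octave, deviation in cents) for the nearest 12-TET pitch."""
    semitones = 12 * math.log2(freq / A4)    # distance from A4 in semitones
    nearest = round(semitones)
    cents = 100 * (semitones - nearest)      # microtonal deviation
    midi = 69 + nearest
    return NOTE_NAMES[midi % 12], midi // 12 - 1, cents

fundamental = 41.2
for k in range(1, 17):                       # first 16 partials
    f = k * fundamental
    name, octave, cents = nearest_tempered(f)
    print(f"partial {k:2d}: {f:7.1f} Hz  ~ {name}{octave} {cents:+5.1f} cents")
```

Partials such as the 7th and 11th come out several tens of cents away from any tempered pitch, which is why microtones appear so naturally in spectral writing.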
https://en.wikipedia.org/wiki/Spectral_music
A graph reduction machine is a special-purpose computer built to perform combinator calculations by graph reduction. Examples include the SKIM ("S-K-I machine") computer, built at the University of Cambridge Computer Laboratory,[1] the multiprocessor GRIP ("Graph Reduction In Parallel") computer, built at University College London,[2][3] and the Reduceron, which was implemented on an FPGA with the single purpose of executing Haskell.[4][5]
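The combinator calculations that such machines execute in hardware can be sketched in a few lines of software. The following is an illustrative normal-order reducer for the S, K and I combinators over a tree of application nodes; it is not the instruction set of SKIM, GRIP or the Reduceron, only the kind of reduction those machines implement.

```python
# Minimal sketch of combinator reduction (the computation a graph reduction
# machine performs in hardware). Terms are either the strings "S", "K", "I",
# a variable name, or nested ("app", f, x) tuples. Illustrative only.

def rebuild(head, rest):
    for a in rest:
        head = ("app", head, a)
    return head

def step(term):
    """Perform one leftmost (normal-order) reduction step, or return None."""
    if not isinstance(term, tuple):
        return None
    spine, args = term, []
    while isinstance(spine, tuple):          # unwind the application spine
        spine, arg = spine[1], spine[2]
        args.append(arg)
    args.reverse()
    if spine == "I" and len(args) >= 1:      # I x      -> x
        return rebuild(args[0], args[1:])
    if spine == "K" and len(args) >= 2:      # K x y    -> x
        return rebuild(args[0], args[2:])
    if spine == "S" and len(args) >= 3:      # S f g x  -> (f x) (g x)
        f, g, x = args[0], args[1], args[2]
        return rebuild(("app", ("app", f, x), ("app", g, x)), args[3:])
    return None

def normalize(term, limit=100):
    for _ in range(limit):
        nxt = step(term)
        if nxt is None:
            return term
        term = nxt
    return term

# S K K x reduces to x, so S K K behaves like the identity combinator:
skk_x = ("app", ("app", ("app", "S", "K"), "K"), "x")
print(normalize(skk_x))   # -> 'x'
```

A hardware graph reduction machine performs essentially this rewriting, but on shared graph nodes rather than copied tuples, so a repeated subterm is reduced only once.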
https://en.wikipedia.org/wiki/Graph_reduction_machine
In mathematics, specifically in group theory, an elementary abelian group is an abelian group in which all elements other than the identity have the same order. This common order must be a prime number, and the elementary abelian groups in which the common order is p are a particular kind of p-group.[1][2] A group for which p = 2 (that is, an elementary abelian 2-group) is sometimes called a Boolean group.[3] Every elementary abelian p-group is a vector space over the prime field with p elements, and conversely every such vector space is an elementary abelian group. By the classification of finitely generated abelian groups, or by the fact that every vector space has a basis, every finite elementary abelian group must be of the form (Z/pZ)^n for n a non-negative integer (sometimes called the group's rank). Here, Z/pZ denotes the cyclic group of order p (or equivalently the integers mod p), and the superscript notation means the n-fold direct product of groups.[2] In general, a (possibly infinite) elementary abelian p-group is a direct sum of cyclic groups of order p.[4] (Note that in the finite case the direct product and direct sum coincide, but this is not so in the infinite case.) Suppose V ≅ (Z/pZ)^n is a finite elementary abelian group. Since Z/pZ ≅ F_p, the finite field of p elements, we have V = (Z/pZ)^n ≅ F_p^n, hence V can be considered as an n-dimensional vector space over the field F_p. Note that an elementary abelian group does not in general have a distinguished basis: a choice of isomorphism V → (Z/pZ)^n corresponds to a choice of basis. To the observant reader, it may appear that F_p^n has more structure than the group V, in particular that it has scalar multiplication in addition to (vector/group) addition. However, V as an abelian group has a unique Z-module structure, where the action of Z corresponds to repeated addition, and this Z-module structure is consistent with the F_p scalar multiplication. That is, c·g = g + g + ... + g (c times), where c in F_p (considered as an integer with 0 ≤ c < p) gives V a natural F_p-module structure. As a finite-dimensional vector space, V has a basis {e_1, ..., e_n} as described in the examples; if we take {v_1, ..., v_n} to be any n elements of V, then by linear algebra the mapping T(e_i) = v_i extends uniquely to a linear transformation of V. Each such T can be considered as a group homomorphism from V to V (an endomorphism), and likewise any endomorphism of V can be considered as a linear transformation of V as a vector space. If we restrict our attention to automorphisms of V, we have Aut(V) = {T : V → V | ker T = 0} = GL_n(F_p), the general linear group of n × n invertible matrices over F_p. The automorphism group GL(V) = GL_n(F_p) acts transitively on V \ {0} (as is true for any vector space). This in fact characterizes elementary abelian groups among all finite groups: if G is a finite group with identity e such that Aut(G) acts transitively on G \ {e}, then G is elementary abelian. (Proof: if Aut(G) acts transitively on G \ {e}, then all nonidentity elements of G have the same (necessarily prime) order, so G is a p-group. It follows that G has a nontrivial center, which is necessarily invariant under all automorphisms, and thus equals all of G.) It can also be of interest to go beyond prime order components to prime-power order. Consider an elementary abelian group G to be of type (p, p, ..., p) for some prime p. A homocyclic group[5] (of rank n) is an abelian group of type (m, m, ..., m), i.e. the direct product of n isomorphic cyclic groups of order m, of which groups of type (p^k, p^k, ..., p^k) are a special case.
Theextra special groupsare extensions of elementary abelian groups by a cyclic group of orderp,and are analogous to theHeisenberg group.
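These properties are easy to verify computationally in a small case. The following brute-force sketch is illustrative only (it uses no group-theory library): it builds (Z/3Z)^2, checks that every non-identity element has order 3, and checks that the invertible 2 × 2 matrices over F_3 move a fixed non-zero vector to every other non-zero vector, i.e. that GL_2(F_3) acts transitively on V \ {0}.

```python
# Brute-force check of two facts stated above, for p = 3 and n = 2:
#   (1) every non-identity element of (Z/pZ)^n has order p;
#   (2) GL_n(F_p) acts transitively on the non-zero vectors.
# Sizes are kept tiny so everything can be enumerated. Illustrative only.
from itertools import product

p, n = 3, 2
vectors = list(product(range(p), repeat=n))

def add(u, v):
    return tuple((a + b) % p for a, b in zip(u, v))

def order(v):
    acc, k = v, 1
    while any(acc):                 # repeat addition until the identity is reached
        acc, k = add(acc, v), k + 1
    return k

assert all(order(v) == p for v in vectors if any(v))         # fact (1)

def apply(m, v):                    # matrix acting on a column vector over F_p
    return tuple(sum(m[i][j] * v[j] for j in range(n)) % p for i in range(n))

matrices = [((a, b), (c, d)) for a, b, c, d in product(range(p), repeat=4)
            if (a * d - b * c) % p != 0]                      # invertible over F_p

e1 = (1,) + (0,) * (n - 1)
reachable = {apply(m, e1) for m in matrices}
assert reachable == {v for v in vectors if any(v)}            # fact (2)
print("checked:", len(matrices), "matrices in GL_2(F_3)")     # 48 = (9-1)(9-3)
```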
https://en.wikipedia.org/wiki/Elementary_abelian_group
Computer numerical control(CNC) is theautomated controlofmachine toolsby a computer. It is an evolution ofnumerical control(NC), where machine tools are directly managed bydata storage mediasuch aspunched cardsorpunched tape. Because CNC allows for easier programming, modification, and real-time adjustments, it has gradually replaced NC as computing costs declined.[1][2][3] A CNC machine is a motorized maneuverable tool and often a motorized maneuverable platform, which are both controlled by a computer, according to specific input instructions. Instructions are delivered to a CNC machine in the form of a sequential program of machine control instructions such asG-codeand M-code, and then executed. The program can be written by a person or, far more often, generated by graphicalcomputer-aided design(CAD) orcomputer-aided manufacturing(CAM) software. In the case of 3D printers, the part to be printed is "sliced" before the instructions (or the program) are generated. 3D printers also use G-Code.[4] CNC offers greatly increased productivity over non-computerized machining for repetitive production, where the machine must be manually controlled (e.g. using devices such as hand wheels or levers) or mechanically controlled by pre-fabricated pattern guides (seepantograph mill). However, these advantages come at significant cost in terms of both capital expenditure and job setup time. For some prototyping and smallbatchjobs, a good machine operator can have parts finished to a high standard whilst a CNC workflow is still in setup. In modern CNC systems, the design of a mechanical part and its manufacturing program are highly automated. The part's mechanical dimensions are defined using CAD software and then translated into manufacturing directives by CAM software. The resulting directives are transformed (by "post processor" software) into the specific commands necessary for a particular machine to produce the component and then are loaded into the CNC machine. Since any particular component might require the use of several different tools –drills,saws,touch probesetc. – modern machines often combine multiple tools into a single "cell". In other installations, several different machines are used with an external controller and human or robotic operators that move the component from machine to machine. In either case, the series of steps needed to produce any part is highly automated and produces a part that meets every specification in the original CAD drawing, where each specification includes a tolerance. Motion is controlling multiple axes, normally at least two (X and Y),[5]and a tool spindle that moves in the Z (depth). The position of the tool is driven by direct-drivestepper motorsorservo motorsto provide highly accurate movements, or in older designs, motors through a series of step-down gears.Open-loop controlworks as long as the forces are kept small enough and speeds are not too great. On commercialmetalworkingmachines, closed-loop controls are standard and required to provide the accuracy, speed, andrepeatabilitydemanded. As the controller hardware evolved, the mills themselves also evolved. One change has been to enclose the entire mechanism in a large box as a safety measure (with safety glass in the doors to permit the operator to monitor the machine's function), often with additional safety interlocks to ensure the operator is far enough from the working piece for safe operation. Most new CNC systems built today are 100% electronically controlled. 
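The CAD, CAM and post-processor chain described above ultimately emits a plain text program of G- and M-codes. As a rough illustration of what such generated output looks like, the sketch below turns a list of hole positions into a tiny drilling program. The codes used (G00/G01 moves, G21/G90 modes, M03/M05 spindle control, M30 program end) are common ones, but real post processors emit controller-specific dialects, so this is a schematic example rather than output for any particular machine.

```python
# Sketch of the "post processor" idea: turn a list of hole positions into a
# minimal G-code drilling program. Schematic only; real post processors add
# tool changes, coolant codes, canned cycles, and controller-specific syntax.

def drill_program(holes, safe_z=5.0, depth=-3.0, feed=100):
    lines = [
        "%",
        "O0001 (DRILL EXAMPLE)",
        "G21 G90",                     # millimetres, absolute coordinates
        "M03 S1200",                   # spindle on, 1200 rpm
    ]
    for x, y in holes:
        lines.append(f"G00 X{x:.3f} Y{y:.3f} Z{safe_z:.3f}")   # rapid to above the hole
        lines.append(f"G01 Z{depth:.3f} F{feed}")              # feed down to depth
        lines.append(f"G00 Z{safe_z:.3f}")                     # retract
    lines += ["M05", "M30", "%"]       # spindle off, end of program
    return "\n".join(lines)

print(drill_program([(0, 0), (10, 0), (10, 10)]))
```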
CNC-like systems are used for any process that can be described as movements and operations. These include laser cutting, welding, friction stir welding, ultrasonic welding, flame and plasma cutting, bending, spinning, hole-punching, pinning, gluing, fabric cutting, sewing, tape and fiber placement, routing, picking and placing, and sawing. The first NC machines were built in the 1940s and 1950s, based on existing tools that were modified with motors that moved the tool or part to follow points fed into the system on punched tape.[4] These early servomechanisms were rapidly augmented with analog and digital computers, creating the modern CNC machine tools that have revolutionized machining processes. CNC is now used extensively throughout manufacturing: beyond traditional milling and turning, many other machines and items of equipment are fitted with corresponding CNC controls, which has greatly improved manufacturing quality and efficiency. The latest trend in CNC[6] is to combine traditional subtractive manufacturing with additive manufacturing (3D printing) to create a new manufacturing method[7] - hybrid additive subtractive manufacturing (HASM).[8] Another trend is the combination of AI, using a large number of sensors, with the goal of achieving flexible manufacturing.[9] Electrical discharge machining (EDM) can be broadly divided into "sinker" type processes, where the electrode is the positive shape of the resulting feature in the part and the electric discharge erodes this feature into the part, resulting in the negative shape, and "wire" type processes. Sinker processes are rather slow compared to conventional machining, averaging on the order of 100 mm³/min,[10] as compared to 8×10⁶ mm³/min for conventional machining, but they can generate features that conventional machining cannot. Wire EDM operates by using a thin conductive wire, typically brass, as the electrode, and discharging as it runs past the part being machined. This is useful for complex profiles with inside 90-degree corners that would be challenging to machine with conventional methods. Many other tools have CNC variants, including:
They blindly follow the machining code provided and it is up to an operator to detect if a crash is either occurring or about to occur, and for the operator to manually abort the active process. Machines equipped with load sensors can stop axis or spindle movement in response to an overload condition, but this does not prevent a crash from occurring. It may only limit the damage resulting from the crash. Some crashes may not ever overload any axis or spindle drives. If the drive system is weaker than the machine's structural integrity, then the drive system simply pushes against the obstruction, and the drive motors "slip in place". The machine tool may not detect the collision or the slipping, so for example the tool should now be at 210mm on the X-axis, but is, in fact, at 32mm where it hit the obstruction and kept slipping. All of the next tool motions will be off by −178mm on the X-axis, and all future motions are now invalid, which may result in further collisions with clamps, vises, or the machine itself. This is common in open-loop stepper systems but is not possible in closed-loop systems unless mechanical slippage between the motor and drive mechanism has occurred. Instead, in a closed-loop system, the machine will continue to attempt to move against the load until either the drive motor goes into an overload condition or a servo motor fails to get to the desired position. Collision detection and avoidance are possible, through the use of absolute position sensors (optical encoder strips or disks) to verify that motion occurred, or torque sensors or power-draw sensors on the drive system to detect abnormal strain when the machine should just be moving and not cutting, but these are not a common component of most hobby CNC tools. Instead, most hobby CNC tools simply rely on the assumed accuracy ofstepper motorsthat rotate a specific number of degrees in response to magnetic field changes. It is often assumed the stepper is perfectly accurate and never missteps, so tool position monitoring simply involves counting the number of pulses sent to the stepper over time. An alternate means of stepper position monitoring is usually not available, so crash or slip detection is not possible. Commercial CNC metalworking machines use closed-loop feedback controls for axis movement. In a closed-loop system, the controller monitors the actual position of each axis with an absolute orincremental encoder. Proper control programming will reduce the possibility of a crash, but it is still up to the operator and programmer to ensure that the machine is operated safely. However, during the 2000s and 2010s, the software for machining simulation has been maturing rapidly, and it is no longer uncommon for the entire machine tool envelope (including all axes, spindles, chucks, turrets, tool holders, tailstocks, fixtures, clamps, and stock) to be modeled accurately with3D solid models, which allows the simulation software to predict fairly accurately whether a cycle will involve a crash. Although such simulation is not new, its accuracy and market penetration are changing considerably because of computing advancements.[12] Within the numerical systems of CNC programming, the code generator can assume that the controlled mechanism is always perfectly accurate, or that precision tolerances are identical for all cutting or movement directions. While the common use ofball screwson most modern NC machines eliminates the vast majority of backlash, it still must be taken into account. 
CNC tools with a large amount of mechanicalbacklashcan still be highly precise if the drive or cutting mechanism is only driven to apply cutting force from one direction, and all driving systems are pressed tightly together in that one cutting direction. However, a CNC device with high backlash and a dull cutting tool can lead to cutter chatter and possible workpiece gouging. The backlash also affects the precision of some operations involving axis movement reversals during cutting, such as the milling of a circle, where axis motion is sinusoidal. However, this can be compensated for if the amount of backlash is precisely known by linear encoders or manual measurement. The high backlash mechanism itself is not necessarily relied on to be repeatedly precise for the cutting process, but some other reference object or precision surface may be used to zero the mechanism, by tightly applying pressure against the reference and setting that as the zero references for all following CNC-encoded motions. This is similar to the manual machine tool method of clamping amicrometeronto a reference beam and adjusting theVernierdial to zero using that object as the reference.[citation needed] In numerical control systems, the position of the tool is defined by a set of instructions called thepart program. Positioning control is handled using either an open-loop or a closed-loop system. In an open-loop system, communication takes place in one direction only: from the controller to the motor. In a closed-loop system, feedback is provided to the controller so that it can correct for errors in position, velocity, and acceleration, which can arise due to variations in load or temperature. Open-loop systems are generally cheaper but less accurate. Stepper motors can be used in both types of systems, while servo motors can only be used in closed systems. The G & M code positions are all based on a three-dimensionalCartesian coordinate system. This system is a typical plane often seen in mathematics when graphing. This system is required to map out the machine tool paths and any other kind of actions that need to happen in a specific coordinate. Absolute coordinates are what are generally used more commonly for machines and represent the (0,0,0) point on the plane. This point is set on the stock material to give a starting point or "home position" before starting the actual machining. G-codesare used to command specific movements of the machine, such as machine moves or drilling functions. The majority of G-code programs start with a percent (%) symbol on the first line, then followed by an "O" with a numerical name for the program (i.e. "O0001") on the second line, then another percent (%) symbol on the last line of the program. The format for a G-code is the letter G followed by two to three digits; for example G01. G-codes differ slightly between a mill and lathe application, for example: [Code Miscellaneous Functions (M-Code)][citation needed]. M-codes are miscellaneous machine commands that do not command axis motion. The format for an M-code is the letter M followed by two to three digits; for example: Having the correct speeds and feeds in the program provides for a more efficient and smoother product run. Incorrect speeds and feeds will cause damage to the tool, machine spindle, and even the product. The quickest and simplest way to find these numbers would be to use a calculator that can be found online. A formula can also be used to calculate the proper speeds and feeds for a material. 
These values can be found online or inMachinery's Handbook.
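The speed and feed formulas referred to above are simple enough to compute directly. The sketch below uses the common shop formulas for spindle speed and feed rate in inch units; the numeric inputs are placeholders, and real values should be taken from tooling data or Machinery's Handbook, as the text says.

```python
# Common shop formulas for speeds and feeds (inch units), as a sketch.
#   surface_speed: cutting speed in surface feet per minute (SFM)
#   diameter:      cutter diameter in inches
#   chip_load:     feed per tooth in inches
#   flutes:        number of cutting edges
# The values below are placeholders, not machining recommendations.
import math

def spindle_rpm(surface_speed: float, diameter: float) -> float:
    return (surface_speed * 12) / (math.pi * diameter)

def feed_rate(rpm: float, chip_load: float, flutes: int) -> float:
    return rpm * chip_load * flutes          # inches per minute

rpm = spindle_rpm(surface_speed=300, diameter=0.5)   # e.g. a 1/2" end mill
print(f"spindle speed: {rpm:.0f} rpm")
print(f"feed rate:     {feed_rate(rpm, chip_load=0.002, flutes=3):.1f} in/min")
```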
https://en.wikipedia.org/wiki/Numerical_control
Incryptographyandcomputer security, alength extension attackis a type ofattackwhere an attacker can useHash(message1) and the length ofmessage1to calculateHash(message1‖message2) for an attacker-controlledmessage2, without needing to know the content ofmessage1. This is problematic when thehashis used as amessage authentication codewith constructionHash(secret‖message),[1]andmessageand the length ofsecretis known, because an attacker can include extra information at the end of the message and produce a valid hash without knowing the secret. Algorithms likeMD5,SHA-1and most ofSHA-2that are based on theMerkle–Damgård constructionare susceptible to this kind of attack.[1][2][3]Truncated versions of SHA-2, including SHA-384 andSHA-512/256are not susceptible,[4]nor is theSHA-3algorithm.[5]HMACalso uses a different construction and so is not vulnerable to length extension attacks.[6]A secret suffix MAC, which is calculated asHash(message‖secret), isn't vulnerable to a length extension attack, but is vulnerable to another attack based on a hash collision.[7] The vulnerable hashing functions work by taking the input message, and using it to transform an internal state. After all of the input has been processed, the hash digest is generated by outputting the internal state of the function. It is possible to reconstruct the internal state from the hash digest, which can then be used to process the new data. In this way, one may extend the message and compute the hash that is a valid signature for the new message. A server for delivering waffles of a specified type to a specific user at a location could be implemented to handle requests of the given format: The server would perform the request given (to deliver ten waffles of type eggo to the given location for user "1") only if the signature is valid for the user. The signature used here is aMAC, signed with a key not known to the attacker.[note 1] It is possible for an attacker to modify the request in this example by switching the requested waffle from "eggo" to "liege." This can be done by taking advantage of a flexibility in the message format if duplicate content in the query string gives preference to the latter value. This flexibility does not indicate an exploit in the message format, because the message format was never designed to be cryptographically secure in the first place, without the signature algorithm to help it. In order to sign this new message, typically the attacker would need to know the key the message was signed with, and generate a new signature by generating a new MAC. However, with a length extension attack, it is possible to feed the hash (the signature given above) into the state of the hashing function, and continue where the original request had left off, so long as the length of the original request is known. In this request, the original key's length was 14 bytes, which could be determined by trying forged requests with various assumed lengths, and checking which length results in a request that the server accepts as valid. The message as fed into the hashing function is oftenpadded, as many algorithms can only work on input messages whose lengths are a multiple of some given size. The content of this padding is always specified by the hash function used. The attacker must include all of these padding bits in their forged message before the internal states of their message and the original will line up. 
Thus, the attacker constructs a slightly different message using these padding rules: This message includes all of the padding that was appended to the original message inside of the hash function before the attacker's payload (in this case, a 0x80 byte followed by a number of 0x00 bytes and a message length, 0x228 = 552 = (14+55)*8, which is the length of the key plus the original message in bits, appended at the end). The attacker knows that the state behind the hashed key/message pair for the original message is identical to that of the new message up to the final "&". The attacker also knows the hash digest at this point, which means they know the internal state of the hashing function at that point. It is then trivial to initialize a hashing algorithm at that point, input the last few characters, and generate a new digest which can sign the new message without the original key. By combining the new signature and new data into a new request, the server will see the forged request as valid, because the signature is the same as it would have been if the password were known.
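The construction just described can be sketched in code. The following is a minimal, illustrative Python sketch: it computes the glue padding that a SHA-1/SHA-256-style Merkle–Damgård hash would have appended to key‖message, and assembles the forged message the attacker submits. The helper names (md_padding, forged_message) and the example query string are hypothetical, not the article's exact request, and producing the forged digest itself additionally needs a hash implementation whose internal state can be seeded with the known signature, which the standard hashlib module does not expose, so that step is only described in comments.

```python
# Sketch of the message-construction half of a length extension attack against a
# SHA-1/SHA-256-style Merkle–Damgård hash (64-byte blocks, 0x80 then zeros, then a
# big-endian 64-bit bit-length; MD5 differs only in using a little-endian length).
import struct

def md_padding(message_len: int, block_size: int = 64) -> bytes:
    """The padding such a hash appends to a message of this byte length."""
    pad = b"\x80"
    pad += b"\x00" * ((block_size - (message_len + 1 + 8)) % block_size)
    pad += struct.pack(">Q", message_len * 8)   # original length in bits, big-endian
    return pad

def forged_message(original: bytes, suffix: bytes, key_len: int) -> bytes:
    """What the attacker submits in place of `original` (the key stays unknown)."""
    glue = md_padding(key_len + len(original))
    return original + glue + suffix

original = b"count=10&user_id=1&waffle=eggo"    # a stand-in for the article's request
evil = forged_message(original, b"&waffle=liege", key_len=14)
print(evil)
# The server hashes key || evil; after processing the glue padding its internal
# state equals the state behind the known signature, so the attacker can resume
# the compression function from that state over b"&waffle=liege" and obtain a
# valid signature for `evil` without ever learning the key.
```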
https://en.wikipedia.org/wiki/Length_extension_attack
Inlinguistics, agrammatical categoryorgrammatical featureis a property of items within thegrammarof alanguage. Within each category there are two or more possible values (sometimes calledgrammemes), which are normally mutually exclusive. Frequently encountered grammatical categories include: Although the use of terms varies from author to author, a distinction should be made between grammatical categories and lexical categories.Lexical categories(consideredsyntactic categories) largely correspond to theparts of speechof traditional grammar, and refer to nouns, adjectives, etc. Aphonologicalmanifestation of a category value (for example, a word ending that marks "number" on a noun) is sometimes called anexponent. Grammatical relationsdefine relationships between words and phrases with certain parts of speech, depending on their position in the syntactic tree. Traditional relations includesubject,object, andindirect object. A givenconstituentof an expression can normally take only one value in each category. For example, a noun ornoun phrasecannot be both singular and plural, since these are both values of the "number" category. It can, however, be both plural and feminine, since these represent different categories (number and gender). Categories may be described and named with regard to the type ofmeaningsthat they are used to express. For example, the category oftenseusually expresses the time of occurrence (e.g. past, present or future). However, purely grammatical features do not always correspond simply or consistently to elements of meaning, and different authors may take significantly different approaches in their terminology and analysis. For example, the meanings associated with the categories of tense,aspectandmoodare often bound up in verbconjugationpatterns that do not have separate grammatical elements corresponding to each of the three categories; seeTense–aspect–mood. Categories may be marked onwordsby means ofinflection. InEnglish, for example, the number of anounis usually marked by leaving the noun uninflected if it is singular, and by adding the suffix-sif it is plural (although some nouns haveirregular plural forms). On other occasions, a category may not be marked overtly on the item to which it pertains, being manifested only through other grammatical features of the sentence, often by way of grammaticalagreement. For example: The bird can sing.The birdscan sing. In the above sentences, the number of the noun is marked by the absence or presence of the ending-s. The sheepisrunning.The sheeparerunning. In the above, the number of the noun is not marked on the noun itself (sheepdoes not inflect according to the regular pattern), but it is reflected in agreement between the noun and verb: singular number triggersis, and plural numberare. The birdissinging.The birdsaresinging. In this case the number is marked overtly on the noun, and is also reflected by verb agreement. However: The sheep can run. In this case the number of the noun (or of the verb) is not manifested at all in thesurface formof the sentence, and thus ambiguity is introduced (at least, when the sentence is viewed in isolation). Exponents of grammatical categories often appear in the same position or "slot" in the word (such asprefix,suffixorenclitic). An example of this is theLatin cases, which are all suffixal:rosa, rosae, rosae, rosam, rosa, rosā("rose", in thenominative,genitive,dative,accusative,vocativeandablative). 
Categories can also pertain to sentence constituents that are larger than a single word (phrases, or sometimesclauses). A phrase often inherits category values from itsheadword; for example, in the above sentences, thenoun phrasethe birdsinherits plural number from the nounbirds. In other cases such values are associated with the way in which the phrase is constructed; for example, in thecoordinatednoun phraseTom and Mary, the phrase has plural number (it would take a plural verb), even though both the nouns from which it is built up are singular. In traditional structural grammar, grammatical categories are semantic distinctions; this is reflected in a morphological or syntactic paradigm. But ingenerative grammar, which sees meaning as separate from grammar, they are categories that define the distribution of syntactic elements.[1]For structuralists such asRoman Jakobsongrammatical categories were lexemes that were based on binary oppositions of "a single feature of meaning that is equally present in all contexts of use". Another way to define a grammatical category is as a category that expresses meanings from a single conceptual domain, contrasts with other such categories, and is expressed through formally similar expressions.[2]Another definition distinguishes grammatical categories from lexical categories, such that the elements in a grammatical category have a common grammatical meaning – that is, they are part of the language's grammatical structure.[3]
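The way a single category value is carried by a constituent and reflected through agreement can be modelled very simply. The sketch below is an illustration, not a formalism taken from the article: it treats "number" as a feature with two values and shows that the copula agrees with the noun's value even when, as with "sheep", the noun itself carries no overt exponent.

```python
# Illustrative sketch: "number" as a grammatical category whose value is chosen
# once per constituent and realised both on the noun (sometimes) and, through
# agreement, on the verb. Not a framework from the article.
NOUNS = {
    ("bird", "singular"): "bird",
    ("bird", "plural"): "birds",
    ("sheep", "singular"): "sheep",   # no overt exponent of number
    ("sheep", "plural"): "sheep",
}
BE = {"singular": "is", "plural": "are"}

def clause(lemma: str, number: str) -> str:
    """A constituent takes exactly one value of the category; the copula agrees."""
    return f"The {NOUNS[(lemma, number)]} {BE[number]} running."

print(clause("bird", "singular"))   # The bird is running.
print(clause("bird", "plural"))     # The birds are running.
print(clause("sheep", "plural"))    # The sheep are running.  (number visible only on the verb)
```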
https://en.wikipedia.org/wiki/Grammatical_category
Lisp Flavored Erlang (LFE) is a functional, concurrent, garbage collected, general-purpose programming language and Lisp dialect built on Core Erlang and the Erlang virtual machine (BEAM). LFE builds on Erlang to provide a Lisp syntax for writing distributed, fault-tolerant, soft real-time, non-stop applications. LFE also extends Erlang to support metaprogramming with Lisp macros and an improved developer experience with a feature-rich read–eval–print loop (REPL).[1] LFE is actively supported on all recent releases of Erlang; the oldest version of Erlang supported is R14. Initial work on LFE began in 2007, when Robert Virding started creating a prototype of Lisp running on Erlang.[2] This work was focused primarily on parsing and exploring what an implementation might look like. No version control system was being used at the time, so tracking exact initial dates is somewhat problematic.[2] Virding announced the first release of LFE on the Erlang Questions mailing list in March 2008.[3] This release of LFE was very limited: it did not handle recursive letrecs, binaries, receive, or try; it also did not support a Lisp shell.[4] Initial development of LFE was done with version R12B-0 of Erlang[5] on a Dell XPS laptop.[4] Robert Virding has stated that there were several reasons why he started the LFE programming language:[2] Like Lisp, LFE is an expression-oriented language. Unlike non-homoiconic programming languages, Lisps make no or little syntactic distinction between expressions and statements: all code and data are written as expressions. LFE brought homoiconicity to the Erlang VM. In LFE, the list data type is written with its elements separated by whitespace, and surrounded by parentheses. For example, (list 1 2 'foo) is a list whose elements are the integers 1 and 2, and the atom foo. These values are implicitly typed: they are respectively two integers and a Lisp-specific data type called a symbolic atom, and need not be declared as such. As seen in the example above, LFE expressions are written as lists, using prefix notation. The first element in the list is the name of a form, i.e., a function, operator, or macro. The remainder of the list are the arguments. The LFE-Erlang operators are used in the same way. The expression evaluates to 42. Unlike functions in Erlang and LFE, arithmetic operators in Lisp are variadic (or n-ary), able to take any number of arguments. LFE has lambda, just like Common Lisp. It also, however, has lambda-match to account for Erlang's pattern-matching abilities in anonymous function calls. This section does not represent a complete comparison between Erlang and LFE, but should give a taste. Erlang: LFE: Erlang: LFE: Or idiomatic functional style: Erlang: LFE: Erlang: LFE: or using a ``cons`` literal instead of the constructor form: Erlang: LFE: Erlang: LFE: or: Calls to Erlang functions take the form (<module>:<function> <arg1> ... <argn>): Using recursion to define the Ackermann function: Composing functions: Message-passing with Erlang's light-weight "processes": Multiple simultaneous HTTP requests:
https://en.wikipedia.org/wiki/LFE_(programming_language)
Sudoku(/suːˈdoʊkuː,-ˈdɒk-,sə-/;Japanese:数独,romanized:sūdoku,lit.'digit-single'; originally calledNumber Place)[1]is alogic-based,[2][3]combinatorial[4]number-placementpuzzle. In classic Sudoku, the objective is to fill a 9 × 9 grid with digits so that each column, each row, and each of the nine 3 × 3 subgrids that compose the grid (also called "boxes", "blocks", or "regions") contains all of the digits from 1 to 9. The puzzle setter provides a partially completed grid, which for awell-posedpuzzle has a single solution. French newspapers featured similar puzzles in the 19th century, and the modern form of the puzzle first appeared in 1979puzzle booksbyDell Magazinesunder the name Number Place.[5]However, the puzzle type only began to gain widespread popularity in 1986 when it was published by the Japanese puzzle companyNikoliunder the name Sudoku, meaning "single number".[6]In newspapers outside of Japan, it first appeared inThe Conway Daily Sun(New Hampshire) in September 2004, and thenThe Times(London) in November 2004, both of which were thanks to the efforts of the Hong Kong judgeWayne Gould, who devised acomputer programto rapidly produce unique puzzles. Number puzzles appeared in newspapers in the late 19th century, when French puzzle setters began experimenting with removing numbers frommagic squares.Le Siècle, a Paris daily, published a partially completed 9×9 magic square with 3×3 subsquares on November 19, 1892.[7]It was not a Sudoku because it contained double-digit numbers and required arithmetic rather than logic to solve, but it shared key characteristics: each row, column, and subsquare added up to the same number. On July 6, 1895,Le Siècle'srival,La France, refined the puzzle so that it was almost a modern Sudoku and named itcarré magique diabolique('diabolical magic square'). It simplified the 9×9 magic square puzzle so that each row, column, andbroken diagonalscontained only the numbers 1–9, but did not mark the subsquares. Although they were unmarked, each 3×3 subsquare did indeed comprise the numbers 1–9, and the additional constraint on the broken diagonals led to only one solution.[8] These weekly puzzles were a feature of French newspapers such asL'Écho de Parisfor about a decade, but disappeared about the time ofWorld War I.[9] The modern Sudoku was most likely designed anonymously byHoward Garns, a 74-year-old retired architect and freelance puzzle constructor fromConnersville, Indiana, and first published in 1979 byDell Magazinesas Number Place (the earliest known examples of modern Sudoku).[1]Garns' name was always present on the list of contributors in issues ofDell Pencil Puzzles and Word Gamesthat included Number Place and was always absent from issues that did not.[10]He died in 1989 before getting a chance to see his creation as a worldwide phenomenon.[10]Whether or not Garns was familiar with any of the French newspapers listed above is unclear. The puzzle was introduced in Japan byMaki Kaji(鍜治 真起,Kaji Maki), president of theNikolipuzzle company, in the paperMonthly Nikolistin April 1984[10]asSūji wa dokushin ni kagiru(数字は独身に限る), which can be translated as "the digits must be single", or as "the digits are limited to one occurrence" (In Japanese,dokushinmeans an "unmarried person"). 
The name was later abbreviated toSudoku(数独), taking only the firstkanjiof compound words to form a shorter version.[10]"Sudoku" is a registered trademark in Japan[11]and the puzzle is generally referred to as Number Place(ナンバープレース,Nanbāpurēsu)or, more informally, a shortening of the two words, Num(ber) Pla(ce)(ナンプレ,Nanpure). In 1986, Nikoli introduced two innovations: the number of givens was restricted to no more than 32, and puzzles became "symmetrical" (meaning the givens were distributed inrotationally symmetric cells). It is now published in mainstream Japanese periodicals, such as theAsahi Shimbun. In 1997, Hong Kong judgeWayne Gouldsaw a partly completed puzzle in a Japanese bookshop. Over six years, he developed a computer program to produce unique puzzles rapidly.[5] The first newspaper outside of Japan to publish a Sudoku puzzle wasThe Conway Daily Sun(New Hampshire), which published a puzzle by Gould in September 2004.[12][13] Gould pitched the idea of publishing Sudoku puzzles to newspapers, offering the puzzles for free in exchange for the newspapers' attributing them to him and linking to his website for solutions and other puzzles. Knowing that British newspapers have a long history of publishingcrosswordsand other puzzles, he promoted Sudoku toThe Timesin Britain, which launched it on November 12, 2004 (calling it Su Doku). The first letter toThe Timesregarding Su Doku was published the following day on November 13 from Ian Payn ofBrentford, complaining that the puzzle had caused him to miss his stop on thetube.[14]Sudoku puzzles rapidly spread to other newspapers as a regular feature.[5][15] The rapid rise of Sudoku in Britain from relative obscurity to a front-page feature in national newspapers attracted commentary in the media and parody (such as whenThe Guardian'sG2section advertised itself as the first newspaper supplement with a Sudoku grid on every page).[16]Recognizing the different psychological appeals of easy and difficult puzzles,The Timesintroduced both, side by side, on June 20, 2005. From July 2005,Channel 4included a daily Sudoku game in theirteletextservice. On August 2, the BBC's program guideRadio Timesfeatured a weekly Super Sudoku with a 16×16 grid. The world's first live TV Sudoku show,Sudoku Live, was apuzzle contestfirst broadcast on July 1, 2005, on the British pay-television channelSky One. It was presented byCarol Vorderman. Nine teams of nine players (with one celebrity in each team) representing geographical regions competed to solve a puzzle. Each player had a hand-held device for entering numbers corresponding to answers for four cells. Phil Kollin ofWinchelsea, England, was the series grand prize winner, taking home over £23,000 over a series of games. The audience at home was in a separate interactive competition, which was won by Hannah Withey ofCheshire. Later in 2005, theBBClaunchedSUDO-Q, agame showthat combined Sudoku with general knowledge. However, it used only 4×4 and 6×6 puzzles. Four seasons were produced before the show ended in 2007. An annualWorld Sudoku Championshipseries has been organized by theWorld Puzzle Federationsince 2006, except in 2020 and 2021 during theCOVID-19 pandemic. In 2006, a Sudoku website published a tribute song by Australian songwriter Peter Levy, but the song download was later removed due to heavy traffic. 
The Japanese Embassy nominated the song for an award, and Levy claimed he was in discussions withSonyin Japan to release the song as a single.[17] Sudoku software is very popular on PCs, websites, and mobile phones. It comes with many distributions ofLinux. The software has also been released on video game consoles, such as theNintendo DS,PlayStation Portable, theGame Boy Advance,Xbox Live Arcade, theNooke-book reader, Kindle Fire tablet, severaliPodmodels, and theiPhone. ManyNokiaphones also had Sudoku. In fact, just two weeks afterApple Inc.debuted the onlineApp Storewithin itsiTunes Storeon July 11, 2008, nearly 30 different Sudoku games were already in it, created by varioussoftware developers, specifically for the iPhone and iPod Touch. Sudoku games also rapidly became available forweb browserusers and for basically all gaming, cellphone, and computer platforms. In June 2008, an Australian drugs-related jury trial costing overA$1 million was aborted when it was discovered that four or five of the twelve jurors had been playing Sudoku instead of listening to the evidence.[18] Although the 9×9 grid with 3×3 regions is by far the most common, many other variations exist. Sample puzzles can be 4×4 grids with 2×2 regions; 5×5 grids withpentominoregions have been published under the name Logi-5; theWorld Puzzle Championshiphas featured a 6×6 grid with 2×3 regions and a 7×7 grid with sixheptominoregions and a disjoint region. Larger grids are also possible, or different irregular shapes (under various names such asSuguru,Tectonic,Jigsaw Sudokuetc.).The Timesoffers a 12×12-grid "Dodeka Sudoku" with 12 regions of 4×3 squares. Dell Magazines regularly publishes 16×16 "Number Place Challenger" puzzles (using the numbers 1–16 or the letters A-P). Nikoli offers 25×25 "Sudoku the Giant" behemoths. A 100×100-grid puzzle dubbed Sudoku-zilla was published in 2010.[19] Under the name "Mini Sudoku", a 6×6 variant with 3×2 regions appears in the American newspaperUSA Todayand elsewhere. The object is the same as that of standard Sudoku, but the puzzle only uses the numbers 1 through 6. A similar form, for younger solvers of puzzles, called "The Junior Sudoku", has appeared in some newspapers, such as some editions ofThe Daily Mail. Another common variant is to add limits on the placement of numbers beyond the usual row, column, and box requirements. Often, the limit takes the form of an extra "dimension"; the most common is to require the numbers in the main diagonals of the grid to also be unique. The aforementioned "Number Place Challenger" puzzles are all of this variant, as are the Sudoku X puzzles inThe Daily Mail, which use 6×6 grids. The killer sudoku variant combines elements of sudoku andkakuro. A killer sudoku puzzle is made up of 'cages', typically depicted by boxes outlined with dashes or colours. The sum of the numbers in a cage is written in the top left corner of the cage, and numbers cannot be repeated in a cage. Puzzles constructed from more than two grids are also common. Five 9×9 grids that overlap at the corner regions in the shape of aquincunxis known in Japan asGattai5 (five merged) Sudoku. InThe Times,The Age, andThe Sydney Morning Herald, this form of puzzle is known as Samurai Sudoku.The Baltimore Sunand theToronto Starpublish a puzzle of this variant (titled High Five) in their Sunday edition. Often, no givens are placed in the overlapping regions. 
Sequential grids, as opposed to overlapping, are also published, with values in specific locations in grids needing to be transferred to others. A tabletop version of Sudoku can be played with a standard 81-card Set deck (see Set game). A three-dimensional Sudoku puzzle was published in The Daily Telegraph in May 2005. The Times also publishes a three-dimensional version under the name Tredoku. Also, a Sudoku version of the Rubik's Cube is named Sudoku Cube. Many other variants have been developed.[20][21][22] Some are different shapes in the arrangement of overlapping 9×9 grids, such as butterfly, windmill, or flower.[23] Others vary the logic for solving the grid. One of these is "Greater Than Sudoku", in which a 3×3 grid of the Sudoku is given with 12 symbols of Greater Than (>) or Less Than (<) on the common line of the two adjacent numbers.[10] Another variant on the logic of the solution is "Clueless Sudoku", in which nine 9×9 Sudoku grids are each placed in a 3×3 array. The center cell in each 3×3 grid of all nine puzzles is left blank and forms a tenth Sudoku puzzle without any cell completed; hence, "clueless".[23] Examples and other variants can be found in the Glossary of Sudoku. This section refers to classic Sudoku, disregarding jigsaw, hyper, and other variants. A completed Sudoku grid is a special type of Latin square with the additional property of no repeated values in any of the nine blocks (or boxes of 3×3 cells).[24] The general problem of solving Sudoku puzzles on n²×n² grids of n×n blocks is known to be NP-complete.[25] Many Sudoku solving algorithms, such as brute-force backtracking and dancing links, can solve most 9×9 puzzles efficiently, but combinatorial explosion occurs as n increases, creating practical limits to the properties of Sudokus that can be constructed, analyzed, and solved. A Sudoku puzzle can be expressed as a graph coloring problem:[26] the aim is to construct a 9-coloring of a particular graph, given a partial 9-coloring. The fewest clues possible for a proper Sudoku is 17,[27] and tens of thousands of distinct Sudoku puzzles have only 17 clues.[28] The number of classic 9×9 Sudoku solution grids is 6,670,903,752,021,072,936,960, or around 6.67×10²¹.[29] The number of essentially different solutions, when symmetries such as rotation, reflection, permutation, and relabelling are taken into account, is much smaller: 5,472,730,538.[30] Unlike the number of complete Sudoku grids, the number of minimal 9×9 Sudoku puzzles is not precisely known. (A minimal puzzle is one in which no clue can be deleted without losing the uniqueness of the solution.) However, statistical techniques combined with a puzzle generator show that approximately 3.10×10³⁷ minimal puzzles and 2.55×10²⁵ nonessentially equivalent minimal puzzles exist, with 0.065% relative error.[31]
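A minimal Python sketch of the brute-force backtracking approach mentioned above (the grid encoding, with 0 standing for an empty cell, and the function names are illustrative assumptions, not a reference implementation):

def candidates(grid, r, c):
    """Digits that can legally be placed at row r, column c of a 9x9 grid."""
    used = set(grid[r]) | {grid[i][c] for i in range(9)}
    br, bc = 3 * (r // 3), 3 * (c // 3)
    used |= {grid[i][j] for i in range(br, br + 3) for j in range(bc, bc + 3)}
    return [d for d in range(1, 10) if d not in used]

def solve(grid):
    """Fill the first empty cell, recursing on each legal digit (backtracking)."""
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for d in candidates(grid, r, c):
                    grid[r][c] = d
                    if solve(grid):
                        return True
                    grid[r][c] = 0      # undo and try the next digit
                return False            # no digit fits here: backtrack
    return True                         # no empty cell left: solved

The dancing links technique mentioned above recasts the same search as an exact-cover problem rather than a cell-by-cell trial.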
https://en.wikipedia.org/wiki/Sudoku
InIslam,taqiyya(Arabic:تقیة,romanized:taqiyyah,lit.'prudence')[1][2]is the practice of dissimulation and secrecy of religious belief and practice, primarily inShia Islam.[1][3][4][5][6] Generally,taqiyyais regarded as the act of maintaining secrecy or mystifying one's beliefs when one's life or property is threatened.[7][8]The practice of concealing one's beliefs has existed since the early days of Islam; early Muslims did so to avoid persecution or violence by non-Muslim governments or individuals.[9][10] The use oftaqiyyahas varied in recent history, especially betweenSunni Muslimsand Shia Muslims. Sunni Muslims gained political supremacy over time and therefore only occasionally found the need to practicetaqiyya. On the other hand, Shia Muslims, as well asSufi Muslimsdevelopedtaqiyyaas a method of self-preservation and protection in hostile environments.[11] A related term iskitmān(lit.'action of covering'or'dissimulation'), which has a more specific meaning of dissimulation by silence or omission.[12][13]This practice is emphasized inShi'ismwhereby adherents are permitted to conceal their beliefs when under threat ofpersecutionor compulsion.[3][14] Taqiyyawas initially practiced under duress by some ofMuhammad's companions.[15]Later, it became important for Sufis, but even more so for Shias, who often experienced persecution as a religious minority.[14][16]In Shia theology,taqiyyais permissible in situations where life or property are at risk and whereby no danger to religion would occur.[14]Taqiyyahas also been politically legitimised inTwelver Shi'ism, to maintain unity among Muslims and fraternity among Shia clerics.[17][18] The termtaqiyyais derived from the Arabictriliteral rootwāw-qāf-yādenoting "caution, fear",[1]"prudence, guarding against (a danger)",[19]"carefulness, wariness".[20]In the sense of "prudence, fear" it can be used synonymously with the termstuqa(n),tuqāt,taqwá, andittiqāʾ, which are derived from the same root.[12]These terms also have other meanings. For example, the term taqwá generally means "piety" (lit.'fear of God') in an Islamic context.[21] A related term iskitmān(Arabic:كتمان), the "action of covering, dissimulation".[12]While the terms taqiyya and kitmān may be used synonymously, kitmān refers specifically to the concealment of one's convictions by silence or omission.[13]Kitman derives from Arabickatama"to conceal, to hide".[22]Ibadisused kitmān to conceal their Muslim beliefs in the face of persecution by their enemies.[23] The technical meaning of the termtaqiyyais thought[by whom?]to be derived from theQuranicreference to religious dissimulation inSura 3:28: Believers should not take disbelievers as guardians instead of the believers—and whoever does so will have nothing to hope for from Allah—unless it is a precaution against their tyranny. And Allah warns you about Himself. And to Allah is the final return. (illā antattaqūminhumtuqāt). The two wordstattaqū("you fear") andtuqāt"in fear" are derived from the same root astaqiyya, and the use oftaqiyyaabout the general principle described in this passage is first recorded in a Qur'anic gloss byMuhammad al-Bukhariin the 9th century.[citation needed] Regarding 3:28,ibn Kathirwrites, "meaning, except those believers who in some areas or times fear for their safety from the disbelievers. In this case, such believers are allowed to show friendship to the disbelievers outwardly, but never inwardly." 
He quotes theCompanion of the ProphetAbu al-Darda, who said "we smile in the face of some people although our hearts curse them," andHasan ibn Ali, who said, "the tuqyah is acceptable till theDay of Resurrection."[24] A similar instance of the Qur'an permitting dissimulation under compulsion is found inSurah An-Nahl16:106[25]Sunni and Shia commentators alike observe that verse 16:106 refers to the case of'Ammar b. Yasir, who was forced to renounce his beliefs under physical duress and torture.[13] The basic principle of taqiyya is agreed upon by scholars, though they tend to restrict it to dealing with non-Muslims and when under compulsion (ikrāh), while Shia jurists also allow it in interactions with Muslims and in all necessary matters (ḍarūriyāt).[26]In Sunni jurisprudence protecting one's belief during extreme or exigent circumstances is calledidtirar(إضطرار), which translates to "being forced" or "being coerced", and this word is not specific to concealing the faith; for example, under the jurisprudence ofidtirarone is allowed to consumeprohibited food(e.g. pork) to avoid starving to death.[27]Additionally, denying one's faith under duress is "only at most permitted and not under all circumstances obligatory".[28] Al-Tabaricomments on sura XVI, verse 106 (Tafsir, Bulak 1323, xxiv, 122): "If any one is compelled and professes unbelief with his tongue, while his heart contradicts him, in order to escape his enemies, no blame falls on him, because God takes his servants as their hearts believe." This verse was recorded afterAmmar Yasirwas forced by the idolaters ofMeccato recant his faith and denounce theIslamic prophetMuhammad. Al-Tabari explains that concealing one's faith is only justified if the person is in mortal danger, and even thenmartyrdomis considered a noble alternative. If threatened, it would be preferable for a Muslim to migrate to a more peaceful place where a person may practice their faith openly, "since God's earth is wide."[28]InHadith, in the Sunni commentary ofSahih al-Bukhari, known as theFath al-Bari, it is stated that:[29] أجمعوا على أن من أكره على الكفر واختار القتل أنه أعظم أجرا عند الله ممن اختار الرخصة ، وأما غير الكفر فإن أكره على أكل الخنزير وشرب الخمر مثلا فالفعل أولى Which translates to: There is a consensus that whomsoever is forced into apostasy and chooses death has a greater reward than a person who takes the license [to deny one's faith under duress], but if a person is being forced to eat pork or drink wine, then they should do that [instead of choosing death]. Al-Ghazaliwrote in hisThe Revival of the Religious Sciences: Safeguarding of a Muslim's life is a mandatory obligation that should be observed; and that lying is permissible when the shedding of a Muslim's blood is at stake. Ibn Sa'd, in his bookal-Tabaqat al-Kubra, narrates on the authority ofIbn Sirin: The Prophet (S) saw 'Ammar Ibn Yasir (ra) crying, so he (S) wiped off his (ra) tears, and said: "The nonbelievers arrested you and immersed you in water until you said such and such (i.e., bad-mouthing the Prophet (S) and praising the pagan gods to escape persecution); if they come back, then say it again." 
Jalal al-Dinal-Suyuti, in his bookal-Ashbah Wa al-Naza'ir, affirms that: It is acceptable (for a Muslim) to eat the meat of a dead animal at a time of great hunger (starvation to the extent that the stomach is devoid of all food); and to loosen a bite of food (for fear of choking to death) by alcohol; and to utter words of unbelief; and if one is living in an environment where evil and corruption are the pervasive norm, and permissible things (Halal) are the exception and a rarity, then one can use whatever is available to fulfill his needs. Jalal al-Dinal-Suyuti, in his bookal-Durr al-Manthoor Fi al-Tafsir al- Ma'athoor,[30]narrates that: Abd Ibn Hameed, on the authority of al-Hassan, said: "al-Taqiyya is permissible until the Day of Judgment." The practice of taqiyya is not limited to any one sect within Islam. It is observed and referenced in Sunni texts of law, hadith collections, and Quranic exegesis. Although historically more extensively practiced and referenced by Shii Muslims, taqiyya is doctrinally available to Sunni Muslims as well. This challenges the negative notion that taqiyya is exclusively associated with one community or confined to a specific group.[31] In Sunni Islamic law, as in Islamic law in general, the concept of intention (niyya) holds great importance. Merely performing an act without the right intention is considered insufficient. Afatwaissued by Ibn Abi Juma highlights the significance of one's inner state and intention in determining their identity as a Muslim. According to this fatwa, iftaqiyyais practiced with the right intention, it is not considered sinful but rather a pious act. The fatwa emphasizes that God values the intention of believers over their outward actions, and taqiyya can be seen as a form of outward expression aligned with the correct intention.[31] WhenMamunbecamecaliph(813 AD), he tried to impose his religious views on the status of the Qur'an over all his subjects, in an ordeal called themihna, or "inquisition". His views were disputed, and many of those who refused to follow his views were imprisoned, tortured, or threatened with the sword.[32]Some Sunni scholars chose to affirm Mamun's view thatthe Qur'an was created, in spite of their beliefs,[13]though a notable exception to this was scholar and theologianAhmad ibn Hanbal, who chose to endure torture instead.[33] Following the end of theReconquistaof theIberian Peninsulain 1492, Muslims were persecuted by theCatholic Monarchsandforced to convert to Christianityor face expulsion. The principle of taqiyya became very important for Muslims during theInquisitionin 16th-century Spain, as it allowed them to convert to Christianity while remainingcrypto-Muslims, practicing Islam in secret. In 1504,Ubayd Allah al-Wahrani, aMalikimuftiinOran,issued a fatwāallowing Muslims to make extensive use of concealment to maintain their faith.[5][34][35]This is seen as an exceptional case, since Islamic law prohibits conversion except in cases of mortal danger, and even then requires recantation as quickly as possible,[36]and al-Wahrani's reasoning diverged from that of the majority of earlier MalikiFaqīhssuch asAl-Wansharisi.[35] Minority Shi'a communities, since the earliest days of Islam, were often forced to practice pious circumspection (taqiyya) as an instinctive method of self-preservation and protection, an obligatory practice in the lands which became known as the realm of pious circumspection (dār al-taqiyya). 
Therefore, the recurring theme is that during times of danger feigning disbelief is allowed.[37] Two primary aspects of circumspection became central for the Shi'a: not disclosing their association with theImamswhen this could put them in danger and protecting theesoteric teachings of the Imamsfrom those who are unprepared to receive them. While in most instances, minority Shi'a communities employedtaqiyyausing the façade ofSunnismin Sunni-dominated societies, the principle also allows for circumspection as other faiths. For instance, GuptiIsmaili Shi'acommunities in theIndian subcontinentcircumspect asHindusto avoid caste persecution. In many cases, the practice oftaqiyyabecame deeply ingrained into practitioners' psyche. If a believer wished, he/she could adopt this practice at moments of danger, or as a lifelong process.[38] Kohlberg has coined the expression "prudentialtaqiyya" to describe caution due to fear of external enemies. It can be further categorized into two distinct forms: concealment and dissimulation. For instance, historical accounts narrate how some Imams concealed their identities as a protective measure. In one story, the Imam Jafar al-Sadiq commended the behavior of a follower who chose to avoid direct interaction with the Imam, even though he recognized him on the street, rather than exposing him, and even cursed those who would call him by his name.[37] Kohlberg identifies the second type of prudentialtaqiyyaas dissimulation, characterized by using deceptive words or actions intended to mislead opponents. It is typically employed by individuals possessing secret information. It is not solely confined to Imami Shi'ism but has been observed among various Muslim individuals or groups with minority views. During times of danger, the recurring theme is thattaqiyyapermits individuals to utter words of disbelief as a means of self-preservation. Prudentialtaqiyyais considered essential for safeguarding the faith and may be lifted when the political climate no longer poses a threat. Therefore, one way to discern the motivation behind a specific type oftaqiyyais to determine whether it ceases once the danger has subsided.[37] Kohlberg coined the expression "non-prudentialtaqiyya" for when there is a need to conceal secret doctrines from the uninitiated. Non-prudentialtaqiyyais employed by believers when they possess secret knowledge and are obligated to conceal it from those who have not attained the same level of initiation. This hidden knowledge encompasses diverse aspects, including profound insights into specific Quranic verses, interpretations of the Imam's teachings, and specific religious obligations. 
The obligation to conceal arises when individuals acquire such exclusive knowledge emphasizing the importance of preserving its secrecy within the initiated community.[37] If coupled with mental reservation, religious dissimulation is considered lawful in Twelver Shi'ism whenever life or property is at serious risk.[39][40]In Twelver theology,taqiyyaalso refers to hiding or safeguarding the esoteric teachings of Shia imams,[41][42][43]a practice intended to "protect the truth from those not worthy of it."[44]This esoteric knowledge (of God), taught by imams to their (true) followers, is said to distinguish them from other Muslims.[45] Historically, the Twelver doctrine oftaqiyyawas developed byMuhammad al-Baqir(d.c.732), the fifth of thetwelve imams,[46][47][48]and later by his successor,Ja'far al-Sadiq(d.765).[49]At the time, this doctrine was likely intended for the survival of Shia imams and their followers, for they were being brutally molested and persecuted.[50][51][52]Indeed,taqiyyais particularly relevant to Twelver Shias, for until about the sixteenth century they lived mostly as a minority among an often-hostile Sunni majority.[53][39]Traditions attributed to Shia imams thus encourage their followers to hide their faith for their safety, some even characterizingtaqiyyaas a pillar of faith.[50][54][55]Theological and legal statements of Shia imams were also influenced bytaqiyya.[56][41][57]For instance, al-Baqir is not known to have publicly reviled the first two caliphs, namely, Abu Bakr and Umar,[58][59]most likely because the imam exercisedtaqiyya.[60]Indeed, al-Baqir's conviction that the Islamic prophet had explicitly designated Ali ibn Abi Talib as his successor implies that Abu Bakr and Umar were usurpers.[60]More generally, whenever contradictory statements are attributed to Shia imams, those that are aligned with Sunni positions are discarded, for Shia scholars argue that such statements must have been uttered undertaqiyya.[57] States People Centers Other For theIsmailisin the aftermath of theMongolonslaught of theAlamut statein 1256 CE, the need to practice taqiyya became necessary, not only for the protection of the community itself, which was now stateless, but also for safeguarding the line of theNizariIsmaili Imamateduring this period of unrest.[61]Accordingly, the ShiaImamJa'far al-Sadiqstated "Taqiyya is my religion and the religion of my ancestors",[62]a tradition recorded in various sources includingKitāb al-Maḥāsinof Aḥmad b. Muhammad al-Barqī and theDa'ā'im al-Islāmofal-Qāḍī al-Nu'mān.[63] Such periods in which the Imams are concealed are known assatr, however the term may also refer to times when the Imams were not physically hidden from view but rather when the community was required to practice precautionary dissimulation. Duringsatrthe Imam could only be accessed by his community and in extremely dangerous circumstances, would be accessible only to the highest-ranking members of the Ismaili hierarchy (ḥudūd), whose function it was to transmit the teachings of the Imam to the community. 
Shi'a Imam Ja'far al-Sadiq is reputed to have said, "Our teaching is the truth, the truth of the truth; it is the exoteric and the esoteric, and the esoteric of the esoteric; it is the secret and the secret of a secret, a protected secret, hidden by a secret."[38]The Fatimid Imam-Caliphal-Hakimexpresses the sentiment oftaqiyyawhen he confides to his followers that "if any religion is stronger than you, follow it, but keep me in your hearts."[38] According to Shia scholar Muhammad Husain Javari Sabinal, Shiism would not have spread at all if not for taqiyya, referring to instances where Shia have been ruthlessly persecuted by the Sunni political elite during theUmayyadandAbbasidempires.[64]Indeed, for the Ismailis, the persistence and prosperity of the community today owes largely to the careful safeguarding of the beliefs and teachings of the Imams during theIlkhanate, theSafawiddynasty, and other periods of persecution.[citation needed]The 16th century Ismaili author Khwāja Muḥammad Riḍā b. Sulṭān Ḥusayn, also known as Khayrkhvah-i Harati, referring to theAnjudanperiod, writes about the end of an era oftaqiyya. He explains that thus far "a veil was drawn over the visage of truth," but now the Imam "allowed the veil to be lifted". Since the Imam had allowed written correspondence with his followers, he had effectively ended the era oftaqiyya.[65] The Gupti community viewed the Aga Khan III as their spiritual leader and Imam, but concealed these beliefs to protect themselves. However, the Guptis used a unique form of taqiyya, they did not appear as Sunni, Sufi, or Ithna ashari, which were the more common identities to take on. Rather they identified as Hindus, and this became a significant aspect of who they were. The Guptis view theirtaqiyyaas a fulfillment and culmination of their outwardly professed faith, rather than contrary to it. The name 'Gupta' in Sanskrit, means secret or hidden, which perfectly embodies the concealment of their faith and true identity.[38] Alawitebeliefs have never been confirmed by their modern religious authorities.[66]Alawites tend to conceal their beliefs (taqiyya) due to historical persecution.[67]Some tenets of the faith are secret, known only to a select few;[68][69]therefore, they have been described as amysticalsect.[70]Alawites celebrateIslamic festivalsbut consider the most important one to beEid al-Ghadir. Because of theDruze's Ismaili Shia origin, they have also been associated with taqiyya. When the Druze were a minority being persecuted they took the appearance of another religion externally, usually the ruling religion in the area, and for the most part adhered to Muslim customs by this practice.[71] In the early 21st century, taqiyya has become the subject of debate. According to S. Jonathon O'Donnell, some theories posit "the idea that Muslims have a religious duty to deceive non-Muslims if it furthers the cause" of Islam. He argues the "claim rests on a misreading of the concept oftaqiyya, by which believers may conceal their faith if under threat of violence. 
This misreading is widely deployed in Islamophobic writings."[72]The term has been used by writers andcounter-jihadistssuch asPatrick Sookhdeo, who posit that Muslims use the doctrine as a key strategy in theIslamizationof Western countries by hiding their true violent intents.[73][74] In 2008Raymond Ibrahimpublished inJane's Islamic Affairs Analystan article titled "Islam's doctrines of deception".[75]Ibrahim presented his own translation[76]of part of LebaneseDruzescholarSami Makarem's monographAl Taqiyya Fi Al Islam("Dissimulation in Islam"). Ibrahim quoted: Taqiyya is of fundamental importance in Islam. Practically every Islamic sect agrees to it and practices it ... We can go so far as to say that the practice of taqiyya is mainstream in Islam, and that those few sects not practicing it diverge from the mainstream ... Taqiyya is very prevalent in Islamic politics, especially in the modern era.[75][76][77] Michael Ryan,[78]also inJane's, characterized Ibrahim's article as "well-researched, factual in places but ... ultimately misleading".[79][77]Ibrahim responded in 2009 with "Taqiyya Revisited: A Response to the Critics", on his blog and on theMiddle East Forumwebsite.[78][80]Ibrahim was again criticised for his view on Taqiyya in 2019, by Islamic scholarUsama Hasanin theJewish Chronicle.[81]Ibrahim also responded to Hasan in aFrontPage Magazinearticle titled "Taqiyya Sunset: Exposing the Darkness Shrouding Islamic Deceit." Stefan Wimmer argues that taqiyya is not a tool to deceive non-Muslims and spread Islam, but instead a defensive mechanism to save one's life when it is in great danger (giving the example of theReconquista).[82]Similar views are shown by Jakob Skovgaard-Petersen from theUniversity of Copenhagen.[83]
https://en.wikipedia.org/wiki/Taqiyya
In computer science, atrie(/ˈtraɪ/,/ˈtriː/ⓘ), also known as adigital treeorprefix tree,[1]is a specializedsearch treedata structure used to store and retrieve strings from a dictionary or set. Unlike abinary search tree, nodes in a trie do not store their associated key. Instead, each node'spositionwithin the trie determines its associated key, with the connections between nodes defined by individualcharactersrather than the entire key. Tries are particularly effective for tasks such as autocomplete, spell checking, and IP routing, offering advantages overhash tablesdue to their prefix-based organization and lack of hash collisions. Every child node shares a commonprefixwith its parent node, and the root node represents theempty string. While basic trie implementations can be memory-intensive, various optimization techniques such as compression and bitwise representations have been developed to improve their efficiency. A notable optimization is theradix tree, which provides more efficient prefix-based storage. While tries commonly store character strings, they can be adapted to work with any ordered sequence of elements, such aspermutationsof digits or shapes. A notable variant is thebitwise trie, which uses individualbitsfrom fixed-length binary data (such asintegersormemory addresses) as keys. The idea of a trie for representing a set of strings was first abstractly described byAxel Thuein 1912.[2][3]Tries were first described in a computer context by René de la Briandais in 1959.[4][3][5]: 336 The idea was independently described in 1960 byEdward Fredkin,[6]who coined the termtrie, pronouncing it/ˈtriː/(as "tree"), after the middle syllable ofretrieval.[7][8]However, other authors pronounce it/ˈtraɪ/(as "try"), in an attempt to distinguish it verbally from "tree".[7][8][3] Tries are a form of string-indexed look-up data structure, which is used to store a dictionary list of words that can be searched on in a manner that allows for efficient generation ofcompletion lists.[9][10]: 1A prefix trie is anordered treedata structure used in the representation of a set of strings over a finite alphabet set, which allows efficient storage of words with common prefixes.[1] Tries can be efficacious onstring-searching algorithmssuch aspredictive text,approximate string matching, andspell checkingin comparison to binary search trees.[11][8][12]: 358A trie can be seen as a tree-shapeddeterministic finite automaton.[13] Tries support various operations: insertion, deletion, and lookup of a string key. Tries are composed of nodes that contain links, which either point to other suffix child nodes ornull. As for every tree, each node but the root is pointed to by only one other node, called itsparent. Each node contains as many links as the number of characters in the applicablealphabet(although tries tend to have a substantial number of null links). In some cases, the alphabet used is simply that of thecharacter encoding—resulting in, for example, a size of 256 in the case of (unsigned)ASCII.[14]: 732 The null links within the children of a node emphasize the following characteristics:[14]: 734[5]: 336 A basicstructure typeof nodes in the trie is as follows;Node{\displaystyle {\text{Node}}}may contain an optionalValue{\displaystyle {\text{Value}}}, which is associated with each key stored in the last character of string, or terminal node. 
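A minimal Python sketch of such a node, together with the search, insertion, and deletion operations discussed below (a dictionary of child links stands in for the fixed-size per-character array described above; all names are illustrative, not a reference implementation):

class TrieNode:
    def __init__(self):
        self.children = {}     # one link per character actually used (null links are simply absent)
        self.terminal = False  # True if a stored key ends at this node
        self.value = None      # optional value associated with that key

def search(root, key):
    """Follow the characters of key from the root; a missing link means a search miss."""
    node = root
    for ch in key:
        node = node.children.get(ch)
        if node is None:
            return None
    return node.value if node.terminal else None

def insert(root, key, value):
    """Create nodes along the path for key as needed, then mark the last node terminal."""
    node = root
    for ch in key:
        node = node.children.setdefault(ch, TrieNode())
    node.terminal = True
    node.value = value          # an existing value for the same key is overwritten

def delete(node, key, depth=0):
    """Unmark the terminal node for key and prune nodes left childless and non-terminal."""
    if depth == len(key):
        node.terminal, node.value = False, None
        return
    child = node.children.get(key[depth])
    if child is None:
        return                  # key is not stored; leave the trie unchanged
    delete(child, key, depth + 1)
    if not child.children and not child.terminal:
        del node.children[key[depth]]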
Searching for a value in a trie is guided by the characters in the search string key, as each node in the trie contains a corresponding link to each possible character in the given string. Thus, following the string within the trie yields the associated value for the given string key. A null link during the search indicates the inexistence of the key.[14]: 732-733 The following pseudocode implements the search procedure for a given stringkeyin a rooted triex.[15]: 135 In the above pseudocode,xandkeycorrespond to the pointer of trie's root node and the string key respectively. The search operation, in a standard trie, takesO(dm){\displaystyle O({\text{dm}})}time, wherem{\displaystyle {\text{m}}}is the size of the string parameterkey{\displaystyle {\text{key}}}, andd{\displaystyle {\text{d}}}corresponds to thealphabet size.[16]: 754Binary search trees, on the other hand, takeO(mlog⁡n){\displaystyle O(m\log n)}in the worst case, since the search depends on the height of the tree (log⁡n{\displaystyle \log n}) of the BST (in case ofbalanced trees), wheren{\displaystyle {\text{n}}}andm{\displaystyle {\text{m}}}being number of keys and the length of the keys.[12]: 358 The trie occupies less space in comparison with a BST in the case of a large number of short strings, since nodes share common initial string subsequences and store the keys implicitly.[12]: 358The terminal node of the tree contains a non-null value, and it is a searchhitif the associated value is found in the trie, and searchmissif it is not.[14]: 733 Insertion into trie is guided by using thecharacter setsas indexes to the children array until the last character of the string key is reached.[14]: 733-734Each node in the trie corresponds to one call of theradix sortingroutine, as the trie structure reflects the execution of pattern of the top-down radix sort.[15]: 135 If a null link is encountered prior to reaching the last character of the string key, a new node is created (line 3).[14]: 745The value of the terminal node is assigned to the input value; therefore, if the former was non-null at the time of insertion, it is substituted with the new value. Deletion of akey–value pairfrom a trie involves finding the terminal node with the corresponding string key, marking the terminal indicator and value tofalseand null correspondingly.[14]: 740 The following is arecursiveprocedure for removing a stringkeyfrom rooted trie (x). The procedure begins by examining thekey; null denotes the arrival of a terminal node or end of a string key. If the node is terminal it has no children, it is removed from the trie (line 14). However, an end of string key without the node being terminal indicates that the key does not exist, thus the procedure does not modify the trie. The recursion proceeds by incrementingkey's index. A trie can be used to replace ahash table, over which it has the following advantages:[12]: 358 However, tries are less efficient than a hash table when the data is directly accessed on asecondary storage devicesuch as a hard disk drive that has higherrandom accesstime than themain memory.[6]Tries are also disadvantageous when the key value cannot be easily represented as string, such asfloating point numberswhere multiple representations are possible (e.g. 
1 is equivalent to 1.0, +1.0, 1.00, etc.),[12]: 359however it can be unambiguously represented as abinary numberinIEEE 754, in comparison totwo's complementformat.[17] Tries can be represented in several ways, corresponding to different trade-offs between memory use and speed of the operations.[5]: 341Using a vector of pointers for representing a trie consumes enormous space; however, memory space can be reduced at the expense of running time if asingly linked listis used for each node vector, as most entries of the vector containsnil{\displaystyle {\text{nil}}}.[3]: 495 Techniques such asalphabet reductionmay reduce the large space requirements by reinterpreting the original string as a longer string over a smaller alphabet i.e. a string ofnbytes can alternatively be regarded as a string of2nfour-bit unitsand stored in a trie with 16 instead of 256 pointers per node. Although this can reduce memory usage by up to a factor of eight, lookups need to visit twice as many nodes in the worst case.[5]: 347–352Other techniques include storing a vector of 256 ASCII pointers as a bitmap of 256 bits representing ASCII alphabet, which reduces the size of individual nodes dramatically.[18] Bitwise tries are used to address the enormous space requirement for the trie nodes in a naive simple pointer vector implementations. Each character in the string key set is represented via individual bits, which are used to traverse the trie over a string key. The implementations for these types of trie usevectorizedCPU instructions tofind the first set bitin a fixed-length key input (e.g.GCC's__builtin_clz()intrinsic function). Accordingly, the set bit is used to index the first item, or child node, in the 32- or 64-entry based bitwise tree. Search then proceeds by testing each subsequent bit in the key.[19] This procedure is alsocache-localandhighly parallelizabledue toregisterindependency, and thus performant onout-of-order executionCPUs.[19] Radix tree, also known as acompressed trie, is a space-optimized variant of a trie in which any node with only one child gets merged with its parent; elimination of branches of the nodes with a single child results in better metrics in both space and time.[20][21]: 452This works best when the trie remains static and set of keys stored are very sparse within their representation space.[22]: 3–16 One more approach is to "pack" the trie, in which a space-efficient implementation of a sparse packed trie applied to automatichyphenation, in which the descendants of each node may be interleaved in memory.[8] Patricia trees are a particular implementation of the compressed binary trie that uses thebinary encodingof the string keys in its representation.[23][15]: 140Every node in a Patricia tree contains an index, known as a "skip number", that stores the node's branching index to avoid empty subtrees during traversal.[15]: 140-141A naive implementation of a trie consumes immense storage due to larger number of leaf-nodes caused by sparse distribution of keys; Patricia trees can be efficient for such cases.[15]: 142[24]: 3 A representation of a Patricia tree is shown to the right. 
Each index value adjacent to the nodes represents the "skip number"—the index of the bit with which branching is to be decided.[24]: 3The skip number 1 at node 0 corresponds to the position 1 in the binary encoded ASCII where the leftmost bit differed in the key setX.[24]: 3-4The skip number is crucial for search, insertion, and deletion of nodes in the Patricia tree, and abit maskingoperation is performed during every iteration.[15]: 143 Trie data structures are commonly used inpredictive textorautocompletedictionaries, andapproximate matching algorithms.[11]Tries enable faster searches, occupy less space, especially when the set contains large number of short strings, thus used inspell checking, hyphenation applications andlongest prefix matchalgorithms.[8][12]: 358However, if storing dictionary words is all that is required (i.e. there is no need to store metadata associated with each word), a minimal deterministic acyclic finite state automaton (DAFSA) or radix tree would use less storage space than a trie. This is because DAFSAs and radix trees can compress identical branches from the trie which correspond to the same suffixes (or parts) of different words being stored. String dictionaries are also utilized innatural language processing, such as findinglexiconof atext corpus.[25]: 73 Lexicographic sortingof a set of string keys can be implemented by building a trie for the given keys and traversing the tree inpre-orderfashion;[26]this is also a form ofradix sort.[27]Tries are also fundamental data structures forburstsort, which is notable for being the fastest string sorting algorithm as of 2007,[28]accomplished by its efficient use of CPUcache.[29] A special kind of trie, called asuffix tree, can be used to index allsuffixesin a text to carry out fast full-text searches.[30] A specialized kind of trie called a compressed trie, is used inweb search enginesfor storing theindexes- a collection of all searchable words.[31]Each terminal node is associated with a list ofURLs—called occurrence list—to pages that match the keyword. The trie is stored in the main memory, whereas the occurrence is kept in an external storage, frequently in largeclusters, or the in-memory index points to documents stored in an external location.[32] Tries are used inBioinformatics, notably insequence alignmentsoftware applications such asBLAST, which indexes all the different substring of lengthk(calledk-mers) of a text by storing the positions of their occurrences in a compressed trie sequence databases.[25]: 75 Compressed variants of tries, such as databases for managingForwarding Information Base(FIB), are used in storingIP address prefixeswithinroutersandbridgesfor prefix-based lookup to resolvemask-basedoperations inIP routing.[25]: 75
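The pre-order traversal mentioned above can double as both a lexicographic sort and an autocomplete query; a small sketch, assuming the dictionary-based node from the earlier trie example (illustrative names):

def keys_with_prefix(root, prefix=""):
    """Yield stored keys that begin with prefix, in lexicographic order."""
    node = root
    for ch in prefix:                  # descend to the subtree holding the prefix
        node = node.children.get(ch)
        if node is None:
            return                     # no stored key starts with this prefix
    def preorder(n, path):
        if n.terminal:
            yield path
        for ch in sorted(n.children):  # children in character order gives sorted output
            yield from preorder(n.children[ch], path + ch)
    yield from preorder(node, prefix)

# list(keys_with_prefix(root)) lists the whole dictionary in sorted order;
# list(keys_with_prefix(root, "sh")) yields completion candidates for "sh".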
https://en.wikipedia.org/wiki/Prefix_tree
VOMS is an acronym used for Virtual Organization Membership Service in grid computing. It is structured as a simple account database with fixed formats for the information exchange and features single login, expiration time, backward compatibility, and multiple virtual organizations. The database is manipulated by authorization data that defines specific capabilities and roles for users. Administrative tools can be used by administrators to assign roles and capability information in the database. A command-line tool allows users to generate a local proxy credential based on the contents of the VOMS database. This credential includes the basic authentication information that standard Grid proxy credentials contain, but it also includes role and capability information from the VOMS server. VOMS-aware applications can use the VOMS data to make authentication decisions regarding user requests. VOMS was originally developed by the European DataGrid and Enabling Grids for E-sciencE projects and is now maintained by the Italian National Institute for Nuclear Physics (INFN). VOMS is also an acronym for VOucher Management System, used for providing recharge management services for prepaid systems of telecom service providers. Typically, external Voucher Management Systems are used with Intelligent Network-based prepaid systems.
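As a toy illustration of that last point (not the actual VOMS API or credential format; the field names and structures are assumptions), a VOMS-aware service might compare the role and capability attributes carried in a user's proxy credential against its own policy:

# Hypothetical attribute set that a VOMS-extended proxy credential might carry.
credential = {
    "subject": "/DC=org/DC=example/CN=Alice",
    "vo": "atlas",
    "roles": {"production"},
    "capabilities": {"submit-jobs"},
}

# Hypothetical local policy of a VOMS-aware service.
policy = {"vo": "atlas", "required_role": "production"}

def authorize(cred, pol):
    """Grant the request only if the credential's VO and roles satisfy the policy."""
    return cred["vo"] == pol["vo"] and pol["required_role"] in cred["roles"]

authorize(credential, policy)   # -> True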
https://en.wikipedia.org/wiki/Voms
The table of chords, created by the Greek astronomer, geometer, and geographer Ptolemy in Egypt during the 2nd century AD, is a trigonometric table in Book I, chapter 11 of Ptolemy's Almagest,[1] a treatise on mathematical astronomy. It is essentially equivalent to a table of values of the sine function. It was the earliest trigonometric table extensive enough for many practical purposes, including those of astronomy (an earlier table of chords by Hipparchus gave chords only for arcs that were multiples of 7+1/2° = π/24 radians).[2] Since the 8th and 9th centuries, the sine and other trigonometric functions have been used in Islamic mathematics and astronomy, reforming the production of sine tables.[3] Khwarizmi and Habash al-Hasib later produced a set of trigonometric tables. A chord of a circle is a line segment whose endpoints are on the circle. Ptolemy used a circle whose diameter is 120 parts. He tabulated the length of a chord whose endpoints are separated by an arc of n degrees, for n ranging from 1/2 to 180 by increments of 1/2. In modern notation, the length of the chord corresponding to an arc of θ degrees is chord(θ°) = 120 sin(θ°/2). As θ goes from 0 to 180, the chord of a θ° arc goes from 0 to 120. For tiny arcs, the chord is to the arc angle in degrees as π is to 3, or more precisely, the ratio can be made as close as desired to π/3 ≈ 1.04719755 by making θ small enough. Thus, for the arc of 1/2°, the chord length is slightly more than the arc angle in degrees. As the arc increases, the ratio of the chord to the arc decreases. When the arc reaches 60°, the chord length is exactly equal to the number of degrees in the arc, i.e. chord 60° = 60. For arcs of more than 60°, the chord is less than the arc, until an arc of 180° is reached, when the chord is only 120. The fractional parts of chord lengths were expressed in sexagesimal (base 60) numerals. For example, where the length of a chord subtended by a 112° arc is reported to be 99,29,5, it has a length of 99 + 29/60 + 5/60² ≈ 99.4847, rounded to the nearest 1/60².[1] After the columns for the arc and the chord, a third column is labeled "sixtieths". For an arc of θ°, the entry in the "sixtieths" column is (chord((θ + 1/2)°) − chord(θ°)) / 30. This is the average number of sixtieths of a unit that must be added to chord(θ°) each time the angle increases by one minute of arc, between the entry for θ° and that for (θ + 1/2)°. Thus, it is used for linear interpolation. Glowatzki and Göttsche showed that Ptolemy must have calculated chords to five sexagesimal places in order to achieve the degree of accuracy found in the "sixtieths" column.[4][5] Chapter 10 of Book I of the Almagest presents geometric theorems used for computing chords. Ptolemy used geometric reasoning based on Proposition 10 of Book XIII of Euclid's Elements to find the chords of 72° and 36°. That Proposition states that if an equilateral pentagon is inscribed in a circle, then the area of the square on the side of the pentagon equals the sum of the areas of the squares on the sides of the hexagon and the decagon inscribed in the same circle. He used Ptolemy's theorem on quadrilaterals inscribed in a circle to derive formulas for the chord of a half-arc, the chord of the sum of two arcs, and the chord of the difference of two arcs. The theorem states that for a quadrilateral inscribed in a circle, the product of the lengths of the diagonals equals the sum of the products of the two pairs of lengths of opposite sides. The derivations of trigonometric identities rely on a cyclic quadrilateral in which one side is a diameter of the circle.
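In modern terms the whole table can be regenerated from the sine function; the following short Python sketch (an illustration, not Ptolemy's own procedure) computes a chord, the corresponding "sixtieths" entry, and the linear interpolation that entry supports:

import math

def chord(theta_deg):
    """Length of the chord of a theta-degree arc in a circle of diameter 120."""
    return 120 * math.sin(math.radians(theta_deg) / 2)

def sixtieths(theta_deg):
    """Average increase of the chord per minute of arc over the next half degree."""
    return (chord(theta_deg + 0.5) - chord(theta_deg)) / 30

# chord between tabulated arcs via linear interpolation, e.g. 112 degrees 10 minutes:
approx = chord(112) + 10 * sixtieths(112)
exact = chord(112 + 10/60)        # approx and exact agree to about four decimal places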
To find the chords of arcs of 1° and⁠1/2⁠° he used approximations based onAristarchus's inequality. The inequality states that for arcsαandβ, if 0 <β<α< 90°, then Ptolemy showed that for arcs of 1° and⁠1/2⁠°, the approximations correctly give the first two sexagesimal places after the integer part. Gerald J. Toomerin his translation of the Almagest gives seven entries where some manuscripts have scribal errors, changing one "digit" (one letter, see below).Glenn Elerthas made a comparison between Ptolemy's values and the true values (120 times the sine of half the angle) and has found that theroot mean squareerror is 0.000136. But much of this is simply due to rounding off to the nearest 1/3600, since this equals 0.0002777... There are nevertheless many entries where the last "digit" is off by 1 (too high or too low) from the best rounded value. Ptolemy's values are often too high by 1 in the last place, and more so towards the higher angles. The largest errors are about 0.0004, which still corresponds to an error of only 1 in the lastsexagesimaldigit.[6] Lengths of arcs of the circle, in degrees, and the integer parts of chord lengths, were expressed in abase 10numeral systemthat used 21 of the letters of theGreek alphabetwith the meanings given in the following table, and a symbol, "∠′", that means⁠1/2⁠and a raised circle "○" that fills a blank space (effectively representing zero). Three of the letters, labeled "archaic" in the table below, had not been in use in the Greek language for some centuries before theAlmagestwas written, but were still in use as numerals andmusical notes. Thus, for example, an arc of⁠143+1/2⁠° is expressed asρμγ∠′. (As the table only reaches 180°, the Greek numerals for 200 and above are not used.) The fractional parts of chord lengths required great accuracy, and were given insexagesimalnotation in two columns in the table: The first column gives an integer multiple of⁠1/60⁠, in the range 0–59, the second an integer multiple of⁠1/602⁠=⁠1/3600⁠, also in the range 0–59. Thus in Heiberg'sedition of theAlmagestwith the table of chords on pages 48–63, the beginning of the table, corresponding to arcs from⁠1/2⁠°to⁠7+1/2⁠°,looks like this: Later in the table, one can see the base-10 nature of the numerals expressing the integer parts of the arc and the chord length. Thus an arc of 85° is written asπε(πfor 80 andεfor 5) and not broken down into 60 + 25. The corresponding chord length is 81 plus a fractional part. The integer part begins withπα, likewise not broken into 60 + 21. But the fractional part,460+15602{\textstyle {\tfrac {4}{60}}+{\tfrac {15}{60^{2}}}}, is written asδ, for 4, in the⁠1/60⁠column, followed byιε, for 15, in the⁠1/602⁠column. The table has 45 lines on each of eight pages, for a total of 360 lines.
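The comparison with the true values described above can be reproduced by rounding the modern chord to the table's resolution of 1/60² and reading it off sexagesimally; a small illustrative sketch, using the 112° entry quoted earlier as the example:

import math

def to_sexagesimal(x):
    """Split x into an integer part and two base-60 fractional places (nearest 1/3600)."""
    total = round(x * 3600)
    whole, rest = divmod(total, 3600)
    first, second = divmod(rest, 60)
    return whole, first, second

modern = 120 * math.sin(math.radians(112) / 2)   # modern value of chord(112 degrees)
to_sexagesimal(modern)   # -> (99, 29, 4), within one unit in the last place of the
                         #    tabulated 99,29,5 discussed above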
https://en.wikipedia.org/wiki/Ptolemy%27s_table_of_chords
In statistics, the mean signed difference (MSD),[1] also known as mean signed deviation, mean signed error, or mean bias error,[2] is a sample statistic that summarizes how well a set of estimates θ̂_i match the quantities θ_i that they are supposed to estimate. It is one of a number of statistics that can be used to assess an estimation procedure, and it would often be used in conjunction with a sample version of the mean square error. For example, suppose a linear regression model has been estimated over a sample of data, and is then used to extrapolate predictions of the dependent variable out of sample after the out-of-sample data points have become available. Then θ_i would be the i-th out-of-sample value of the dependent variable, and θ̂_i would be its predicted value. The mean signed deviation is the average value of θ̂_i − θ_i. The mean signed difference is derived from a set of n pairs (θ̂_i, θ_i), where θ̂_i is an estimate of the parameter θ in a case where it is known that θ = θ_i. In many applications, all the quantities θ_i will share a common value. When applied to forecasting in a time series analysis context, a forecasting procedure might be evaluated using the mean signed difference, with θ̂_i being the predicted value of a series at a given lead time and θ_i being the value of the series eventually observed for that time-point. The mean signed difference is defined to be MSD(θ̂) = (1/n) ∑_{i=1}^{n} (θ̂_i − θ_i). The mean signed difference is often useful when the estimates θ̂_i are biased from the true values θ_i in a certain direction. If the estimator that produces the θ̂_i values is unbiased, then MSD(θ̂) = 0. However, if the estimates θ̂_i are produced by a biased estimator, then the mean signed difference is a useful tool to understand the direction of the estimator's bias.
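A minimal Python sketch of the definition (function name and sample values are illustrative):

def mean_signed_difference(estimates, truths):
    """MSD = average of (estimate - true value); the sign reveals the direction of bias."""
    n = len(truths)
    return sum(e - t for e, t in zip(estimates, truths)) / n

# forecasts that are systematically 2 units too high give MSD = +2
mean_signed_difference([12, 15, 9], [10, 13, 7])   # -> 2.0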
https://en.wikipedia.org/wiki/Mean_signed_deviation
In mathematics, extrapolation is a type of estimation, beyond the original observation range, of the value of a variable on the basis of its relationship with another variable. It is similar to interpolation, which produces estimates between known observations, but extrapolation is subject to greater uncertainty and a higher risk of producing meaningless results. Extrapolation may also mean extension of a method, assuming similar methods will be applicable. Extrapolation may also apply to human experience to project, extend, or expand known experience into an area not known or previously experienced. By doing so, one makes an assumption of the unknown[1] (for example, a driver may extrapolate road conditions beyond what is currently visible, and these extrapolations may be correct or incorrect). The extrapolation method can be applied in the interior reconstruction problem. A sound choice of which extrapolation method to apply relies on a priori knowledge of the process that created the existing data points. Some experts have proposed the use of causal forces in the evaluation of extrapolation methods.[2] Crucial questions are, for example, whether the data can be assumed to be continuous, smooth, possibly periodic, etc. Linear extrapolation means creating a tangent line at the end of the known data and extending it beyond that limit. Linear extrapolation will only provide good results when used to extend the graph of an approximately linear function or not too far beyond the known data. If the two data points nearest the point x* to be extrapolated are (x_{k−1}, y_{k−1}) and (x_k, y_k), linear extrapolation gives the function y(x*) = y_{k−1} + ((x* − x_{k−1}) / (x_k − x_{k−1})) (y_k − y_{k−1}) (which is identical to linear interpolation if x_{k−1} < x* < x_k). It is possible to include more than two points, averaging the slope of the linear interpolant by regression-like techniques on the data points chosen to be included. This is similar to linear prediction. A polynomial curve can be created through the entire known data or just near the end (two points for linear extrapolation, three points for quadratic extrapolation, etc.). The resulting curve can then be extended beyond the end of the known data. Polynomial extrapolation is typically done by means of Lagrange interpolation or using Newton's method of finite differences to create a Newton series that fits the data. The resulting polynomial may be used to extrapolate the data. High-order polynomial extrapolation must be used with due care. For the example data set and problem in the figure above, anything above order 1 (linear extrapolation) will possibly yield unusable values; an error estimate of the extrapolated value will grow with the degree of the polynomial extrapolation. This is related to Runge's phenomenon. A conic section can be created using five points near the end of the known data. If the conic section created is an ellipse or circle, when extrapolated it will loop back and rejoin itself. An extrapolated parabola or hyperbola will not rejoin itself, but may curve back relative to the X-axis. This type of extrapolation could be done with a conic sections template (on paper) or with a computer. French curve extrapolation is a method suitable for any distribution that has a tendency to be exponential, but with accelerating or decelerating factors.[3] This method has been used successfully in providing forecast projections of the growth of HIV/AIDS in the UK since 1987 and variant CJD in the UK for a number of years.
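The basic linear case above can be sketched in a few lines of Python (the function name is illustrative):

def linear_extrapolate(x_prev, y_prev, x_k, y_k, x_star):
    """Extend the line through the two last known points out to x_star."""
    slope = (y_k - y_prev) / (x_k - x_prev)
    return y_prev + (x_star - x_prev) * slope

# known points (1, 3) and (2, 5); extrapolated value at x = 4:
linear_extrapolate(1, 3, 2, 5, 4)   # -> 9.0

Higher-degree polynomial extrapolation follows the same pattern with a polynomial fitted to several trailing points (for instance with numpy.polyfit), subject to the Runge-type instability noted above.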
Another study has shown that extrapolation can produce the same quality of forecasting results as more complex forecasting strategies.[4] Can be created with 3 points of a sequence and the "moment" or "index", this type of extrapolation have 100% accuracy in predictions in a big percentage of known series database (OEIS).[5] Example of extrapolation with error prediction : Typically, the quality of a particular method of extrapolation is limited by the assumptions about the function made by the method. If the method assumes the data are smooth, then a non-smooth functionwill be poorly extrapolated. In terms of complex time series, some experts have discovered that extrapolation is more accurate when performed through the decomposition of causal forces.[6] Even for proper assumptions about the function, the extrapolation can diverge severely from the function. The classic example is truncatedpower seriesrepresentations of sin(x) and relatedtrigonometric functions. For instance, taking only data from near thex= 0, we may estimate that the function behaves as sin(x) ~x. In the neighborhood ofx= 0, this is an excellent estimate. Away fromx= 0 however, the extrapolation moves arbitrarily away from thex-axis while sin(x) remains in theinterval[−1,1]. I.e., the error increases without bound. Taking more terms in the power series of sin(x) aroundx= 0 will produce better agreement over a larger interval nearx= 0, but will produce extrapolations that eventually diverge away from thex-axis even faster than the linear approximation. This divergence is a specific property of extrapolation methods and is only circumvented when the functional forms assumed by the extrapolation method (inadvertently or intentionally due to additional information) accurately represent the nature of the function being extrapolated. For particular problems, this additional information may be available, but in the general case, it is impossible to satisfy all possible function behaviors with a workably small set of potential behavior. Incomplex analysis, a problem of extrapolation may be converted into aninterpolationproblem by the change of variablez^=1/z{\displaystyle {\hat {z}}=1/z}. This transform exchanges the part of thecomplex planeinside theunit circlewith the part of the complex plane outside of the unit circle. In particular, thecompactificationpoint at infinityis mapped to the origin and vice versa. Care must be taken with this transform however, since the original function may have had "features", for examplepolesand othersingularities, at infinity that were not evident from the sampled data. Another problem of extrapolation is loosely related to the problem ofanalytic continuation, where (typically) apower seriesrepresentation of afunctionis expanded at one of its points ofconvergenceto produce apower serieswith a largerradius of convergence. In effect, a set of data from a small region is used to extrapolate a function onto a larger region. Again,analytic continuationcan be thwarted byfunctionfeatures that were not evident from the initial data. Also, one may usesequence transformationslikePadé approximantsandLevin-type sequence transformationsas extrapolation methods that lead to asummationofpower seriesthat are divergent outside the originalradius of convergence. In this case, one often obtainsrational approximants. Extrapolation arguments are informal and unquantified arguments which assert that something is probably true beyond the range of values for which it is known to be true. 
For example, we believe in the reality of what we see through magnifying glasses because it agrees with what we see with the naked eye but extends beyond it; we believe in what we see through light microscopes because it agrees with what we see through magnifying glasses but extends beyond it; and similarly for electron microscopes. Such arguments are widely used in biology in extrapolating from animal studies to humans and from pilot studies to a broader population.[7] Likeslippery slopearguments, extrapolation arguments may be strong or weak depending on such factors as how far the extrapolation goes beyond the known range.[8]
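Returning to the sin(x) example above, the behaviour of truncated power series outside the sampled region can be checked numerically; a short illustrative sketch:

import math

def sin_taylor(x, terms):
    """Truncated power series of sin about 0: x - x^3/3! + x^5/5! - ..."""
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1) for k in range(terms))

for x in (0.5, 2.0, 6.0, 10.0):
    print(x, math.sin(x), sin_taylor(x, 1), sin_taylor(x, 3), sin_taylor(x, 6))
# near 0 every truncation is accurate; far from 0 each truncation eventually leaves
# the interval [-1, 1] and blows up, while sin(x) itself stays bounded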
https://en.wikipedia.org/wiki/Extrapolation
Ineconomics,vendor lock-in, also known asproprietary lock-inorcustomer lock-in, makes a customer dependent on avendorforproducts, unable to use another vendor without substantialswitching costs. The use ofopen standardsand alternative options makes systems tolerant of change, so that decisions can be postponed until more information is available or unforeseen events are addressed. Vendor lock-in does the opposite: it makes it difficult to move from one solution to another. Lock-in costs that createbarriers to market entrymay result inantitrustaction against amonopoly. This class of lock-in is potentially technologically hard to overcome if the monopoly is held up by barriers to market that are nontrivial to circumvent, such as patents, secrecy, cryptography or other technical hindrances. This class of lock-in is potentially inescapable to rational individuals not otherwise motivated, by creating aprisoner's dilemma—if the cost to resist is greater than the cost of joining, then the locally optimal choice is to join—a barrier that takes cooperation to overcome. The distributive property (cost to resist the locally dominant choice) alone is not anetwork effect, for lack of anypositive feedback; however, the addition ofbistabilityper individual, such as by a switching cost, qualifies as a network effect, by distributing this instability to the collective as a whole. As defined byThe Independent, this is a non-monopoly (mere technology), collective (on a society level) kind of lock-in:[1] Technological lock-in is the idea that the more a society adopts a certain technology, the more unlikely users are to switch. Examples: Technology lock-in, as defined, is strictly of the collective kind. However, the personal variant is also a possiblepermutationof the variations shown in the table, but with no monopoly and no collectivity, it would be expected to be the weakest lock-in. Equivalent personal examples: There exist lock-in situations that are both monopolistic and collective. Having the worst of two worlds, these can be very hard to escape — in many examples, the cost to resist incurs some level of isolation from the (dominating technology in) society, which can be socially costly, yet direct competition with the dominant vendor is hindered by compatibility. As one blogger expressed:[3] If I stopped using Skype, I'd lose contact with many people, because it's impossible to make them all change to[other]software. WhileMP3is patent-free as of 2017, in 2001 it was both patented and entrenched, as noted byRichard Stallmanin that year (in justifying a lax license forOgg Vorbis):[4] there is […] the danger that people will settle on MP3 format even though it is patented, and we won't be *allowed* to write free encoders for the most popular format. […] Ordinarily, if someone decides not to use a copylefted program because the license doesn't please him, that's his loss not ours. But if he rejects the Ogg/Vorbis code because of the license, and uses MP3 instead, then the problem rebounds on us—because his continued use of MP3 may help MP3 to become and stay entrenched. More examples: TheEuropean Commission, in its March 24, 2004 decision on Microsoft's business practices,[5]quotes, in paragraph 463, Microsoft general manager forC++development Aaron Contorer as stating in a February 21, 1997 internal Microsoft memo drafted forBill Gates: "TheWindows APIis so broad, so deep, and so functional that mostISVs[independent software vendors] would be crazy not to use it. 
And it is so deeply embedded in the source code of many Windows apps that there is a huge switching cost to using a different operating system instead. It is this switching cost that has given customers the patience to stick with Windows through all our mistakes, our buggy drivers, our high TCO [total cost of ownership], our lack of a sexy vision at times, and many other difficulties. […] Customers constantly evaluate other desktop platforms, [but] it would be so much work to move over that they hope we just improve Windows rather than force them to move. In short, without this exclusive franchise called the Windows API, we would have been dead a long time ago. The Windows franchise is fueled by application development which is focused on our core APIs." Microsoft's application software also exhibits lock-in through the use of proprietary file formats. Microsoft Outlook uses a proprietary, publicly undocumented datastore format. Present versions of Microsoft Word have introduced a new format, MS-OOXML. This may make it easier for competitors to write documents compatible with Microsoft Office in the future by reducing lock-in.[citation needed] Microsoft released full descriptions of the file formats for earlier versions of Word, Excel and PowerPoint in February 2008.[6] Prior to March 2009, digital music files with digital rights management (DRM) were available for purchase from the iTunes Store, encoded in a proprietary derivative of the AAC format that used Apple's FairPlay DRM system. These files are compatible only with Apple's iTunes media player software on Macs and Windows, their iPod portable digital music players, iPhone smartphones, iPad tablet computers, and the Motorola ROKR E1 and SLVR mobile phones. As a result, that music was locked into this ecosystem and available for portable use only through the purchase of one of the above devices,[7] or by burning to CD and optionally re-ripping to a DRM-free format such as MP3 or WAV. In January 2005, an iPod purchaser named Thomas Slattery filed a suit against Apple for the "unlawful bundling" of their iTunes Music Store and iPod device. He stated in his brief: "Apple has turned an open and interactive standard into an artifice that prevents consumers from using the portable hard drive digital music player of their choice." At the time, Apple was stated to have an 80% market share of digital music sales and a 90% share of sales of new music players, which he claimed allowed Apple to horizontally leverage its dominant positions in both markets to lock consumers into its complementary offerings.[8] In September 2005, U.S. District Judge James Ware approved Slattery v. Apple Computer Inc. to proceed with monopoly charges against Apple in violation of the Sherman Antitrust Act.[9] On June 7, 2006, the Norwegian Consumer Council stated that Apple's iTunes Music Store violates Norwegian law. The contract conditions were vague and "clearly unbalanced to disfavor the customer".[10] The retroactive changes to the DRM conditions and the incompatibility with other music players are the major points of concern. In an earlier letter to Apple, consumer ombudsman Bjørn Erik Thon complained that iTunes' DRM mechanism was a lock-in to Apple's music players, and argued that this was a conflict with consumer rights that he doubted would be defendable by Norwegian copyright law.[11] As of 29 May 2007, tracks on the EMI label became available in a DRM-free format called iTunes Plus.
These files are unprotected and are encoded in the AAC format at 256 kilobits per second, twice the bitrate of standard tracks bought through the service. iTunes accounts can be set to display either standard or iTunes Plus formats for tracks where both formats exist.[12] These files can be used with any player that supports the AAC file format and are not locked to Apple hardware. They can be converted to MP3 format if desired. As of January 6, 2009, all four big music studios (Warner Bros., Sony BMG, Universal, and EMI) have signed up to remove the DRM from their tracks, at no extra cost. However, Apple charges consumers to have restrictions removed from previously purchased DRM music.[13] Although Google has stated its position in favor of interoperability,[14] the company has taken steps away from open protocols, replacing the open-standard Google Talk protocol with the proprietary Google Hangouts protocol.[15][16] Also, Google's Data Liberation Front has been inactive on Twitter since 2013[17] and its official website, www.dataliberation.org, now redirects to a page on Google's FAQs, leading users to believe the project has been closed.[18][19] Google's mobile operating system Android is open source; however, the operating system that comes with the phones most people actually purchase in a store is more often than not shipped with many of Google's proprietary applications that push users toward Google services only. Because cloud computing is still relatively new, standards are still being developed.[20] Many cloud platforms and services are proprietary, meaning that they are built on the specific standards, tools and protocols developed by a particular vendor for its particular cloud offering.[20] This can make migrating off a proprietary cloud platform prohibitively complicated and expensive.[20] Three types of vendor lock-in can occur with cloud computing:[21] Heterogeneous cloud computing is described as a type of cloud environment that prevents vendor lock-in, and aligns with enterprise data centers that are operating hybrid cloud models.[22] The absence of vendor lock-in lets cloud administrators select their choice of hypervisors for specific tasks, or deploy virtualized infrastructures to other enterprises without the need to consider the flavor of hypervisor in the other enterprise.[23] A heterogeneous cloud is considered one that includes on-premises private clouds, public clouds and software-as-a-service clouds. Heterogeneous clouds can work with environments that are not virtualized, such as traditional data centers.[24] Heterogeneous clouds also allow for the use of piece parts, such as hypervisors, servers, and storage, from multiple vendors.[25] Cloud piece parts, such as cloud storage systems, offer APIs but they are often incompatible with each other.[26] The result is complicated migration between backends, which makes it difficult to integrate data spread across various locations.[26] This has been described as a problem of vendor lock-in.[26] The solution to this is for clouds to adopt common standards.[26]
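A common mitigation at the application level, consistent with the point above about incompatible storage APIs, is to keep provider-specific calls behind a small internal interface so that switching backends means writing one new adapter rather than rewriting every caller. The following is a minimal, hypothetical sketch; the class and method names are invented for illustration and do not correspond to any particular vendor SDK.

from abc import ABC, abstractmethod

class BlobStore(ABC):
    """Minimal storage interface the application codes against."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(BlobStore):
    """Stand-in backend; a real adapter would wrap a vendor SDK here."""

    def __init__(self):
        self._blobs = {}

    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data

    def get(self, key: str) -> bytes:
        return self._blobs[key]

def archive_report(store: BlobStore, report: bytes) -> None:
    # Application logic sees only BlobStore; swapping cloud vendors means
    # adding a new adapter class, not touching this function.
    store.put("reports/2024-q1", report)

store = InMemoryStore()
archive_report(store, b"quarterly figures ...")
assert store.get("reports/2024-q1").startswith(b"quarterly")

This does not remove lock-in at the data or pricing level, but it localizes the cost of a migration to the adapter layer.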
https://en.wikipedia.org/wiki/Vendor_lock-in
Information privacy is the relationship between the collection and dissemination of data, technology, the public expectation of privacy, contextual information norms, and the legal and political issues surrounding them.[1] It is also known as data privacy[2] or data protection. Various types of personal information often come under privacy concerns. This describes the ability to control what information one reveals about oneself over cable television, and who can access that information. For example, third parties can track IP TV programs someone has watched at any given time. "The addition of any information in a broadcasting stream is not required for an audience rating survey, additional devices are not requested to be installed in the houses of viewers or listeners, and without the necessity of their cooperations, audience ratings can be automatically performed in real-time."[3] In the United Kingdom in 2012, the Education Secretary Michael Gove described the National Pupil Database as a "rich dataset" whose value could be "maximised" by making it more openly accessible, including to private companies. Kelly Fiveash of The Register said that this could mean "a child's school life including exam results, attendance, teacher assessments and even characteristics" could be available, with third-party organizations being responsible for anonymizing any publications themselves, rather than the data being anonymized by the government before being handed over. An example of a data request that Gove indicated had been rejected in the past, but might be possible under an improved version of privacy regulations, was for "analysis on sexual exploitation".[4] Information about a person's financial transactions, including the amount of assets, positions held in stocks or funds, outstanding debts, and purchases, can be sensitive. If criminals gain access to information such as a person's accounts or credit card numbers, that person could become the victim of fraud or identity theft. Information about a person's purchases can reveal a great deal about that person's history, such as places they have visited, whom they have had contact with, products they have used, their activities and habits, or medications they have used. In some cases, corporations may use this information to target individuals with marketing customized towards those individuals' personal preferences, which that person may or may not approve of.[4] As heterogeneous information systems with differing privacy rules are interconnected and information is shared, policy appliances will be required to reconcile, enforce, and monitor an increasing number of privacy policy rules (and laws). There are two categories of technology to address privacy protection in commercial IT systems: communication and enforcement. Computer privacy can be improved through individualization. Currently, security messages are designed for the "average user", i.e., the same message for everyone. Researchers have posited that individualized messages and security "nudges", crafted based on users' individual differences and personality traits, can be used to further improve each person's compliance with computer security and privacy.[5] Privacy can also be improved through data encryption: by converting data into a non-readable format, encryption prevents unauthorized access. At present, common encryption technologies include AES and RSA. Using data encryption, only users holding the decryption keys can access the data.[6]
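As a concrete illustration of the symmetric-encryption idea above, here is a minimal Python sketch using the "cryptography" package's Fernet recipe (AES with HMAC authentication under the hood). The record contents are invented for the example, and key handling is deliberately simplified; in practice keys would live in a key-management system rather than next to the data.

# Minimal sketch: protect a record with symmetric encryption so that only
# holders of the key can read it.
from cryptography.fernet import Fernet

key = Fernet.generate_key()                  # only holders of this key can decrypt
f = Fernet(key)

record = b"patient-id=123; diagnosis=..."    # made-up sensitive record
token = f.encrypt(record)                    # ciphertext, safe to store or transmit

assert f.decrypt(token) == record            # succeeds only with the correct key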
The ability to control the information one reveals about oneself over the internet, and who can access that information, has become a growing concern. These concerns include whether email can be stored or read by third parties without consent, or whether third parties can continue to track the websites that someone has visited. Another concern is whether websites one visits can collect, store, and possibly share personally identifiable information about users. The advent of various search engines and the use of data mining created a capability for data about individuals to be collected and combined from a wide variety of sources very easily.[7][8][9] AI has facilitated the creation of inferential information about individuals and groups based on such enormous amounts of collected data, transforming the information economy.[10] The FTC has provided a set of guidelines that represent widely accepted concepts concerning fair information practices in an electronic marketplace, called the Fair Information Practice Principles, but these have been critiqued as insufficient in the context of AI-enabled inferential information.[10] On the internet many users give away a lot of information about themselves: unencrypted emails can be read by the administrators of an e-mail server if the connection is not encrypted (no HTTPS), and the internet service provider and other parties sniffing the network traffic of that connection are also able to know the contents. The same applies to any kind of traffic generated on the Internet, including web browsing, instant messaging, and others. In order not to give away too much personal information, emails can be encrypted, and browsing of webpages as well as other online activities can be done anonymously via anonymizers, or by open-source distributed anonymizers, so-called mix networks. Nym[11] and I2P[12] are examples of well-known mix nets. Email is not the only internet content with privacy concerns. In an age where increasing amounts of information are online, social networking sites pose additional privacy challenges. People may be tagged in photos or have valuable information exposed about themselves either by choice or unexpectedly by others, referred to as participatory surveillance. Data about location can also be accidentally published, for example, when someone posts a picture with a store as a background. Caution should be exercised when posting information online.
Social networks vary in what they allow users to make private and what remains publicly accessible.[13] Without strong security settings in place and careful attention to what remains public, a person can be profiled by searching for and collecting disparate pieces of information, leading to cases of cyberstalking[14] or reputation damage.[15] Cookies are used on websites so that users may allow the website to retrieve some information about the user's browsing, but websites usually do not state what data is being retrieved.[16] In 2018, the EU's General Data Protection Regulation (GDPR) came into force, requiring websites to visibly disclose their information privacy practices to consumers, commonly in the form of cookie notices.[16] This was intended to give consumers the choice of what information about their behavior they consent to letting websites track; however, its effectiveness is controversial.[16] Some websites may engage in deceptive practices such as placing cookie notices in places on the page that are not visible, or only notifying consumers that their information is being tracked without allowing them to change their privacy settings.[16] Apps like Instagram and Facebook collect user data for a personalized app experience; however, they also track user activity on other apps, which jeopardizes users' privacy and data. By controlling how visible these cookie notices are, companies can discreetly collect data, giving them more power over consumers.[16] As the location-tracking capabilities of mobile devices advance (location-based services), problems related to user privacy arise. Location data is among the most sensitive data currently being collected.[17] A list of potentially sensitive professional and personal information that could be inferred about an individual knowing only their mobility trace was published in 2009 by the Electronic Frontier Foundation.[18] These include the movements of a competitor's sales force, attendance at a particular church, or an individual's presence in a motel or at an abortion clinic. An MIT study[19][20] by de Montjoye et al. showed that four spatio-temporal points, approximate places and times, are enough to uniquely identify 95% of 1.5 million people in a mobility database. The study further shows that these constraints hold even when the resolution of the dataset is low; even coarse or blurred datasets therefore provide little anonymity (a toy sketch of this kind of matching appears after this passage). People may not wish for their medical records to be revealed to others due to the confidentiality and sensitivity of what the information could reveal about their health. For example, they might be concerned that it might affect their insurance coverage or employment. Or, it may be because they would not wish for others to know about any medical or psychological conditions or treatments that would bring embarrassment upon themselves. Revealing medical data could also reveal other details about one's personal life.[21] There are three major categories of medical privacy: informational (the degree of control over personal information), physical (the degree of physical inaccessibility to others), and psychological (the extent to which the doctor respects patients' cultural beliefs, inner thoughts, values, feelings, and religious practices and allows them to make personal decisions).[22] Physicians and psychiatrists in many cultures and countries have standards for doctor–patient relationships, which include maintaining confidentiality. In some cases, the physician–patient privilege is legally protected.
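The re-identification risk described in the mobility study above can be illustrated with a toy table; this is a minimal sketch with invented data, not the study's method or dataset. It simply counts how many users in a small mobility table are consistent with a handful of known (place, hour) observations.

# Toy illustration of spatio-temporal re-identification: given a few known
# (place, hour) points, count how many users match them. Invented data.
traces = {
    "user_a": {("cafe", 9), ("office", 10), ("gym", 18), ("home", 22)},
    "user_b": {("cafe", 9), ("office", 10), ("bar", 21), ("home", 23)},
    "user_c": {("station", 8), ("office", 10), ("gym", 18), ("home", 22)},
}

def matching_users(observations):
    """Users whose trace contains every observed (place, hour) point."""
    return [u for u, pts in traces.items() if observations <= pts]

# Two points still leave ambiguity; one more singles out a unique user.
print(matching_users({("cafe", 9), ("office", 10)}))               # ['user_a', 'user_b']
print(matching_users({("cafe", 9), ("office", 10), ("gym", 18)}))  # ['user_a']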
These practices are in place to protect the dignity of patients, and to ensure that patients feel free to reveal the complete and accurate information required for them to receive the correct treatment.[23] For the United States' laws governing the privacy of private health information, see HIPAA and the HITECH Act. The relevant Australian law is the Privacy Act 1988 (Australia), as well as state-based health records legislation. Political privacy has been a concern since voting systems emerged in ancient times. The secret ballot is the simplest and most widespread measure to ensure that political views are not known to anyone other than the voters themselves—it is nearly universal in modern democracy and considered to be a basic right of citizenship. In fact, even where other rights of privacy do not exist, this type of privacy very often does. There are several forms of voting fraud or privacy violations possible with the use of digital voting machines.[24] The legal protection of the right to privacy in general – and of data privacy in particular – varies greatly around the world.[25] Laws and regulations related to privacy and data protection are constantly changing, so it is considered important to keep abreast of any changes in the law and to continually reassess compliance with data privacy and security regulations.[26] Within academia, Institutional Review Boards function to assure that adequate measures are taken to ensure both the privacy and confidentiality of human subjects in research.[27] Privacy concerns exist wherever personally identifiable information or other sensitive information is collected, stored, used, and finally destroyed or deleted – in digital form or otherwise. Improper or non-existent disclosure control can be the root cause of privacy issues. Informed consent mechanisms, including dynamic consent, are important in communicating to data subjects the different uses of their personally identifiable information. Data privacy issues may arise in response to information from a wide range of sources, such as:[28] Data protection laws across the globe aim to secure personal information and safeguard individual privacy in a digital era. The European Union's General Data Protection Regulation (GDPR) sets a high benchmark, emphasizing consent, transparency, and robust accountability, and imposing strict penalties. Many countries adopt similar principles, mandating that organizations implement effective security measures, respect user rights, and notify breaches. In regions such as North America, Asia, and Oceania, data protection frameworks vary from sector-specific regulations to comprehensive legislation. Globally, these laws balance innovation with privacy, ensuring that personal data remains appropriately accessible and ethically managed while mitigating misuse and cyber threats. The United States Department of Commerce created the International Safe Harbor Privacy Principles certification program in response to the 1995 Directive on Data Protection (Directive 95/46/EC) of the European Commission.[29] Both the United States and the European Union officially state that they are committed to upholding the information privacy of individuals, but the former has caused friction between the two by failing to meet the standards of the EU's stricter laws on personal data.
The negotiation of the Safe Harbor program was, in part, intended to address this long-running issue.[30] Directive 95/46/EC declares in Chapter IV, Article 25 that personal data may only be transferred from the countries in the European Economic Area to countries which provide adequate privacy protection. Historically, establishing adequacy required the creation of national laws broadly equivalent to those implemented by Directive 95/46/EC. Although there are exceptions to this blanket prohibition – for example where the disclosure to a country outside the EEA is made with the consent of the relevant individual (Article 26(1)(a)) – they are limited in practical scope. As a result, Article 25 created a legal risk to organizations which transfer personal data from Europe to the United States. The program regulates the exchange of passenger name record information between the EU and the US. According to the EU directive, personal data may only be transferred to third countries if that country provides an adequate level of protection. Some exceptions to this rule are provided, for instance when the controller themself can guarantee that the recipient will comply with the data protection rules. The European Commission has set up the "Working party on the Protection of Individuals with regard to the Processing of Personal Data," commonly known as the "Article 29 Working Party". The Working Party gives advice about the level of protection in the European Union and third countries.[31] The Working Party negotiated with U.S. representatives about the protection of personal data, and the Safe Harbor Principles were the result. Notwithstanding that approval, the self-assessment approach of the Safe Harbor remains controversial with a number of European privacy regulators and commentators.[32] The Safe Harbor program addresses this issue in the following way: rather than a blanket law imposed on all organizations in the United States, a voluntary program is enforced by the Federal Trade Commission. U.S. organizations which register with this program, having self-assessed their compliance with a number of standards, are "deemed adequate" for the purposes of Article 25. Personal information can be sent to such organizations from the EEA without the sender being in breach of Article 25 or its EU national equivalents. The Safe Harbor was approved as providing adequate protection for personal data, for the purposes of Article 25(6), by the European Commission on 26 July 2000.[33] Under the Safe Harbor, adoptee organizations need to carefully consider their compliance with the onward transfer obligations, where personal data originating in the EU is transferred to the US Safe Harbor, and then onward to a third country. The alternative compliance approach of "binding corporate rules", recommended by many EU privacy regulators, resolves this issue.
In addition, any dispute arising in relation to the transfer of HR data to the US Safe Harbor must be heard by a panel of EU privacy regulators.[34] In July 2007, a new, controversial[35] Passenger Name Record agreement between the US and the EU was made.[36] A short time afterwards, the Bush administration gave exemptions from the 1974 Privacy Act to the Department of Homeland Security, the Arrival and Departure Information System (ADIS) and the Automated Target System.[37] In February 2008, Jonathan Faull, the head of the EU's Commission of Home Affairs, complained about the US bilateral policy concerning PNR.[38] In February 2008 the US had signed a memorandum of understanding (MOU) with the Czech Republic in exchange for a visa waiver scheme, without consulting Brussels beforehand.[35] The tensions between Washington and Brussels are mainly caused by a lower level of data protection in the US, especially since foreigners do not benefit from the US Privacy Act of 1974. Other countries approached for a bilateral MOU included the United Kingdom, Estonia, Germany and Greece.[39]
https://en.wikipedia.org/wiki/Data_privacy
In computer architecture, a trace cache or execution trace cache is a specialized instruction cache which stores the dynamic stream of instructions known as a trace. It helps to increase the instruction fetch bandwidth and decrease power consumption (in the case of the Intel Pentium 4) by storing traces of instructions that have already been fetched and decoded.[1] A trace processor[2] is an architecture designed around the trace cache; it processes instructions at trace-level granularity. The formal mathematical theory of traces is described by trace monoids. The earliest academic publication on the trace cache was "Trace Cache: a Low Latency Approach to High Bandwidth Instruction Fetching".[1] This widely acknowledged paper was presented by Eric Rotenberg, Steve Bennett, and Jim Smith at the 1996 International Symposium on Microarchitecture (MICRO) conference. An earlier publication is US patent 5381533,[3] by Alex Peleg and Uri Weiser of Intel, "Dynamic flow instruction cache memory organized around trace segments independent of virtual address line", a continuation of an application filed in 1992, later abandoned. Wide superscalar processors demand that multiple instructions be fetched in a single cycle for higher performance. Instructions to be fetched are not always in contiguous memory locations (basic blocks) because of branch and jump instructions, so processors need additional logic and hardware support to fetch and align such instructions from non-contiguous basic blocks. If multiple branches are predicted as not-taken, then processors can fetch instructions from multiple contiguous basic blocks in a single cycle. However, if any of the branches is predicted as taken, then the processor should fetch instructions from the taken path in that same cycle. This limits the fetch capability of a processor. Consider four basic blocks (A, B, C, D) corresponding to a simple if-else inside a loop. These blocks will be stored contiguously as ABCD in memory. If the branch at the end of D is predicted not-taken, the fetch unit can fetch the basic blocks A, B, C, which are placed contiguously. However, if it is predicted taken, the fetch unit has to fetch A, B, D, which are placed non-contiguously. Fetching such non-contiguous blocks in a single cycle is very difficult, and it is in situations like these that the trace cache comes to the processor's aid. Once instructions are fetched, the trace cache stores them in their dynamic sequence. When these instructions are encountered again, the trace cache allows the instruction fetch unit of a processor to fetch several basic blocks from it without having to worry about branches in the execution flow. Instructions are stored in the trace cache either after they have been decoded or as they are retired; however, the instruction sequence is speculative if it is stored just after the decode stage. A trace, also called a dynamic instruction sequence, is an entry in the trace cache. It can be characterized by a maximum number of instructions and a maximum number of basic blocks. Traces can start at any dynamic instruction. Multiple traces can have the same starting instruction, i.e., the same starting program counter (PC), but different basic blocks depending on the branch outcomes. In this example, ABC and ABD are valid traces: they both start at the same PC (the address of A) and contain different basic blocks according to D's prediction.
Traces usually terminate when one of the following occurs: A single trace will contain the following information: The following factors need to be considered while designing a trace cache. A trace cache is not on the critical path of instruction fetch.[4] Trace lines are stored in the trace cache based on the PC of the first instruction in the trace and a set of branch predictions. This allows different trace paths that start at the same address to be stored, each representing different branch outcomes. This method of tagging helps to provide path associativity to the trace cache. Another method is to use only the starting PC as the tag in the trace cache. In the instruction fetch stage of a pipeline, the current PC along with a set of branch predictions is checked in the trace cache for a hit. If there is a hit, a trace line is supplied to the fetch unit, which does not have to go to a regular cache or to memory for these instructions. The trace cache continues to feed the fetch unit until the trace line ends or until there is a misprediction in the pipeline. If there is a miss, a new trace starts to be built. The Pentium 4's execution trace cache stores micro-operations resulting from decoding x86 instructions, also providing the functionality of a micro-operation cache. As a result, the next time an instruction is needed, it does not have to be decoded into micro-ops again.[5] The disadvantages of a trace cache are: Within the L1 cache of the NetBurst CPUs, Intel incorporated its execution trace cache.[7][8] It stores decoded micro-operations, so that when executing a new instruction, instead of fetching and decoding the instruction again, the CPU directly accesses the decoded micro-ops from the trace cache, thereby saving considerable time. Moreover, the micro-ops are cached along their predicted path of execution, which means that when instructions are fetched by the CPU from the cache, they are already present in the correct order of execution. Intel later introduced a similar but simpler concept with Sandy Bridge called the micro-operation cache (UOP cache).
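The lookup behavior described above can be made concrete with a small software model. The following is a minimal sketch, not a hardware description: a trace cache indexed by (starting PC, branch-prediction bits) supplies a whole trace on a hit, and a miss triggers building a new trace. All names and the instruction strings are invented for illustration.

# Simplified model of a trace cache indexed by (start PC, branch predictions).
class TraceCache:
    def __init__(self, max_instructions=16):
        self.max_instructions = max_instructions
        self.lines = {}  # (start_pc, predictions) -> list of instructions

    def lookup(self, start_pc, predictions):
        """Return a cached trace on a hit, or None on a miss."""
        return self.lines.get((start_pc, tuple(predictions)))

    def fill(self, start_pc, predictions, trace):
        """Store a newly built trace, respecting the instruction limit."""
        self.lines[(start_pc, tuple(predictions))] = trace[: self.max_instructions]

# Toy example mirroring the ABCD discussion: the same starting PC maps to two
# different traces depending on how the branch at the end of B is predicted.
tc = TraceCache()
tc.fill(0x400, (False,), ["A1", "A2", "B1", "C1"])   # predicted not-taken -> A, B, C
tc.fill(0x400, (True,),  ["A1", "A2", "B1", "D1"])   # predicted taken     -> A, B, D

assert tc.lookup(0x400, (True,)) == ["A1", "A2", "B1", "D1"]   # hit: whole trace supplied
assert tc.lookup(0x800, (True,)) is None                       # miss: a new trace is built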
https://en.wikipedia.org/wiki/Trace_cache
In functional analysis, the Hahn–Banach theorem is a central result that allows the extension of bounded linear functionals defined on a vector subspace of some vector space to the whole space. The theorem also shows that there are sufficiently many continuous linear functionals defined on every normed vector space to make the study of the dual space worthwhile. Another version of the Hahn–Banach theorem is known as the Hahn–Banach separation theorem or the hyperplane separation theorem, and has numerous uses in convex geometry. The theorem is named for the mathematicians Hans Hahn and Stefan Banach, who proved it independently in the late 1920s. The special case of the theorem for the space $C[a,b]$ of continuous functions on an interval was proved earlier (in 1912) by Eduard Helly,[1] and a more general extension theorem, the M. Riesz extension theorem, from which the Hahn–Banach theorem can be derived, was proved in 1923 by Marcel Riesz.[2] The first Hahn–Banach theorem was proved by Eduard Helly in 1912, who showed that certain linear functionals defined on a subspace of a certain type of normed space ($\mathbb{C}^{\mathbb{N}}$) had an extension of the same norm. Helly did this through the technique of first proving that a one-dimensional extension exists (where the linear functional has its domain extended by one dimension) and then using induction. In 1927, Hahn defined general Banach spaces and used Helly's technique to prove a norm-preserving version of the Hahn–Banach theorem for Banach spaces (where a bounded linear functional on a subspace has a bounded linear extension of the same norm to the whole space). In 1929, Banach, who was unaware of Hahn's result, generalized it by replacing the norm-preserving version with the dominated extension version that uses sublinear functions. Whereas Helly's proof used mathematical induction, Hahn and Banach both used transfinite induction.[3] The Hahn–Banach theorem arose from attempts to solve infinite systems of linear equations. This is needed to solve problems such as the moment problem, whereby given all the potential moments of a function one must determine if a function having these moments exists, and, if so, find it in terms of those moments. Another such problem is the Fourier cosine series problem, whereby given all the potential Fourier cosine coefficients one must determine if a function having those coefficients exists, and, again, find it if so. Riesz and Helly solved the problem for certain classes of spaces (such as $L^p([0,1])$ and $C([a,b])$), where they discovered that the existence of a solution was equivalent to the existence and continuity of certain linear functionals. In effect, they needed to solve the following problem:[3] If $X$ happens to be a reflexive space, then to solve the vector problem it suffices to solve the following dual problem:[3] Riesz went on to define the spaces $L^p([0,1])$ ($1<p<\infty$) in 1910 and the $\ell^p$ spaces in 1913. While investigating these spaces he proved a special case of the Hahn–Banach theorem. Helly also proved a special case of the Hahn–Banach theorem in 1912. In 1910, Riesz solved the functional problem for some specific spaces, and in 1912, Helly solved it for a more general class of spaces. It wasn't until 1932 that Banach, in one of the first important applications of the Hahn–Banach theorem, solved the general functional problem.
The following theorem states the general functional problem and characterizes its solution.[3] Theorem[3](The functional problem)—Let(xi)i∈I{\displaystyle \left(x_{i}\right)_{i\in I}}be vectors in arealorcomplexnormed spaceX{\displaystyle X}and let(ci)i∈I{\displaystyle \left(c_{i}\right)_{i\in I}}be scalars alsoindexed byI≠∅.{\displaystyle I\neq \varnothing .} There exists a continuous linear functionalf{\displaystyle f}onX{\displaystyle X}such thatf(xi)=ci{\displaystyle f\left(x_{i}\right)=c_{i}}for alli∈I{\displaystyle i\in I}if and only if there exists aK>0{\displaystyle K>0}such that for any choice of scalars(si)i∈I{\displaystyle \left(s_{i}\right)_{i\in I}}where all but finitely manysi{\displaystyle s_{i}}are0,{\displaystyle 0,}the following holds:|∑i∈Isici|≤K‖∑i∈Isixi‖.{\displaystyle \left|\sum _{i\in I}s_{i}c_{i}\right|\leq K\left\|\sum _{i\in I}s_{i}x_{i}\right\|.} The Hahn–Banach theorem can be deduced from the above theorem.[3]IfX{\displaystyle X}isreflexivethen this theorem solves the vector problem. A real-valued functionf:M→R{\displaystyle f:M\to \mathbb {R} }defined on a subsetM{\displaystyle M}ofX{\displaystyle X}is said to bedominated (above) bya functionp:X→R{\displaystyle p:X\to \mathbb {R} }iff(m)≤p(m){\displaystyle f(m)\leq p(m)}for everym∈M.{\displaystyle m\in M.}For this reason, the following version of the Hahn–Banach theorem is calledthe dominatedextensiontheorem. Hahn–Banach dominated extension theorem(for real linear functionals)[4][5][6]—Ifp:X→R{\displaystyle p:X\to \mathbb {R} }is asublinear function(such as anormorseminormfor example) defined on a real vector spaceX{\displaystyle X}then anylinear functionaldefined on a vector subspace ofX{\displaystyle X}that isdominated abovebyp{\displaystyle p}has at least onelinear extensionto all ofX{\displaystyle X}that is also dominated above byp.{\displaystyle p.} Explicitly, ifp:X→R{\displaystyle p:X\to \mathbb {R} }is asublinear function, which by definition means that it satisfiesp(x+y)≤p(x)+p(y)andp(tx)=tp(x)for allx,y∈Xand all realt≥0,{\displaystyle p(x+y)\leq p(x)+p(y)\quad {\text{ and }}\quad p(tx)=tp(x)\qquad {\text{ for all }}\;x,y\in X\;{\text{ and all real }}\;t\geq 0,}and iff:M→R{\displaystyle f:M\to \mathbb {R} }is a linear functional defined on a vector subspaceM{\displaystyle M}ofX{\displaystyle X}such thatf(m)≤p(m)for allm∈M{\displaystyle f(m)\leq p(m)\quad {\text{ for all }}m\in M}then there exists a linear functionalF:X→R{\displaystyle F:X\to \mathbb {R} }such thatF(m)=f(m)for allm∈M,{\displaystyle F(m)=f(m)\quad {\text{ for all }}m\in M,}F(x)≤p(x)for allx∈X.{\displaystyle F(x)\leq p(x)\quad ~\;\,{\text{ for all }}x\in X.}Moreover, ifp{\displaystyle p}is aseminormthen|F(x)|≤p(x){\displaystyle |F(x)|\leq p(x)}necessarily holds for allx∈X.{\displaystyle x\in X.} The theorem remains true if the requirements onp{\displaystyle p}are relaxed to require only thatp{\displaystyle p}be aconvex function:[7][8]p(tx+(1−t)y)≤tp(x)+(1−t)p(y)for all0<t<1andx,y∈X.{\displaystyle p(tx+(1-t)y)\leq tp(x)+(1-t)p(y)\qquad {\text{ for all }}0<t<1{\text{ and }}x,y\in X.}A functionp:X→R{\displaystyle p:X\to \mathbb {R} }is convex and satisfiesp(0)≤0{\displaystyle p(0)\leq 0}if and only ifp(ax+by)≤ap(x)+bp(y){\displaystyle p(ax+by)\leq ap(x)+bp(y)}for all vectorsx,y∈X{\displaystyle x,y\in X}and all non-negative reala,b≥0{\displaystyle a,b\geq 0}such thata+b≤1.{\displaystyle a+b\leq 1.}Everysublinear functionis a convex function. 
On the other hand, ifp:X→R{\displaystyle p:X\to \mathbb {R} }is convex withp(0)≥0,{\displaystyle p(0)\geq 0,}then the function defined byp0(x)=definft>0p(tx)t{\displaystyle p_{0}(x)\;{\stackrel {\scriptscriptstyle {\text{def}}}{=}}\;\inf _{t>0}{\frac {p(tx)}{t}}}ispositively homogeneous(because for allx{\displaystyle x}andr>0{\displaystyle r>0}one hasp0(rx)=inft>0p(trx)t=rinft>0p(trx)tr=rinfτ>0p(τx)τ=rp0(x){\displaystyle p_{0}(rx)=\inf _{t>0}{\frac {p(trx)}{t}}=r\inf _{t>0}{\frac {p(trx)}{tr}}=r\inf _{\tau >0}{\frac {p(\tau x)}{\tau }}=rp_{0}(x)}), hence, being convex,it is sublinear. It is also bounded above byp0≤p,{\displaystyle p_{0}\leq p,}and satisfiesF≤p0{\displaystyle F\leq p_{0}}for every linear functionalF≤p.{\displaystyle F\leq p.}So the extension of the Hahn–Banach theorem to convex functionals does not have a much larger content than the classical one stated for sublinear functionals. IfF:X→R{\displaystyle F:X\to \mathbb {R} }is linear thenF≤p{\displaystyle F\leq p}if and only if[4]−p(−x)≤F(x)≤p(x)for allx∈X,{\displaystyle -p(-x)\leq F(x)\leq p(x)\quad {\text{ for all }}x\in X,}which is the (equivalent) conclusion that some authors[4]write instead ofF≤p.{\displaystyle F\leq p.}It follows that ifp:X→R{\displaystyle p:X\to \mathbb {R} }is alsosymmetric, meaning thatp(−x)=p(x){\displaystyle p(-x)=p(x)}holds for allx∈X,{\displaystyle x\in X,}thenF≤p{\displaystyle F\leq p}if and only|F|≤p.{\displaystyle |F|\leq p.}Everynormis aseminormand both are symmetricbalancedsublinear functions. A sublinear function is a seminorm if and only if it is abalanced function. On a real vector space (although not on a complex vector space), a sublinear function is a seminorm if and only if it is symmetric. Theidentity functionR→R{\displaystyle \mathbb {R} \to \mathbb {R} }onX:=R{\displaystyle X:=\mathbb {R} }is an example of a sublinear function that is not a seminorm. The dominated extension theorem for real linear functionals implies the following alternative statement of the Hahn–Banach theorem that can be applied to linear functionals on real or complex vector spaces. 
Hahn–Banach theorem[3][9]—Supposep:X→R{\displaystyle p:X\to \mathbb {R} }aseminormon a vector spaceX{\displaystyle X}over the fieldK,{\displaystyle \mathbf {K} ,}which is eitherR{\displaystyle \mathbb {R} }orC.{\displaystyle \mathbb {C} .}Iff:M→K{\displaystyle f:M\to \mathbf {K} }is a linear functional on a vector subspaceM{\displaystyle M}such that|f(m)|≤p(m)for allm∈M,{\displaystyle |f(m)|\leq p(m)\quad {\text{ for all }}m\in M,}then there exists a linear functionalF:X→K{\displaystyle F:X\to \mathbf {K} }such thatF(m)=f(m)for allm∈M,{\displaystyle F(m)=f(m)\quad \;{\text{ for all }}m\in M,}|F(x)|≤p(x)for allx∈X.{\displaystyle |F(x)|\leq p(x)\quad \;\,{\text{ for all }}x\in X.} The theorem remains true if the requirements onp{\displaystyle p}are relaxed to require only that for allx,y∈X{\displaystyle x,y\in X}and all scalarsa{\displaystyle a}andb{\displaystyle b}satisfying|a|+|b|≤1,{\displaystyle |a|+|b|\leq 1,}[8]p(ax+by)≤|a|p(x)+|b|p(y).{\displaystyle p(ax+by)\leq |a|p(x)+|b|p(y).}This condition holds if and only ifp{\displaystyle p}is aconvexandbalanced functionsatisfyingp(0)≤0,{\displaystyle p(0)\leq 0,}or equivalently, if and only if it is convex, satisfiesp(0)≤0,{\displaystyle p(0)\leq 0,}andp(ux)≤p(x){\displaystyle p(ux)\leq p(x)}for allx∈X{\displaystyle x\in X}and allunit lengthscalarsu.{\displaystyle u.} A complex-valued functionalF{\displaystyle F}is said to bedominated byp{\displaystyle p}if|F(x)|≤p(x){\displaystyle |F(x)|\leq p(x)}for allx{\displaystyle x}in the domain ofF.{\displaystyle F.}With this terminology, the above statements of the Hahn–Banach theorem can be restated more succinctly: Proof The following observations allow theHahn–Banach theorem for real vector spacesto be applied to (complex-valued) linear functionals on complex vector spaces. Every linear functionalF:X→C{\displaystyle F:X\to \mathbb {C} }on a complex vector space iscompletely determinedby itsreal partRe⁡F:X→R{\displaystyle \;\operatorname {Re} F:X\to \mathbb {R} \;}through the formula[6][proof 1]F(x)=Re⁡F(x)−iRe⁡F(ix)for allx∈X{\displaystyle F(x)\;=\;\operatorname {Re} F(x)-i\operatorname {Re} F(ix)\qquad {\text{ for all }}x\in X}and moreover, if‖⋅‖{\displaystyle \|\cdot \|}is anormonX{\displaystyle X}then theirdual normsare equal:‖F‖=‖Re⁡F‖.{\displaystyle \|F\|=\|\operatorname {Re} F\|.}[10]In particular, a linear functional onX{\displaystyle X}extends another one defined onM⊆X{\displaystyle M\subseteq X}if and only if their real parts are equal onM{\displaystyle M}(in other words, a linear functionalF{\displaystyle F}extendsf{\displaystyle f}if and only ifRe⁡F{\displaystyle \operatorname {Re} F}extendsRe⁡f{\displaystyle \operatorname {Re} f}). 
The real part of a linear functional onX{\displaystyle X}is always areal-linear functional(meaning that it is linear whenX{\displaystyle X}is considered as a real vector space) and ifR:X→R{\displaystyle R:X\to \mathbb {R} }is a real-linear functional on a complex vector space thenx↦R(x)−iR(ix){\displaystyle x\mapsto R(x)-iR(ix)}defines the unique linear functional onX{\displaystyle X}whose real part isR.{\displaystyle R.} IfF{\displaystyle F}is a linear functional on a (complex or real) vector spaceX{\displaystyle X}and ifp:X→R{\displaystyle p:X\to \mathbb {R} }is a seminorm then[6][proof 2]|F|≤pif and only ifRe⁡F≤p.{\displaystyle |F|\,\leq \,p\quad {\text{ if and only if }}\quad \operatorname {Re} F\,\leq \,p.}Stated in simpler language, a linear functional isdominatedby a seminormp{\displaystyle p}if and only if itsreal part is dominated abovebyp.{\displaystyle p.} Supposep:X→R{\displaystyle p:X\to \mathbb {R} }is a seminorm on a complex vector spaceX{\displaystyle X}and letf:M→C{\displaystyle f:M\to \mathbb {C} }be a linear functional defined on a vector subspaceM{\displaystyle M}ofX{\displaystyle X}that satisfies|f|≤p{\displaystyle |f|\leq p}onM.{\displaystyle M.}ConsiderX{\displaystyle X}as a real vector space and apply theHahn–Banach theorem for real vector spacesto thereal-linear functionalRe⁡f:M→R{\displaystyle \;\operatorname {Re} f:M\to \mathbb {R} \;}to obtain a real-linear extensionR:X→R{\displaystyle R:X\to \mathbb {R} }that is also dominated above byp,{\displaystyle p,}so that it satisfiesR≤p{\displaystyle R\leq p}onX{\displaystyle X}andR=Re⁡f{\displaystyle R=\operatorname {Re} f}onM.{\displaystyle M.}The mapF:X→C{\displaystyle F:X\to \mathbb {C} }defined byF(x)=R(x)−iR(ix){\displaystyle F(x)\;=\;R(x)-iR(ix)}is a linear functional onX{\displaystyle X}that extendsf{\displaystyle f}(because their real parts agree onM{\displaystyle M}) and satisfies|F|≤p{\displaystyle |F|\leq p}onX{\displaystyle X}(becauseRe⁡F≤p{\displaystyle \operatorname {Re} F\leq p}andp{\displaystyle p}is a seminorm).◼{\displaystyle \blacksquare } The proof above shows that whenp{\displaystyle p}is a seminorm then there is a one-to-one correspondence between dominated linear extensions off:M→C{\displaystyle f:M\to \mathbb {C} }and dominated real-linear extensions ofRe⁡f:M→R;{\displaystyle \operatorname {Re} f:M\to \mathbb {R} ;}the proof even gives a formula for explicitly constructing a linear extension off{\displaystyle f}from any given real-linear extension of its real part. Continuity A linear functionalF{\displaystyle F}on atopological vector spaceiscontinuousif and only if this is true of its real partRe⁡F;{\displaystyle \operatorname {Re} F;}if the domain is a normed space then‖F‖=‖Re⁡F‖{\displaystyle \|F\|=\|\operatorname {Re} F\|}(where one side is infinite if and only if the other side is infinite).[10]AssumeX{\displaystyle X}is atopological vector spaceandp:X→R{\displaystyle p:X\to \mathbb {R} }issublinear function. Ifp{\displaystyle p}is acontinuoussublinear function that dominates a linear functionalF{\displaystyle F}thenF{\displaystyle F}is necessarily continuous.[6]Moreover, a linear functionalF{\displaystyle F}is continuous if and only if itsabsolute value|F|{\displaystyle |F|}(which is aseminormthat dominatesF{\displaystyle F}) is continuous.[6]In particular, a linear functional is continuous if and only if it is dominated by some continuous sublinear function. 
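To make the correspondence above concrete, the following short computation (a standard verification, added here for readability rather than taken from the original text) checks that the map $F(x) = R(x) - iR(ix)$ built from a real-linear functional $R$ is indeed complex-linear:
$$F(ix) = R(ix) - i\,R(i(ix)) = R(ix) - i\,R(-x) = R(ix) + i\,R(x) = i\bigl(R(x) - i\,R(ix)\bigr) = i\,F(x).$$
Additivity of $F$ is inherited from that of $R$, and combined with real homogeneity this gives $F(cx) = cF(x)$ for every complex scalar $c = a + bi$. Taking real parts shows $\operatorname{Re} F = R$, so $F$ is the unique complex-linear functional with real part $R$.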
TheHahn–Banach theorem for real vector spacesultimately follows from Helly's initial result for the special case where the linear functional is extended fromM{\displaystyle M}to a larger vector space in whichM{\displaystyle M}hascodimension1.{\displaystyle 1.}[3] Lemma[6](One–dimensional dominated extension theorem)—Letp:X→R{\displaystyle p:X\to \mathbb {R} }be asublinear functionon a real vector spaceX,{\displaystyle X,}letf:M→R{\displaystyle f:M\to \mathbb {R} }alinear functionalon apropervector subspaceM⊊X{\displaystyle M\subsetneq X}such thatf≤p{\displaystyle f\leq p}onM{\displaystyle M}(meaningf(m)≤p(m){\displaystyle f(m)\leq p(m)}for allm∈M{\displaystyle m\in M}), and letx∈X{\displaystyle x\in X}be a vectornotinM{\displaystyle M}(soM⊕Rx=span⁡{M,x}{\displaystyle M\oplus \mathbb {R} x=\operatorname {span} \{M,x\}}). There exists a linear extensionF:M⊕Rx→R{\displaystyle F:M\oplus \mathbb {R} x\to \mathbb {R} }off{\displaystyle f}such thatF≤p{\displaystyle F\leq p}onM⊕Rx.{\displaystyle M\oplus \mathbb {R} x.} Given any real numberb,{\displaystyle b,}the mapFb:M⊕Rx→R{\displaystyle F_{b}:M\oplus \mathbb {R} x\to \mathbb {R} }defined byFb(m+rx)=f(m)+rb{\displaystyle F_{b}(m+rx)=f(m)+rb}is always a linear extension off{\displaystyle f}toM⊕Rx{\displaystyle M\oplus \mathbb {R} x}[note 1]but it might not satisfyFb≤p.{\displaystyle F_{b}\leq p.}It will be shown thatb{\displaystyle b}can always be chosen so as to guarantee thatFb≤p,{\displaystyle F_{b}\leq p,}which will complete the proof. Ifm,n∈M{\displaystyle m,n\in M}thenf(m)−f(n)=f(m−n)≤p(m−n)=p(m+x−x−n)≤p(m+x)+p(−x−n){\displaystyle f(m)-f(n)=f(m-n)\leq p(m-n)=p(m+x-x-n)\leq p(m+x)+p(-x-n)}which implies−p(−n−x)−f(n)≤p(m+x)−f(m).{\displaystyle -p(-n-x)-f(n)~\leq ~p(m+x)-f(m).}So definea=supn∈M[−p(−n−x)−f(n)]andc=infm∈M[p(m+x)−f(m)]{\displaystyle a=\sup _{n\in M}[-p(-n-x)-f(n)]\qquad {\text{ and }}\qquad c=\inf _{m\in M}[p(m+x)-f(m)]}wherea≤c{\displaystyle a\leq c}are real numbers. To guaranteeFb≤p,{\displaystyle F_{b}\leq p,}it suffices thata≤b≤c{\displaystyle a\leq b\leq c}(in fact, this is also necessary[note 2]) because thenb{\displaystyle b}satisfies "the decisive inequality"[6]−p(−n−x)−f(n)≤b≤p(m+x)−f(m)for allm,n∈M.{\displaystyle -p(-n-x)-f(n)~\leq ~b~\leq ~p(m+x)-f(m)\qquad {\text{ for all }}\;m,n\in M.} To see thatf(m)+rb≤p(m+rx){\displaystyle f(m)+rb\leq p(m+rx)}follows,[note 3]assumer≠0{\displaystyle r\neq 0}and substitute1rm{\displaystyle {\tfrac {1}{r}}m}in for bothm{\displaystyle m}andn{\displaystyle n}to obtain−p(−1rm−x)−1rf(m)≤b≤p(1rm+x)−1rf(m).{\displaystyle -p\left(-{\tfrac {1}{r}}m-x\right)-{\tfrac {1}{r}}f\left(m\right)~\leq ~b~\leq ~p\left({\tfrac {1}{r}}m+x\right)-{\tfrac {1}{r}}f\left(m\right).}Ifr>0{\displaystyle r>0}(respectively, ifr<0{\displaystyle r<0}) then the right (respectively, the left) hand side equals1r[p(m+rx)−f(m)]{\displaystyle {\tfrac {1}{r}}\left[p(m+rx)-f(m)\right]}so that multiplying byr{\displaystyle r}givesrb≤p(m+rx)−f(m).{\displaystyle rb\leq p(m+rx)-f(m).}◼{\displaystyle \blacksquare } This lemma remains true ifp:X→R{\displaystyle p:X\to \mathbb {R} }is merely aconvex functioninstead of a sublinear function.[7][8] Assume thatp{\displaystyle p}is convex, which means thatp(ty+(1−t)z)≤tp(y)+(1−t)p(z){\displaystyle p(ty+(1-t)z)\leq tp(y)+(1-t)p(z)}for all0≤t≤1{\displaystyle 0\leq t\leq 1}andy,z∈X.{\displaystyle y,z\in X.}LetM,{\displaystyle M,}f:M→R,{\displaystyle f:M\to \mathbb {R} ,}andx∈X∖M{\displaystyle x\in X\setminus M}be as inthe lemma's statement. 
Given anym,n∈M{\displaystyle m,n\in M}and any positive realr,s>0,{\displaystyle r,s>0,}the positive real numberst:=sr+s{\displaystyle t:={\tfrac {s}{r+s}}}andrr+s=1−t{\displaystyle {\tfrac {r}{r+s}}=1-t}sum to1{\displaystyle 1}so that the convexity ofp{\displaystyle p}onX{\displaystyle X}guaranteesp(sr+sm+rr+sn)=p(sr+s(m−rx)+rr+s(n+sx))≤sr+sp(m−rx)+rr+sp(n+sx){\displaystyle {\begin{alignedat}{9}p\left({\tfrac {s}{r+s}}m+{\tfrac {r}{r+s}}n\right)~&=~p{\big (}{\tfrac {s}{r+s}}(m-rx)&&+{\tfrac {r}{r+s}}(n+sx){\big )}&&\\&\leq ~{\tfrac {s}{r+s}}\;p(m-rx)&&+{\tfrac {r}{r+s}}\;p(n+sx)&&\\\end{alignedat}}}and hencesf(m)+rf(n)=(r+s)f(sr+sm+rr+sn)by linearity off≤(r+s)p(sr+sm+rr+sn)f≤ponM≤sp(m−rx)+rp(n+sx){\displaystyle {\begin{alignedat}{9}sf(m)+rf(n)~&=~(r+s)\;f\left({\tfrac {s}{r+s}}m+{\tfrac {r}{r+s}}n\right)&&\qquad {\text{ by linearity of }}f\\&\leq ~(r+s)\;p\left({\tfrac {s}{r+s}}m+{\tfrac {r}{r+s}}n\right)&&\qquad f\leq p{\text{ on }}M\\&\leq ~sp(m-rx)+rp(n+sx)\\\end{alignedat}}}thus proving that−sp(m−rx)+sf(m)≤rp(n+sx)−rf(n),{\displaystyle -sp(m-rx)+sf(m)~\leq ~rp(n+sx)-rf(n),}which after multiplying both sides by1rs{\displaystyle {\tfrac {1}{rs}}}becomes1r[−p(m−rx)+f(m)]≤1s[p(n+sx)−f(n)].{\displaystyle {\tfrac {1}{r}}[-p(m-rx)+f(m)]~\leq ~{\tfrac {1}{s}}[p(n+sx)-f(n)].}This implies that the values defined bya=supr>0m∈M1r[−p(m−rx)+f(m)]andc=infs>0n∈M1s[p(n+sx)−f(n)]{\displaystyle a=\sup _{\stackrel {m\in M}{r>0}}{\tfrac {1}{r}}[-p(m-rx)+f(m)]\qquad {\text{ and }}\qquad c=\inf _{\stackrel {n\in M}{s>0}}{\tfrac {1}{s}}[p(n+sx)-f(n)]}are real numbers that satisfya≤c.{\displaystyle a\leq c.}As in the above proof of theone–dimensional dominated extension theoremabove, for any realb∈R{\displaystyle b\in \mathbb {R} }defineFb:M⊕Rx→R{\displaystyle F_{b}:M\oplus \mathbb {R} x\to \mathbb {R} }byFb(m+rx)=f(m)+rb.{\displaystyle F_{b}(m+rx)=f(m)+rb.}It can be verified that ifa≤b≤c{\displaystyle a\leq b\leq c}thenFb≤p{\displaystyle F_{b}\leq p}whererb≤p(m+rx)−f(m){\displaystyle rb\leq p(m+rx)-f(m)}follows fromb≤c{\displaystyle b\leq c}whenr>0{\displaystyle r>0}(respectively, follows froma≤b{\displaystyle a\leq b}whenr<0{\displaystyle r<0}).◼{\displaystyle \blacksquare } Thelemma aboveis the key step in deducing the dominated extension theorem fromZorn's lemma. The set of all possible dominated linear extensions off{\displaystyle f}are partially ordered by extension of each other, so there is a maximal extensionF.{\displaystyle F.}By the codimension-1 result, ifF{\displaystyle F}is not defined on all ofX,{\displaystyle X,}then it can be further extended. ThusF{\displaystyle F}must be defined everywhere, as claimed.◼{\displaystyle \blacksquare } WhenM{\displaystyle M}has countable codimension, then using induction and the lemma completes the proof of the Hahn–Banach theorem. The standard proof of the general case usesZorn's lemmaalthough the strictly weakerultrafilter lemma[11](which is equivalent to thecompactness theoremand to theBoolean prime ideal theorem) may be used instead. Hahn–Banach can also be proved usingTychonoff's theoremforcompactHausdorff spaces[12](which is also equivalent to the ultrafilter lemma) TheMizar projecthas completely formalized and automatically checked the proof of the Hahn–Banach theorem in the HAHNBAN file.[13] The Hahn–Banach theorem can be used to guarantee the existence ofcontinuous linear extensionsofcontinuous linear functionals. 
Hahn–Banach continuous extension theorem[14]—Every continuous linear functionalf{\displaystyle f}defined on a vector subspaceM{\displaystyle M}of a (real or complex)locally convextopological vector spaceX{\displaystyle X}has a continuous linear extensionF{\displaystyle F}to all ofX.{\displaystyle X.}If in additionX{\displaystyle X}is anormed space, then this extension can be chosen so that itsdual normis equal to that off.{\displaystyle f.} Incategory-theoreticterms, the underlying field of the vector space is aninjective objectin the category of locally convex vector spaces. On anormed(orseminormed) space, a linear extensionF{\displaystyle F}of abounded linear functionalf{\displaystyle f}is said to benorm-preservingif it has the samedual normas the original functional:‖F‖=‖f‖.{\displaystyle \|F\|=\|f\|.}Because of this terminology, the second part ofthe above theoremis sometimes referred to as the "norm-preserving" version of the Hahn–Banach theorem.[15]Explicitly: Norm-preserving Hahn–Banach continuous extension theorem[15]—Every continuous linear functionalf{\displaystyle f}defined on a vector subspaceM{\displaystyle M}of a (real or complex) normed spaceX{\displaystyle X}has a continuous linear extensionF{\displaystyle F}to all ofX{\displaystyle X}that satisfies‖f‖=‖F‖.{\displaystyle \|f\|=\|F\|.} The following observations allow thecontinuous extension theoremto be deduced from theHahn–Banach theorem.[16] The absolute value of a linear functional is always a seminorm. A linear functionalF{\displaystyle F}on atopological vector spaceX{\displaystyle X}is continuous if and only if its absolute value|F|{\displaystyle |F|}is continuous, which happens if and only if there exists a continuous seminormp{\displaystyle p}onX{\displaystyle X}such that|F|≤p{\displaystyle |F|\leq p}on the domain ofF.{\displaystyle F.}[17]IfX{\displaystyle X}is a locally convex space then this statement remains true when the linear functionalF{\displaystyle F}is defined on apropervector subspace ofX.{\displaystyle X.} Letf{\displaystyle f}be a continuous linear functional defined on a vector subspaceM{\displaystyle M}of alocally convex topological vector spaceX.{\displaystyle X.}BecauseX{\displaystyle X}is locally convex, there exists a continuous seminormp:X→R{\displaystyle p:X\to \mathbb {R} }onX{\displaystyle X}thatdominatesf{\displaystyle f}(meaning that|f(m)|≤p(m){\displaystyle |f(m)|\leq p(m)}for allm∈M{\displaystyle m\in M}). By theHahn–Banach theorem, there exists a linear extension off{\displaystyle f}toX,{\displaystyle X,}call itF,{\displaystyle F,}that satisfies|F|≤p{\displaystyle |F|\leq p}onX.{\displaystyle X.}This linear functionalF{\displaystyle F}is continuous since|F|≤p{\displaystyle |F|\leq p}andp{\displaystyle p}is a continuous seminorm. Proof for normed spaces A linear functionalf{\displaystyle f}on anormed spaceiscontinuousif and only if it isbounded, which means that itsdual norm‖f‖=sup{|f(m)|:‖m‖≤1,m∈domain⁡f}{\displaystyle \|f\|=\sup\{|f(m)|:\|m\|\leq 1,m\in \operatorname {domain} f\}}is finite, in which case|f(m)|≤‖f‖‖m‖{\displaystyle |f(m)|\leq \|f\|\|m\|}holds for every pointm{\displaystyle m}in its domain. 
Moreover, ifc≥0{\displaystyle c\geq 0}is such that|f(m)|≤c‖m‖{\displaystyle |f(m)|\leq c\|m\|}for allm{\displaystyle m}in the functional's domain, then necessarily‖f‖≤c.{\displaystyle \|f\|\leq c.}IfF{\displaystyle F}is a linear extension of a linear functionalf{\displaystyle f}then their dual norms always satisfy‖f‖≤‖F‖{\displaystyle \|f\|\leq \|F\|}[proof 3]so that equality‖f‖=‖F‖{\displaystyle \|f\|=\|F\|}is equivalent to‖F‖≤‖f‖,{\displaystyle \|F\|\leq \|f\|,}which holds if and only if|F(x)|≤‖f‖‖x‖{\displaystyle |F(x)|\leq \|f\|\|x\|}for every pointx{\displaystyle x}in the extension's domain. This can be restated in terms of the function‖f‖‖⋅‖:X→R{\displaystyle \|f\|\,\|\cdot \|:X\to \mathbb {R} }defined byx↦‖f‖‖x‖,{\displaystyle x\mapsto \|f\|\,\|x\|,}which is always aseminorm:[note 4] Applying theHahn–Banach theoremtof{\displaystyle f}with this seminorm‖f‖‖⋅‖{\displaystyle \|f\|\,\|\cdot \|}thus produces a dominated linear extension whose norm is (necessarily) equal to that off,{\displaystyle f,}which proves the theorem: Letf{\displaystyle f}be a continuous linear functional defined on a vector subspaceM{\displaystyle M}of a normed spaceX.{\displaystyle X.}Then the functionp:X→R{\displaystyle p:X\to \mathbb {R} }defined byp(x)=‖f‖‖x‖{\displaystyle p(x)=\|f\|\,\|x\|}is a seminorm onX{\displaystyle X}thatdominatesf,{\displaystyle f,}meaning that|f(m)|≤p(m){\displaystyle |f(m)|\leq p(m)}holds for everym∈M.{\displaystyle m\in M.}By theHahn–Banach theorem, there exists a linear functionalF{\displaystyle F}onX{\displaystyle X}that extendsf{\displaystyle f}(which guarantees‖f‖≤‖F‖{\displaystyle \|f\|\leq \|F\|}) and that is also dominated byp,{\displaystyle p,}meaning that|F(x)|≤p(x){\displaystyle |F(x)|\leq p(x)}for everyx∈X.{\displaystyle x\in X.}The fact that‖f‖{\displaystyle \|f\|}is a real number such that|F(x)|≤‖f‖‖x‖{\displaystyle |F(x)|\leq \|f\|\|x\|}for everyx∈X,{\displaystyle x\in X,}guarantees‖F‖≤‖f‖.{\displaystyle \|F\|\leq \|f\|.}Since‖F‖=‖f‖{\displaystyle \|F\|=\|f\|}is finite, the linear functionalF{\displaystyle F}is bounded and thus continuous. Thecontinuous extension theoremmight fail if thetopological vector space(TVS)X{\displaystyle X}is notlocally convex. For example, for0<p<1,{\displaystyle 0<p<1,}theLebesgue spaceLp([0,1]){\displaystyle L^{p}([0,1])}is acompletemetrizable TVS(anF-space) that isnotlocally convex (in fact, its only convex open subsets are itselfLp([0,1]){\displaystyle L^{p}([0,1])}and the empty set) and the only continuous linear functional onLp([0,1]){\displaystyle L^{p}([0,1])}is the constant0{\displaystyle 0}function (Rudin 1991, §1.47). SinceLp([0,1]){\displaystyle L^{p}([0,1])}is Hausdorff, every finite-dimensional vector subspaceM⊆Lp([0,1]){\displaystyle M\subseteq L^{p}([0,1])}islinearly homeomorphictoEuclidean spaceRdim⁡M{\displaystyle \mathbb {R} ^{\dim M}}orCdim⁡M{\displaystyle \mathbb {C} ^{\dim M}}(byF. Riesz's theorem) and so every non-zero linear functionalf{\displaystyle f}onM{\displaystyle M}is continuous but none has a continuous linear extension to all ofLp([0,1]).{\displaystyle L^{p}([0,1]).}However, it is possible for a TVSX{\displaystyle X}to not be locally convex but nevertheless have enough continuous linear functionals that itscontinuous dual spaceX∗{\displaystyle X^{*}}separates points; for such a TVS, a continuous linear functional defined on a vector subspacemighthave a continuous linear extension to the whole space. 
If theTVSX{\displaystyle X}is notlocally convexthen there might not exist any continuous seminormp:X→R{\displaystyle p:X\to \mathbb {R} }defined onX{\displaystyle X}(not just onM{\displaystyle M}) that dominatesf,{\displaystyle f,}in which case the Hahn–Banach theorem can not be applied as it was inthe above proofof the continuous extension theorem. However, the proof's argument can be generalized to give a characterization of when a continuous linear functional has a continuous linear extension: IfX{\displaystyle X}is any TVS (not necessarily locally convex), then a continuous linear functionalf{\displaystyle f}defined on a vector subspaceM{\displaystyle M}has a continuous linear extensionF{\displaystyle F}to all ofX{\displaystyle X}if and only if there exists some continuous seminormp{\displaystyle p}onX{\displaystyle X}thatdominatesf.{\displaystyle f.}Specifically, if given a continuous linear extensionF{\displaystyle F}thenp:=|F|{\displaystyle p:=|F|}is a continuous seminorm onX{\displaystyle X}that dominatesf;{\displaystyle f;}and conversely, if given a continuous seminormp:X→R{\displaystyle p:X\to \mathbb {R} }onX{\displaystyle X}that dominatesf{\displaystyle f}then any dominated linear extension off{\displaystyle f}toX{\displaystyle X}(the existence of which is guaranteed by the Hahn–Banach theorem) will be a continuous linear extension. The key element of the Hahn–Banach theorem is fundamentally a result about the separation of two convex sets:{−p(−x−n)−f(n):n∈M},{\displaystyle \{-p(-x-n)-f(n):n\in M\},}and{p(m+x)−f(m):m∈M}.{\displaystyle \{p(m+x)-f(m):m\in M\}.}This sort of argument appears widely inconvex geometry,[18]optimization theory, andeconomics. Lemmas to this end derived from the original Hahn–Banach theorem are known as theHahn–Banach separation theorems.[19][20]They are generalizations of thehyperplane separation theorem, which states that two disjoint nonempty convex subsets of a finite-dimensional spaceRn{\displaystyle \mathbb {R} ^{n}}can be separated by someaffine hyperplane, which is afiber(level set) of the formf−1(s)={x:f(x)=s}{\displaystyle f^{-1}(s)=\{x:f(x)=s\}}wheref≠0{\displaystyle f\neq 0}is a non-zero linear functional ands{\displaystyle s}is a scalar. Theorem[19]—LetA{\displaystyle A}andB{\displaystyle B}be non-empty convex subsets of a reallocally convex topological vector spaceX.{\displaystyle X.}IfInt⁡A≠∅{\displaystyle \operatorname {Int} A\neq \varnothing }andB∩Int⁡A=∅{\displaystyle B\cap \operatorname {Int} A=\varnothing }then there exists a continuous linear functionalf{\displaystyle f}onX{\displaystyle X}such thatsupf(A)≤inff(B){\displaystyle \sup f(A)\leq \inf f(B)}andf(a)<inff(B){\displaystyle f(a)<\inf f(B)}for alla∈Int⁡A{\displaystyle a\in \operatorname {Int} A}(such anf{\displaystyle f}is necessarily non-zero). When the convex sets have additional properties, such as beingopenorcompactfor example, then the conclusion can be substantially strengthened: Theorem[3][21]—LetA{\displaystyle A}andB{\displaystyle B}be convex non-empty disjoint subsets of a realtopological vector spaceX.{\displaystyle X.} IfX{\displaystyle X}is complex (rather than real) then the same claims hold, but for thereal partoff.{\displaystyle f.} Then following important corollary is known as theGeometric Hahn–Banach theoremorMazur's theorem(also known asAscoli–Mazur theorem[22]). 
It follows from the first bullet above and the convexity ofM.{\displaystyle M.} Theorem (Mazur)[23]—LetM{\displaystyle M}be a vector subspace of the topological vector spaceX{\displaystyle X}and supposeK{\displaystyle K}is a non-empty convex open subset ofX{\displaystyle X}withK∩M=∅.{\displaystyle K\cap M=\varnothing .}Then there is a closedhyperplane(codimension-1 vector subspace)N⊆X{\displaystyle N\subseteq X}that containsM,{\displaystyle M,}but remains disjoint fromK.{\displaystyle K.} Mazur's theorem clarifies that vector subspaces (even those that are not closed) can be characterized by linear functionals. Corollary[24](Separation of a subspace and an open convex set)—LetM{\displaystyle M}be a vector subspace of alocally convex topological vector spaceX,{\displaystyle X,}andU{\displaystyle U}be a non-empty open convex subset disjoint fromM.{\displaystyle M.}Then there exists a continuous linear functionalf{\displaystyle f}onX{\displaystyle X}such thatf(m)=0{\displaystyle f(m)=0}for allm∈M{\displaystyle m\in M}andRe⁡f>0{\displaystyle \operatorname {Re} f>0}onU.{\displaystyle U.} Since points are triviallyconvex, geometric Hahn–Banach implies that functionals can detect theboundaryof a set. In particular, letX{\displaystyle X}be a real topological vector space andA⊆X{\displaystyle A\subseteq X}be convex withInt⁡A≠∅.{\displaystyle \operatorname {Int} A\neq \varnothing .}Ifa0∈A∖Int⁡A{\displaystyle a_{0}\in A\setminus \operatorname {Int} A}then there is a functional that is vanishing ata0,{\displaystyle a_{0},}but supported on the interior ofA.{\displaystyle A.}[19] Call a normed spaceX{\displaystyle X}smoothif at each pointx{\displaystyle x}in its unit ball there exists a unique closed hyperplane to the unit ball atx.{\displaystyle x.}Köthe showed in 1983 that a normed space is smooth at a pointx{\displaystyle x}if and only if the norm isGateaux differentiableat that point.[3] LetU{\displaystyle U}be a convexbalancedneighborhood of the origin in alocally convextopological vector spaceX{\displaystyle X}and supposex∈X{\displaystyle x\in X}is not an element ofU.{\displaystyle U.}Then there exists a continuous linear functionalf{\displaystyle f}onX{\displaystyle X}such that[3]sup|f(U)|≤|f(x)|.{\displaystyle \sup |f(U)|\leq |f(x)|.} The Hahn–Banach theorem is the first sign of an important philosophy infunctional analysis: to understand a space, one should understand itscontinuous functionals. For example, linear subspaces are characterized by functionals: ifXis a normed vector space with linear subspaceM(not necessarily closed) and ifz{\displaystyle z}is an element ofXnot in theclosureofM, then there exists a continuous linear mapf:X→K{\displaystyle f:X\to \mathbf {K} }withf(m)=0{\displaystyle f(m)=0}for allm∈M,{\displaystyle m\in M,}f(z)=1,{\displaystyle f(z)=1,}and‖f‖=dist⁡(z,M)−1.{\displaystyle \|f\|=\operatorname {dist} (z,M)^{-1}.}(To see this, note thatdist⁡(⋅,M){\displaystyle \operatorname {dist} (\cdot ,M)}is a sublinear function.) Moreover, ifz{\displaystyle z}is an element ofX, then there exists a continuous linear mapf:X→K{\displaystyle f:X\to \mathbf {K} }such thatf(z)=‖z‖{\displaystyle f(z)=\|z\|}and‖f‖≤1.{\displaystyle \|f\|\leq 1.}This implies that thenatural injectionJ{\displaystyle J}from a normed spaceXinto itsdouble dualV∗∗{\displaystyle V^{**}}is isometric. That last result also suggests that the Hahn–Banach theorem can often be used to locate a "nicer" topology in which to work. 
For example, many results in functional analysis assume that a space isHausdorfforlocally convex. However, supposeXis a topological vector space, not necessarily Hausdorff orlocally convex, but with a nonempty, proper, convex, open setM. Then geometric Hahn–Banach implies that there is a hyperplane separatingMfrom any other point. In particular, there must exist a nonzero functional onX— that is, thecontinuous dual spaceX∗{\displaystyle X^{*}}is non-trivial.[3][25]ConsideringXwith theweak topologyinduced byX∗,{\displaystyle X^{*},}thenXbecomes locally convex; by the second bullet of geometric Hahn–Banach, the weak topology on this new spaceseparates points. ThusXwith this weak topology becomesHausdorff. This sometimes allows some results from locally convex topological vector spaces to be applied to non-Hausdorff and non-locally convex spaces. The Hahn–Banach theorem is often useful when one wishes to apply the method ofa priori estimates. Suppose that we wish to solve the linear differential equationPu=f{\displaystyle Pu=f}foru,{\displaystyle u,}withf{\displaystyle f}given in some Banach spaceX. If we have control on the size ofu{\displaystyle u}in terms of‖f‖X{\displaystyle \|f\|_{X}}and we can think ofu{\displaystyle u}as a bounded linear functional on some suitable space of test functionsg,{\displaystyle g,}then we can viewf{\displaystyle f}as a linear functional by adjunction:(f,g)=(u,P∗g).{\displaystyle (f,g)=(u,P^{*}g).}At first, this functional is only defined on the image ofP,{\displaystyle P,}but using the Hahn–Banach theorem, we can try to extend it to the entire codomainX. The resulting functional is often defined to be aweak solution to the equation. Theorem[26]—A real Banach space isreflexiveif and only if every pair of non-empty disjoint closed convex subsets, one of which is bounded, can be strictly separated by a hyperplane. To illustrate an actual application of the Hahn–Banach theorem, we will now prove a result that follows almost entirely from the Hahn–Banach theorem. Proposition—SupposeX{\displaystyle X}is a Hausdorff locally convex TVS over the fieldK{\displaystyle \mathbf {K} }andY{\displaystyle Y}is a vector subspace ofX{\displaystyle X}that isTVS–isomorphictoKI{\displaystyle \mathbf {K} ^{I}}for some setI.{\displaystyle I.}ThenY{\displaystyle Y}is a closed andcomplementedvector subspace ofX.{\displaystyle X.} SinceKI{\displaystyle \mathbf {K} ^{I}}is a complete TVS so isY,{\displaystyle Y,}and since any complete subset of a Hausdorff TVS is closed,Y{\displaystyle Y}is a closed subset ofX.{\displaystyle X.}Letf=(fi)i∈I:Y→KI{\displaystyle f=\left(f_{i}\right)_{i\in I}:Y\to \mathbf {K} ^{I}}be a TVS isomorphism, so that eachfi:Y→K{\displaystyle f_{i}:Y\to \mathbf {K} }is a continuous surjective linear functional. 
By the Hahn–Banach theorem, we may extend eachfi{\displaystyle f_{i}}to a continuous linear functionalFi:X→K{\displaystyle F_{i}:X\to \mathbf {K} }onX.{\displaystyle X.}LetF:=(Fi)i∈I:X→KI{\displaystyle F:=\left(F_{i}\right)_{i\in I}:X\to \mathbf {K} ^{I}}soF{\displaystyle F}is a continuous linear surjection such that its restriction toY{\displaystyle Y}isF|Y=(Fi|Y)i∈I=(fi)i∈I=f.{\displaystyle F{\big \vert }_{Y}=\left(F_{i}{\big \vert }_{Y}\right)_{i\in I}=\left(f_{i}\right)_{i\in I}=f.}LetP:=f−1∘F:X→Y,{\displaystyle P:=f^{-1}\circ F:X\to Y,}which is a continuous linear map whose restriction toY{\displaystyle Y}isP|Y=f−1∘F|Y=f−1∘f=1Y,{\displaystyle P{\big \vert }_{Y}=f^{-1}\circ F{\big \vert }_{Y}=f^{-1}\circ f=\mathbf {1} _{Y},}where1Y{\displaystyle \mathbb {1} _{Y}}denotes theidentity maponY.{\displaystyle Y.}This shows thatP{\displaystyle P}is a continuouslinear projectionontoY{\displaystyle Y}(that is,P∘P=P{\displaystyle P\circ P=P}). ThusY{\displaystyle Y}is complemented inX{\displaystyle X}andX=Y⊕ker⁡P{\displaystyle X=Y\oplus \ker P}in the category of TVSs.◼{\displaystyle \blacksquare } The above result may be used to show that every closed vector subspace ofRN{\displaystyle \mathbb {R} ^{\mathbb {N} }}is complemented because any such space is either finite dimensional or else TVS–isomorphic toRN.{\displaystyle \mathbb {R} ^{\mathbb {N} }.} General template There are now many other versions of the Hahn–Banach theorem. The general template for the various versions of the Hahn–Banach theorem presented in this article is as follows: Theorem[3]—IfD{\displaystyle D}is anabsorbingdiskin a real or complex vector spaceX{\displaystyle X}and iff{\displaystyle f}be a linear functional defined on a vector subspaceM{\displaystyle M}ofX{\displaystyle X}such that|f|≤1{\displaystyle |f|\leq 1}onM∩D,{\displaystyle M\cap D,}then there exists a linear functionalF{\displaystyle F}onX{\displaystyle X}extendingf{\displaystyle f}such that|F|≤1{\displaystyle |F|\leq 1}onD.{\displaystyle D.} Hahn–Banach theorem for seminorms[27][28]—Ifp:M→R{\displaystyle p:M\to \mathbb {R} }is aseminormdefined on a vector subspaceM{\displaystyle M}ofX,{\displaystyle X,}and ifq:X→R{\displaystyle q:X\to \mathbb {R} }is a seminorm onX{\displaystyle X}such thatp≤q|M,{\displaystyle p\leq q{\big \vert }_{M},}then there exists a seminormP:X→R{\displaystyle P:X\to \mathbb {R} }onX{\displaystyle X}such thatP|M=p{\displaystyle P{\big \vert }_{M}=p}onM{\displaystyle M}andP≤q{\displaystyle P\leq q}onX.{\displaystyle X.} LetS{\displaystyle S}be the convex hull of{m∈M:p(m)≤1}∪{x∈X:q(x)≤1}.{\displaystyle \{m\in M:p(m)\leq 1\}\cup \{x\in X:q(x)\leq 1\}.}BecauseS{\displaystyle S}is anabsorbingdiskinX,{\displaystyle X,}itsMinkowski functionalP{\displaystyle P}is a seminorm. Thenp=P{\displaystyle p=P}onM{\displaystyle M}andP≤q{\displaystyle P\leq q}onX.{\displaystyle X.} So for example, suppose thatf{\displaystyle f}is abounded linear functionaldefined on a vector subspaceM{\displaystyle M}of anormed spaceX,{\displaystyle X,}so its theoperator norm‖f‖{\displaystyle \|f\|}is a non-negative real number. 
Then the linear functional'sabsolute valuep:=|f|{\displaystyle p:=|f|}is a seminorm onM{\displaystyle M}and the mapq:X→R{\displaystyle q:X\to \mathbb {R} }defined byq(x)=‖f‖‖x‖{\displaystyle q(x)=\|f\|\,\|x\|}is a seminorm onX{\displaystyle X}that satisfiesp≤q|M{\displaystyle p\leq q{\big \vert }_{M}}onM.{\displaystyle M.}TheHahn–Banach theorem for seminormsguarantees the existence of a seminormP:X→R{\displaystyle P:X\to \mathbb {R} }that is equal to|f|{\displaystyle |f|}onM{\displaystyle M}(sinceP|M=p=|f|{\displaystyle P{\big \vert }_{M}=p=|f|}) and is bounded above byP(x)≤‖f‖‖x‖{\displaystyle P(x)\leq \|f\|\,\|x\|}everywhere onX{\displaystyle X}(sinceP≤q{\displaystyle P\leq q}). Hahn–Banach sandwich theorem[3]—Letp:X→R{\displaystyle p:X\to \mathbb {R} }be a sublinear function on a real vector spaceX,{\displaystyle X,}letS⊆X{\displaystyle S\subseteq X}be any subset ofX,{\displaystyle X,}and letf:S→R{\displaystyle f:S\to \mathbb {R} }beanymap. If there exist positive real numbersa{\displaystyle a}andb{\displaystyle b}such that0≥infs∈S[p(s−ax−by)−f(s)−af(x)−bf(y)]for allx,y∈S,{\displaystyle 0\geq \inf _{s\in S}[p(s-ax-by)-f(s)-af(x)-bf(y)]\qquad {\text{ for all }}x,y\in S,}then there exists a linear functionalF:X→R{\displaystyle F:X\to \mathbb {R} }onX{\displaystyle X}such thatF≤p{\displaystyle F\leq p}onX{\displaystyle X}andf≤F≤p{\displaystyle f\leq F\leq p}onS.{\displaystyle S.} Theorem[3](Andenaes, 1970)—Letp:X→R{\displaystyle p:X\to \mathbb {R} }be a sublinear function on a real vector spaceX,{\displaystyle X,}letf:M→R{\displaystyle f:M\to \mathbb {R} }be a linear functional on a vector subspaceM{\displaystyle M}ofX{\displaystyle X}such thatf≤p{\displaystyle f\leq p}onM,{\displaystyle M,}and letS⊆X{\displaystyle S\subseteq X}be any subset ofX.{\displaystyle X.}Then there exists a linear functionalF:X→R{\displaystyle F:X\to \mathbb {R} }onX{\displaystyle X}that extendsf,{\displaystyle f,}satisfiesF≤p{\displaystyle F\leq p}onX,{\displaystyle X,}and is (pointwise) maximal onS{\displaystyle S}in the following sense: ifF^:X→R{\displaystyle {\widehat {F}}:X\to \mathbb {R} }is a linear functional onX{\displaystyle X}that extendsf{\displaystyle f}and satisfiesF^≤p{\displaystyle {\widehat {F}}\leq p}onX,{\displaystyle X,}thenF≤F^{\displaystyle F\leq {\widehat {F}}}onS{\displaystyle S}impliesF=F^{\displaystyle F={\widehat {F}}}onS.{\displaystyle S.} IfS={s}{\displaystyle S=\{s\}}is a singleton set (wheres∈X{\displaystyle s\in X}is some vector) and ifF:X→R{\displaystyle F:X\to \mathbb {R} }is such a maximal dominated linear extension off:M→R,{\displaystyle f:M\to \mathbb {R} ,}thenF(s)=infm∈M[f(s)+p(s−m)].{\displaystyle F(s)=\inf _{m\in M}[f(s)+p(s-m)].}[3] Vector–valued Hahn–Banach theorem[3]—IfX{\displaystyle X}andY{\displaystyle Y}are vector spaces over the same field and iff:M→Y{\displaystyle f:M\to Y}is a linear map defined on a vector subspaceM{\displaystyle M}ofX,{\displaystyle X,}then there exists a linear mapF:X→Y{\displaystyle F:X\to Y}that extendsf.{\displaystyle f.} A setΓ{\displaystyle \Gamma }of mapsX→X{\displaystyle X\to X}iscommutative(with respect tofunction composition∘{\displaystyle \,\circ \,}) ifF∘G=G∘F{\displaystyle F\circ G=G\circ F}for allF,G∈Γ.{\displaystyle F,G\in \Gamma .}Say that a functionf{\displaystyle f}defined on a subsetM{\displaystyle M}ofX{\displaystyle X}isΓ{\displaystyle \Gamma }-invariantifL(M)⊆M{\displaystyle L(M)\subseteq M}andf∘L=f{\displaystyle f\circ L=f}onM{\displaystyle M}for everyL∈Γ.{\displaystyle L\in \Gamma .} An invariant Hahn–Banach 
theorem[29]—SupposeΓ{\displaystyle \Gamma }is acommutative setof continuous linear maps from anormed spaceX{\displaystyle X}into itself and letf{\displaystyle f}be a continuous linear functional defined some vector subspaceM{\displaystyle M}ofX{\displaystyle X}that isΓ{\displaystyle \Gamma }-invariant, which means thatL(M)⊆M{\displaystyle L(M)\subseteq M}andf∘L=f{\displaystyle f\circ L=f}onM{\displaystyle M}for everyL∈Γ.{\displaystyle L\in \Gamma .}Thenf{\displaystyle f}has a continuous linear extensionF{\displaystyle F}to all ofX{\displaystyle X}that has the sameoperator norm‖f‖=‖F‖{\displaystyle \|f\|=\|F\|}and is alsoΓ{\displaystyle \Gamma }-invariant, meaning thatF∘L=F{\displaystyle F\circ L=F}onX{\displaystyle X}for everyL∈Γ.{\displaystyle L\in \Gamma .} This theorem may be summarized: The following theorem of Mazur–Orlicz (1953) is equivalent to the Hahn–Banach theorem. Mazur–Orlicz theorem[3]—Letp:X→R{\displaystyle p:X\to \mathbb {R} }be asublinear functionon a real or complex vector spaceX,{\displaystyle X,}letT{\displaystyle T}be any set, and letR:T→R{\displaystyle R:T\to \mathbb {R} }andv:T→X{\displaystyle v:T\to X}be any maps. The following statements are equivalent: The following theorem characterizes whenanyscalar function onX{\displaystyle X}(not necessarily linear) has a continuous linear extension to all ofX.{\displaystyle X.} Theorem(The extension principle[30])—Letf{\displaystyle f}a scalar-valued function on a subsetS{\displaystyle S}of atopological vector spaceX.{\displaystyle X.}Then there exists a continuous linear functionalF{\displaystyle F}onX{\displaystyle X}extendingf{\displaystyle f}if and only if there exists a continuous seminormp{\displaystyle p}onX{\displaystyle X}such that|∑i=1naif(si)|≤p(∑i=1naisi){\displaystyle \left|\sum _{i=1}^{n}a_{i}f(s_{i})\right|\leq p\left(\sum _{i=1}^{n}a_{i}s_{i}\right)}for all positive integersn{\displaystyle n}and all finite sequencesa1,…,an{\displaystyle a_{1},\ldots ,a_{n}}of scalars and elementss1,…,sn{\displaystyle s_{1},\ldots ,s_{n}}ofS.{\displaystyle S.} LetXbe a topological vector space. A vector subspaceMofXhasthe extension propertyif any continuous linear functional onMcan be extended to a continuous linear functional onX, and we say thatXhas theHahn–Banach extension property(HBEP) if every vector subspace ofXhas the extension property.[31] The Hahn–Banach theorem guarantees that every Hausdorff locally convex space has the HBEP. For completemetrizable topological vector spacesthere is a converse, due to Kalton: every complete metrizable TVS with the Hahn–Banach extension property is locally convex.[31]On the other hand, a vector spaceXof uncountable dimension, endowed with thefinest vector topology, then this is a topological vector spaces with the Hahn–Banach extension property that is neither locally convex nor metrizable.[31] A vector subspaceMof a TVSXhasthe separation propertyif for every element ofXsuch thatx∉M,{\displaystyle x\not \in M,}there exists a continuous linear functionalf{\displaystyle f}onXsuch thatf(x)≠0{\displaystyle f(x)\neq 0}andf(m)=0{\displaystyle f(m)=0}for allm∈M.{\displaystyle m\in M.}Clearly, the continuous dual space of a TVSXseparates points onXif and only if{0},{\displaystyle \{0\},}has the separation property. In 1992, Kakol proved that any infinite dimensional vector spaceX, there exist TVS-topologies onXthat do not have the HBEP despite having enough continuous linear functionals for the continuous dual space to separate points onX. 
However, if X is a TVS then every vector subspace of X has the extension property if and only if every vector subspace of X has the separation property.[31] The proof of the Hahn–Banach theorem for real vector spaces (HB) commonly uses Zorn's lemma, which in the axiomatic framework of Zermelo–Fraenkel set theory (ZF) is equivalent to the axiom of choice (AC). It was discovered by Łoś and Ryll-Nardzewski[12] and independently by Luxemburg[11] that HB can be proved using the ultrafilter lemma (UL), which is equivalent (under ZF) to the Boolean prime ideal theorem (BPI). BPI is strictly weaker than the axiom of choice, and it was later shown that HB is strictly weaker than BPI.[32] The ultrafilter lemma is equivalent (under ZF) to the Banach–Alaoglu theorem,[33] which is another foundational theorem in functional analysis. Although the Banach–Alaoglu theorem implies HB,[34] it is not equivalent to it (said differently, the Banach–Alaoglu theorem is strictly stronger than HB). However, HB is equivalent to a certain weakened version of the Banach–Alaoglu theorem for normed spaces.[35] The Hahn–Banach theorem is also equivalent to the statement that there is a non-constant probability charge on every Boolean algebra.[36] (BPI is equivalent to the statement that there are always non-constant probability charges which take only the values 0 and 1.) In ZF, the Hahn–Banach theorem suffices to derive the existence of a non-Lebesgue measurable set.[37] Moreover, the Hahn–Banach theorem implies the Banach–Tarski paradox.[38] For separable Banach spaces, D. K. Brown and S. G. Simpson proved that the Hahn–Banach theorem follows from WKL0, a weak subsystem of second-order arithmetic that takes a form of Kőnig's lemma restricted to binary trees as an axiom. In fact, they prove that under a weak set of assumptions, the two are equivalent, an example of reverse mathematics.[39][40]
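As a concrete finite-dimensional illustration of the hyperplane separation theorems discussed earlier, the following sketch (a made-up example, not from the article) separates two disjoint convex sets in R² with an explicit linear functional and verifies the inequality sup f(A) ≤ inf f(B) on sampled points.

```python
import random

random.seed(0)

def sample_disk(center, radius, n=2000):
    """Rejection-sample points from a closed disk (a convex set)."""
    pts = []
    while len(pts) < n:
        dx = random.uniform(-radius, radius)
        dy = random.uniform(-radius, radius)
        if dx * dx + dy * dy <= radius * radius:
            pts.append((center[0] + dx, center[1] + dy))
    return pts

def f(p):
    return p[0]                     # a continuous linear functional on R^2: f(x, y) = x

A = sample_disk((0.0, 0.0), 1.0)    # convex, with non-empty interior
B = sample_disk((4.0, 0.0), 1.0)    # convex, disjoint from A

sup_A = max(map(f, A))              # approaches sup f(A) = 1 as n grows
inf_B = min(map(f, B))              # approaches inf f(B) = 3 as n grows
assert sup_A <= inf_B
print(sup_A, inf_B)
```

Any level set f⁻¹(s) with sup f(A) ≤ s ≤ inf f(B), for example the vertical line x = 2, is then an affine hyperplane separating the two sets.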
https://en.wikipedia.org/wiki/Hahn-Banach_theorem
Ameanis a quantity representing the "center" of a collection of numbers and is intermediate to the extreme values of the set of numbers.[1]There are several kinds ofmeans(or "measures ofcentral tendency") inmathematics, especially instatistics. Each attempts to summarize or typify a given group ofdata, illustrating themagnitudeandsignof thedata set. Which of these measures is most illuminating depends on what is being measured, and on context and purpose.[2] Thearithmetic mean, also known as "arithmetic average", is the sum of the values divided by the number of values. The arithmetic mean of a set of numbersx1,x2, ..., xnis typically denoted using anoverhead bar,x¯{\displaystyle {\bar {x}}}.[note 1]If the numbers are from observing asampleof alarger group, the arithmetic mean is termed thesample mean(x¯{\displaystyle {\bar {x}}}) to distinguish it from thegroup mean(orexpected value) of the underlying distribution, denotedμ{\displaystyle \mu }orμx{\displaystyle \mu _{x}}.[note 2][3] Outside probability and statistics, a wide range of other notions of mean are often used ingeometryandmathematical analysis; examples are given below. In mathematics, the three classicalPythagorean meansare thearithmetic mean(AM), thegeometric mean(GM), and theharmonic mean(HM). These means were studied with proportions byPythagoreansand later generations of Greek mathematicians[4]because of their importance in geometry and music. Thearithmetic mean(or simplymeanoraverage) of a list of numbers, is the sum of all of the numbers divided by their count. Similarly, the mean of a samplex1,x2,…,xn{\displaystyle x_{1},x_{2},\ldots ,x_{n}}, usually denoted byx¯{\displaystyle {\bar {x}}}, is the sum of the sampled values divided by the number of items in the sample. For example, the arithmetic mean of five values: 4, 36, 45, 50, 75 is: Thegeometric meanis an average that is useful for sets of positive numbers, that are interpreted according to their product (as is the case with rates of growth) and not their sum (as is the case with the arithmetic mean): For example, the geometric mean of five values: 4, 36, 45, 50, 75 is: Theharmonic meanis an average which is useful for sets of numbers which are defined in relation to someunit, as in the case ofspeed(i.e., distance per unit of time): For example, the harmonic mean of the five values: 4, 36, 45, 50, 75 is If we have five pumps that can empty a tank of a certain size in respectively 4, 36, 45, 50, and 75 minutes, then the harmonic mean of15{\displaystyle 15}tells us that these five different pumps working together will pump at the same rate as much as five pumps that can each empty the tank in15{\displaystyle 15}minutes. AM, GM, and HM ofnonnegativereal numberssatisfy these inequalities:[5] Equality holds if all the elements of the given sample are equal. Indescriptive statistics, the mean may be confused with themedian,modeormid-range, as any of these may incorrectly be called an "average" (more formally, a measure ofcentral tendency). The mean of a set of observations is the arithmetic average of the values; however, forskewed distributions, the mean is not necessarily the same as the middle value (median), or the most likely value (mode). For example, mean income is typically skewed upwards by a small number of people with very large incomes, so that the majority have an income lower than the mean. By contrast, the median income is the level at which half the population is below and half is above. 
The mode income is the most likely income and favors the larger number of people with lower incomes. While the median and mode are often more intuitive measures for such skewed data, many skewed distributions are in fact best described by their mean, including theexponentialandPoissondistributions. The mean of aprobability distributionis the long-run arithmetic average value of arandom variablehaving that distribution. If the random variable is denoted byX{\displaystyle X}, then the mean is also known as theexpected valueofX{\displaystyle X}(denotedE(X){\displaystyle E(X)}). For adiscrete probability distribution, the mean is given by∑xP(x){\displaystyle \textstyle \sum xP(x)}, where the sum is taken over all possible values of the random variable andP(x){\displaystyle P(x)}is theprobability mass function. For acontinuous distribution, the mean is∫−∞∞xf(x)dx{\displaystyle \textstyle \int _{-\infty }^{\infty }xf(x)\,dx}, wheref(x){\displaystyle f(x)}is theprobability density function.[7]In all cases, including those in which the distribution is neither discrete nor continuous, the mean is theLebesgue integralof the random variable with respect to itsprobability measure. The mean need not exist or be finite; for some probability distributions the mean is infinite (+∞or−∞), while for others the mean isundefined. Thegeneralized mean, also known as the power mean or Hölder mean, is an abstraction of thequadratic, arithmetic, geometric, and harmonic means. It is defined for a set ofnpositive numbersxiby x¯(m)=(1n∑i=1nxim)1m{\displaystyle {\bar {x}}(m)=\left({\frac {1}{n}}\sum _{i=1}^{n}x_{i}^{m}\right)^{\frac {1}{m}}}[1] By choosing different values for the parameterm, the following types of means are obtained: This can be generalized further as thegeneralizedf-mean and again a suitable choice of an invertiblefwill give Theweighted arithmetic mean(or weighted average) is used if one wants to combine average values from different sized samples of the same population: Wherexi¯{\displaystyle {\bar {x_{i}}}}andwi{\displaystyle w_{i}}are the mean and size of samplei{\displaystyle i}respectively. In other applications, they represent a measure for the reliability of the influence upon the mean by the respective values. Sometimes, a set of numbers might contain outliers (i.e., data values which are much lower or much higher than the others). Often, outliers are erroneous data caused byartifacts. In this case, one can use atruncated mean. It involves discarding given parts of the data at the top or the bottom end, typically an equal amount at each end and then taking the arithmetic mean of the remaining data. The number of values removed is indicated as a percentage of the total number of values. Theinterquartile meanis a specific example of a truncated mean. It is simply the arithmetic mean after removing the lowest and the highest quarter of values. assuming the values have been ordered, so is simply a specific example of a weighted mean for a specific set of weights. In some circumstances, mathematicians may calculate a mean of an infinite (or even anuncountable) set of values. This can happen when calculating the mean valueyavg{\displaystyle y_{\text{avg}}}of a functionf(x){\displaystyle f(x)}. Intuitively, a mean of a function can be thought of as calculating the area under a section of a curve, and then dividing by the length of that section. This can be done crudely by counting squares on graph paper, or more precisely byintegration. 
The integration formula is written as: In this case, care must be taken to make sure that the integral converges. But the mean may be finite even if the function itself tends to infinity at some points. Angles, times of day, and other cyclical quantities requiremodular arithmeticto add and otherwise combine numbers. These quantities can be averaged using thecircular mean. In all these situations, it is possible that no mean exists, for example if all points being averaged are equidistant. Consider acolor wheel—there is no mean to the set of all colors. Additionally, there may not be auniquemean for a set of values: for example, when averaging points on a clock, the mean of the locations of 11:00 and 13:00 is 12:00, but this location is equivalent to that of 00:00. TheFréchet meangives a manner for determining the "center" of a mass distribution on asurfaceor, more generally,Riemannian manifold. Unlike many other means, the Fréchet mean is defined on a space whose elements cannot necessarily be added together or multiplied by scalars. It is sometimes also known as theKarcher mean(named after Hermann Karcher). In geometry, there are thousands of different definitions forthe center of a trianglethat can all be interpreted as the mean of a triangular set of points in the plane.[8] This is an approximation to the mean for a moderately skewed distribution.[9]It is used inhydrocarbon explorationand is defined as: whereP10{\textstyle P_{10}},P50{\textstyle P_{50}}andP90{\textstyle P_{90}}are the 10th, 50th and 90th percentiles of the distribution, respectively.
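Several of the means discussed in this article can be illustrated with a short script. The sketch below is illustrative code using only the Python standard library (statistics.geometric_mean assumes Python 3.8 or later): it computes the three Pythagorean means of the example values 4, 36, 45, 50, 75, recovers them as special cases of the generalized (power) mean, and computes the circular mean of the 11:00 and 13:00 clock positions.

```python
import math
import statistics

values = [4, 36, 45, 50, 75]

am = statistics.mean(values)            # (4 + 36 + 45 + 50 + 75) / 5 = 42
gm = statistics.geometric_mean(values)  # (4 * 36 * 45 * 50 * 75) ** (1/5) = 30
hm = statistics.harmonic_mean(values)   # 5 / (1/4 + 1/36 + 1/45 + 1/50 + 1/75) = 15
assert hm <= gm <= am                   # the HM <= GM <= AM inequality; equality only if all values coincide

def power_mean(xs, m):
    """Generalized (power / Hölder) mean of positive numbers; m = 0 is the geometric-mean limit."""
    if m == 0:
        return math.exp(sum(math.log(x) for x in xs) / len(xs))
    return (sum(x ** m for x in xs) / len(xs)) ** (1 / m)

print(power_mean(values, -1), power_mean(values, 0), power_mean(values, 1))  # 15, 30, 42
print(power_mean(values, 2))                            # ~47.9, the quadratic mean (root mean square)
print(power_mean(values, -60), power_mean(values, 60))  # approach min = 4 and max = 75

def circular_mean_hours(hours):
    """Circular mean of positions on a 12-hour dial, via the mean of the corresponding unit vectors."""
    angles = [2 * math.pi * (h % 12) / 12 for h in hours]
    s = sum(math.sin(a) for a in angles)
    c = sum(math.cos(a) for a in angles)
    if abs(s) < 1e-12 and abs(c) < 1e-12:
        raise ValueError("points are balanced around the circle; no meaningful mean exists")
    hour = (math.atan2(s, c) / (2 * math.pi) * 12) % 12
    return round(hour, 9) % 12          # snap 11.999... to 0 so the dial position is reported consistently

print(circular_mean_hours([11, 13]))     # 0.0: the 12 o'clock position, the same point as 00:00
```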
https://en.wikipedia.org/wiki/Mean
Acomputer algebra system(CAS) orsymbolic algebra system(SAS) is anymathematical softwarewith the ability to manipulatemathematical expressionsin a way similar to the traditional manual computations ofmathematiciansandscientists. The development of the computer algebra systems in the second half of the 20th century is part of the discipline of "computer algebra" or "symbolic computation", which has spurred work inalgorithmsovermathematical objectssuch aspolynomials. Computer algebra systems may be divided into two classes: specialized and general-purpose. The specialized ones are devoted to a specific part of mathematics, such asnumber theory,group theory, or teaching ofelementary mathematics. General-purpose computer algebra systems aim to be useful to a user working in any scientific field that requires manipulation of mathematical expressions. To be useful, a general-purpose computer algebra system must include various features such as: The library must not only provide for the needs of the users, but also the needs of the simplifier. For example, the computation ofpolynomial greatest common divisorsis systematically used for the simplification of expressions involving fractions. This large amount of required computer capabilities explains the small number of general-purpose computer algebra systems. Significant systems includeAxiom,GAP,Maxima,Magma,Maple,Mathematica, andSageMath. In the 1950s, while computers were mainly used for numerical computations, there were some research projects into using them for symbolic manipulation. Computer algebra systems began to appear in the 1960s and evolved out of two quite different sources—the requirements of theoretical physicists and research intoartificial intelligence. A prime example for the first development was the pioneering work conducted by the later Nobel Prize laureate in physicsMartinus Veltman, who designed a program for symbolic mathematics, especially high-energy physics, calledSchoonschip(Dutch for "clean ship") in 1963. Other early systems includeFORMAC. UsingLispas the programming basis,Carl EngelmancreatedMATHLABin 1964 atMITREwithin an artificial-intelligence research environment. Later MATHLAB was made available to users on PDP-6 and PDP-10 systems running TOPS-10 or TENEX in universities. Today it can still be used onSIMHemulations of the PDP-10. MATHLAB ("mathematicallaboratory") should not be confused withMATLAB("matrixlaboratory"), which is a system for numerical computation built 15 years later at theUniversity of New Mexico. In 1987,Hewlett-Packardintroduced the first hand-held calculator CAS with theHP-28 series.[1]Other early handheld calculators with symbolic algebra capabilities included theTexas InstrumentsTI-89 seriesandTI-92calculator, and theCasioCFX-9970G.[2] The first popular computer algebra systems weremuMATH,Reduce,Derive(based on muMATH), andMacsyma; acopyleftversion of Macsyma is calledMaxima.Reducebecame free software in 2008.[3]Commercial systems includeMathematica[4]andMaple, which are commonly used by research mathematicians, scientists, and engineers. Freely available alternatives includeSageMath(which can act as afront-endto several other free and nonfree CAS). Other significant systems includeAxiom,GAP,MaximaandMagma. 
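To make the remark above about polynomial greatest common divisors concrete, here is a small sketch using SymPy, an open-source Python CAS chosen purely for illustration (it is not one of the systems named in this article). Cancelling the common factor of a rational expression is exactly a GCD computation on its numerator and denominator.

```python
import sympy as sp

x = sp.symbols("x")

num = x**2 - 1          # factors as (x - 1)(x + 1)
den = x**2 + 2*x + 1    # factors as (x + 1)**2

print(sp.gcd(num, den))       # x + 1
print(sp.cancel(num / den))   # (x - 1)/(x + 1): the fraction simplified by dividing out the GCD
```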
The movement to web-based applications in the early 2000s saw the release ofWolframAlpha, an online search engine and CAS which includes the capabilities ofMathematica.[5] More recently, computer algebra systems have been implemented usingartificial neural networks, though as of 2020 they are not commercially available.[6] The symbolic manipulations supported typically include: In the above, the wordsomeindicates that the operation cannot always be performed. Many also include: Some include: Some computer algebra systems focus on specialized disciplines; these are typically developed in academia and are free. They can be inefficient for numeric operations as compared tonumeric systems. The expressions manipulated by the CAS typically includepolynomialsin multiple variables; standard functions of expressions (sine,exponential, etc.); various special functions (Γ,ζ,erf,Bessel functions, etc.); arbitrary functions of expressions; optimization; derivatives, integrals, simplifications, sums, and products of expressions; truncatedserieswith expressions as coefficients,matricesof expressions, and so on. Numeric domains supported typically includefloating-point representation of real numbers,integers(of unbounded size),complex(floating-point representation),interval representation of reals,rational number(exact representation) andalgebraic numbers. There have been many advocates for increasing the use of computer algebra systems in primary and secondary-school classrooms. The primary reason for such advocacy is that computer algebra systems represent real-world math more than do paper-and-pencil or hand calculator based mathematics.[12]This push for increasing computer usage in mathematics classrooms has been supported by some boards of education. It has even been mandated in the curriculum of some regions.[13] Computer algebra systems have been extensively used in higher education.[14][15]Many universities offer either specific courses on developing their use, or they implicitly expect students to use them for their course work. The companies that develop computer algebra systems have pushed to increase their prevalence among university and college programs.[16][17] CAS-equipped calculators are not permitted on theACT, thePLAN, and in some classrooms[18]though it may be permitted on all ofCollege Board's calculator-permitted tests, including theSAT, someSAT Subject Testsand theAP Calculus,Chemistry,Physics, andStatisticsexams.[19]
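As a further hedged sketch of the kinds of symbolic manipulation described above (derivatives, integrals, truncated series, exact rational arithmetic, among other typical operations), again using SymPy as a stand-in for a general-purpose CAS:

```python
import sympy as sp

x = sp.symbols("x")

print(sp.diff(sp.sin(x) * sp.exp(x), x))        # exp(x)*sin(x) + exp(x)*cos(x)
print(sp.integrate(sp.exp(x) * sp.sin(x), x))   # exp(x)*sin(x)/2 - exp(x)*cos(x)/2
print(sp.limit(sp.sin(x) / x, x, 0))            # 1
print(sp.series(sp.cos(x), x, 0, 6))            # 1 - x**2/2 + x**4/24 + O(x**6), a truncated series
print(sp.solve(x**2 - 2, x))                    # [-sqrt(2), sqrt(2)]
print(sp.Rational(1, 3) + sp.Rational(1, 6))    # 1/2, exact rational arithmetic
```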
https://en.wikipedia.org/wiki/Computer_algebra_system
In mathematical optimization and decision theory, a loss function or cost function (sometimes also called an error function)[1] is a function that maps an event or values of one or more variables onto a real number intuitively representing some "cost" associated with the event. An optimization problem seeks to minimize a loss function. An objective function is either a loss function or its opposite (in specific domains, variously called a reward function, a profit function, a utility function, a fitness function, etc.), in which case it is to be maximized. The loss function could include terms from several levels of the hierarchy. In statistics, typically a loss function is used for parameter estimation, and the event in question is some function of the difference between estimated and true values for an instance of data. The concept, as old as Laplace, was reintroduced in statistics by Abraham Wald in the middle of the 20th century.[2] In the context of economics, for example, this is usually economic cost or regret. In classification, it is the penalty for an incorrect classification of an example. In actuarial science, it is used in an insurance context to model benefits paid over premiums, particularly since the works of Harald Cramér in the 1920s.[3] In optimal control, the loss is the penalty for failing to achieve a desired value. In financial risk management, the function is mapped to a monetary loss. Leonard J. Savage argued that when using non-Bayesian methods such as minimax, the loss function should be based on the idea of regret, i.e., the loss associated with a decision should be the difference between the consequences of the best decision that could have been made had the circumstances been known and the decision that was in fact taken before they were known. The use of a quadratic loss function is common, for example when using least squares techniques. It is often more mathematically tractable than other loss functions because of the properties of variances, as well as being symmetric: an error above the target causes the same loss as the same magnitude of error below the target. If the target is t, then a quadratic loss function is L(x) = C(t − x)² for some constant C; the value of the constant makes no difference to a decision, and can be ignored by setting it equal to 1. This is also known as the squared error loss (SEL).[1] Many common statistics, including t-tests, regression models, design of experiments, and much else, use least squares methods applied using linear regression theory, which is based on the quadratic loss function. The quadratic loss function is also used in linear-quadratic optimal control problems. In these problems, even in the absence of uncertainty, it may not be possible to achieve the desired values of all target variables. Often loss is expressed as a quadratic form in the deviations of the variables of interest from their desired values; this approach is tractable because it results in linear first-order conditions. In the context of stochastic control, the expected value of the quadratic form is used. The quadratic loss assigns more importance to outliers than to the true data due to its square nature, so alternatives like the Huber, Log-Cosh and SMAE losses are used when the data has many large outliers. In statistics and decision theory, a frequently used loss function is the 0-1 loss function L(ŷ, y) = [ŷ ≠ y] (in Iverson bracket notation), i.e. it evaluates to 1 when ŷ ≠ y, and 0 otherwise.
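For concreteness, the quadratic and 0-1 losses just described can be written out directly; the sketch below (illustrative code, not from the article) also includes the Huber loss, mentioned above as an outlier-robust alternative, using its usual definition with threshold δ.

```python
def squared_error_loss(target, estimate, C=1.0):
    """Quadratic (squared error) loss C*(target - estimate)**2; C only rescales and can be set to 1."""
    return C * (target - estimate) ** 2

def zero_one_loss(y_true, y_pred):
    """0-1 loss, the Iverson bracket [y_pred != y_true]: 1 for a misclassification, 0 otherwise."""
    return 1 if y_pred != y_true else 0

def huber_loss(target, estimate, delta=1.0):
    """Huber loss: quadratic for small errors, linear for large ones, so outliers dominate less."""
    a = abs(target - estimate)
    return 0.5 * a ** 2 if a <= delta else delta * (a - 0.5 * delta)

print(squared_error_loss(3.0, 2.5))   # 0.25
print(zero_one_loss("cat", "dog"))    # 1
print(huber_loss(0.0, 10.0))          # 9.5, versus 100.0 under the squared error loss
```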
In many applications, objective functions, including loss functions as a particular case, are determined by the problem formulation. In other situations, the decision maker's preference must be elicited and represented by a scalar-valued function (also called a utility function) in a form suitable for optimization — the problem that Ragnar Frisch highlighted in his Nobel Prize lecture.[4] The existing methods for constructing objective functions are collected in the proceedings of two dedicated conferences.[5][6] In particular, Andranik Tangian showed that the most usable objective functions — quadratic and additive — are determined by a few indifference points. He used this property in models for constructing these objective functions from either ordinal or cardinal data that were elicited through computer-assisted interviews with decision makers.[7][8] Among other things, he constructed objective functions to optimally distribute budgets for 16 Westfalian universities[9] and the European subsidies for equalizing unemployment rates among 271 German regions.[10] In some contexts, the value of the loss function itself is a random quantity because it depends on the outcome of a random variable X. Both frequentist and Bayesian statistical theory involve making a decision based on the expected value of the loss function; however, this quantity is defined differently under the two paradigms. We first define the expected loss in the frequentist context. It is obtained by taking the expected value with respect to the probability distribution, Pθ, of the observed data, X. This is also referred to as the risk function[11][12][13][14] of the decision rule δ and the parameter θ. Here the decision rule depends on the outcome of X. The risk function is given by R(θ, δ) = E_θ[L(θ, δ(X))] = ∫_X L(θ, δ(x)) dP_θ(x). Here, θ is a fixed but possibly unknown state of nature, X is a vector of observations stochastically drawn from a population, E_θ is the expectation over all population values of X, dP_θ is a probability measure over the event space of X (parametrized by θ), and the integral is evaluated over the entire support of X. In a Bayesian approach, the expectation is calculated using the prior distribution π* of the parameter θ: ρ(π*, δ) = ∫_Θ ∫_X L(θ, δ(x)) dP_θ(x) dπ*(θ) = ∫_X ∫_Θ L(θ, δ(x)) dπ*(θ | x) m(x) dx, where m(x) is known as the predictive likelihood wherein θ has been "integrated out", π*(θ | x) is the posterior distribution, and the order of integration has been changed. One then should choose the action a* which minimises this expected loss, which is referred to as Bayes Risk. In the latter equation, the integrand inside dx is known as the Posterior Risk, and minimising it with respect to decision a also minimizes the overall Bayes Risk. This optimal decision, a*, is known as the Bayes (decision) Rule; it minimises the average loss over all possible states of nature θ, over all possible (probability-weighted) data outcomes. One advantage of the Bayesian approach is that one need only choose the optimal action under the actual observed data to obtain a uniformly optimal one, whereas choosing the actual frequentist optimal decision rule as a function of all possible observations is a much more difficult problem. Of equal importance though, the Bayes Rule reflects consideration of loss outcomes under different states of nature, θ. In economics, decision-making under uncertainty is often modelled using the von Neumann–Morgenstern utility function of the uncertain variable of interest, such as end-of-period wealth.
Since the value of this variable is uncertain, so is the value of the utility function; it is the expected value of utility that is maximized. Adecision rulemakes a choice using an optimality criterion. Some commonly used criteria are: Sound statistical practice requires selecting an estimator consistent with the actual acceptable variation experienced in the context of a particular applied problem. Thus, in the applied use of loss functions, selecting which statistical method to use to model an applied problem depends on knowing the losses that will be experienced from being wrong under the problem's particular circumstances.[15] A common example involves estimating "location". Under typical statistical assumptions, themeanor average is the statistic for estimating location that minimizes the expected loss experienced under thesquared-errorloss function, while themedianis the estimator that minimizes expected loss experienced under the absolute-difference loss function. Still different estimators would be optimal under other, less common circumstances. In economics, when an agent isrisk neutral, the objective function is simply expressed as the expected value of a monetary quantity, such as profit, income, or end-of-period wealth. Forrisk-averseorrisk-lovingagents, loss is measured as the negative of autility function, and the objective function to be optimized is the expected value of utility. Other measures of cost are possible, for examplemortalityormorbidityin the field ofpublic healthorsafety engineering. For mostoptimization algorithms, it is desirable to have a loss function that is globallycontinuousanddifferentiable. Two very commonly used loss functions are thesquared loss,L(a)=a2{\displaystyle L(a)=a^{2}}, and theabsolute loss,L(a)=|a|{\displaystyle L(a)=|a|}. However the absolute loss has the disadvantage that it is not differentiable ata=0{\displaystyle a=0}. The squared loss has the disadvantage that it has the tendency to be dominated byoutliers—when summing over a set ofa{\displaystyle a}'s (as in∑i=1nL(ai){\textstyle \sum _{i=1}^{n}L(a_{i})}), the final sum tends to be the result of a few particularly largea-values, rather than an expression of the averagea-value. The choice of a loss function is not arbitrary. It is very restrictive and sometimes the loss function may be characterized by its desirable properties.[16]Among the choice principles are, for example, the requirement of completeness of the class of symmetric statistics in the case ofi.i.d.observations, the principle of complete information, and some others. W. Edwards DemingandNassim Nicholas Talebargue that empirical reality, not nice mathematical properties, should be the sole basis for selecting loss functions, and real losses often are not mathematically nice and are not differentiable, continuous, symmetric, etc. For example, a person who arrives before a plane gate closure can still make the plane, but a person who arrives after can not, a discontinuity and asymmetry which makes arriving slightly late much more costly than arriving slightly early. In drug dosing, the cost of too little drug may be lack of efficacy, while the cost of too much may be tolerable toxicity, another example of asymmetry. Traffic, pipes, beams, ecologies, climates, etc. may tolerate increased load or stress with little noticeable change up to a point, then become backed up or break catastrophically. 
These situations, Deming and Taleb argue, are common in real-life problems, perhaps more common than classical smooth, continuous, symmetric, differentiable cases.[17]
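Returning to the earlier claim that the mean minimizes expected squared-error loss while the median minimizes expected absolute-difference loss, the following sketch (illustrative code with an arbitrary made-up sample) checks this numerically by scanning a grid of candidate location estimates.

```python
import statistics

data = [1.0, 2.0, 2.5, 3.0, 100.0]          # an arbitrary sample with one large outlier

def avg_squared_loss(a):
    return sum((x - a) ** 2 for x in data) / len(data)

def avg_absolute_loss(a):
    return sum(abs(x - a) for x in data) / len(data)

candidates = [i / 100 for i in range(0, 10001)]   # grid of location estimates in [0, 100]
best_sq = min(candidates, key=avg_squared_loss)
best_abs = min(candidates, key=avg_absolute_loss)

print(best_sq, statistics.mean(data))      # both 21.7: the mean minimizes average squared-error loss
print(best_abs, statistics.median(data))   # both 2.5: the median minimizes average absolute loss
```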
https://en.wikipedia.org/wiki/Loss_function#L1_loss
Anexistential clauseis aclausethat refers to the existence or presence of something, such as "There is a God" and "There are boys in the yard". The use of such clauses can be considered analogous toexistential quantificationin predicate logic, which is often expressed with the phrase "There exist(s)...". Different languages have different ways of forming and using existential clauses. For details on theEnglishforms, seeEnglish grammar:Thereas pronoun. Many languages form existential clauses without any particular marker by simply using forms of the normalcopulaverb (the equivalent of Englishbe), thesubjectbeing the noun (phrase) referring to the thing whose existence is asserted. For example, theFinnishsentencePihalla on poikia, meaning "There are boys in the yard", is literally "On the yard is boys". Some languages have a different verb for that purpose:SwedishfinnashasDet finns pojkar på gården, literally "It is found boys on the yard". On the other hand, some languages do not require a copula at all, and sentences analogous to "In the yard boys" are used. Some languages use the verbhave; for exampleSerbo-CroatianU dvorištu ima dječakais literally "In the yard has boys".[1] Some languages form the negative of existential clauses irregularly; for example, inRussian,естьyest("there is/are") is used in affirmative existential clauses (in the present tense), but the negative equivalent isнетnyet("there is/are not"), used with the logical subject in thegenitive case. In English, existential clauses usually use thedummy subjectconstruction (also known as expletive) withthere(infinitive: there be), as in "There are boys in the yard", butthereis sometimes omitted when the sentence begins with anotheradverbial(usually designating a place), as in "In my room (there) is a large box." Other languages with constructions similar to the English dummy subject includeFrench(seeil y a) andGerman, which useses ist,es sindores gibt, literally "it is", "it are", "it gives". The principal meaning of existential clauses is to refer to the existence of something or the presence of something in a particular place or time. For example, "There is a God" asserts the existence of a God, but "There is a pen on the desk" asserts the presence or existence of a pen in a particular place. Existential clauses can be modified like other clauses in terms oftense,negation,interrogative inversion,modality,finiteness, etc. For example, one can say "There was a God", "There is not a God" ("There is no God"), "Is there a God?", "There might be a God", "He was anxious for there to be a God" etc. An existential sentence is one of four structures associated within thePingelapese languageofMicronesia. The form heavily uses a post-verbal subject order and explains what exists or does not exist. Only a few Pingelapese verbs are used existential sentence structure:minae-"to exist",soh-"not to exist",dir-"to exist in large numbers", anddaeri-"to be finished". All four verbs have a post-verbal subject in common and usually introduce new characters to a story. If a character is already known, the verb would be used in the preverbal position.[2] In some languages, linguisticpossession(in a broad sense) is indicated by existential clauses, rather than by a verb likehave. For example, inRussian, "I have a friend" can be expressed by the sentence у меня есть другu menya yest' drug, literally "at me there is a friend". Russian has a verb иметьimet'meaning "have", but it is less commonly used than the former method. 
Other examples include Irish Tá peann agam "(There) is (a) pen at me" (for "I have a pen"), Hungarian Van egy halam "(There) is a fish-my" (for "I have a fish"), and Turkish İki defterim var "two notebook-my (there) is" (for "I have two notebooks"). In Maltese, a change over time has been noted: "in the possessive construction, subject properties have been transferred diachronically from the possessed noun phrase to the possessor, while the possessor has all the subject properties except the form of the verb agreement that it triggers."[3]
https://en.wikipedia.org/wiki/Existential_clause
Unified Extensible Firmware Interface(UEFI,/ˈjuːɪfaɪ/or as an acronym)[c]is aspecificationfor the firmwarearchitectureof acomputing platform. When a computeris powered on, the UEFI-implementation is typically the first that runs, before starting theoperating system. Examples includeAMI Aptio,Phoenix SecureCore,TianoCore EDK II,InsydeH2O. UEFI replaces theBIOSthat was present in theboot ROMof allpersonal computersthat areIBM PC compatible,[5][6]although it can providebackwards compatibilitywith the BIOS usingCSM booting. Unlike its predecessor, BIOS, which is ade factostandard originally created byIBMas proprietary software, UEFI is an open standard maintained by an industryconsortium. Like BIOS, most UEFI implementations are proprietary. Inteldeveloped the originalExtensible Firmware Interface(EFI) specification. The last Intel version of EFI was 1.10 released in 2005. Subsequent versions have been developed as UEFI by theUEFI Forum. UEFI is independent of platform and programming language, butCis used for the reference implementation TianoCore EDKII. The original motivation for EFI came during early development of the first Intel–HPItaniumsystems in the mid-1990s.BIOSlimitations (such as 16-bitreal mode, 1 MB addressable memory space,[7]assembly languageprogramming, andPC AThardware) had become too restrictive for the larger server platforms Itanium was targeting.[8]The effort to address these concerns began in 1998 and was initially calledIntel Boot Initiative.[9]It was later renamed toExtensible Firmware Interface(EFI).[10][11] The firstopen sourceUEFI implementation, Tiano, was released by Intel in 2004. Tiano has since then been superseded by EDK[12]and EDK II[13]and is now maintained by the TianoCore community.[14] In July 2005, Intel ceased its development of the EFI specification at version 1.10, and contributed it to theUnified EFI Forum, which has developed the specification as theUnified Extensible Firmware Interface(UEFI). The original EFI specification remains owned by Intel, which exclusively provides licenses for EFI-based products, but the UEFI specification is owned by the UEFI Forum.[8][15] Version 2.0 of the UEFI specification was released on 31 January 2006. It addedcryptographyand security. Version 2.1 of the UEFI specification was released on 7 January 2007. It added network authentication and theuser interfacearchitecture ('Human Interface Infrastructure' in UEFI). In October 2018, Arm announcedArm ServerReady, a compliance certification program for landing the generic off-the-shelf operating systems andhypervisorson Arm-based servers. The program requires the system firmware to comply with Server Base Boot Requirements (SBBR). SBBR requires UEFI,ACPIandSMBIOScompliance. In October 2020, Arm announced the extension of the program to theedgeandIoTmarket. The new program name isArm SystemReady. Arm SystemReady defined the Base Boot Requirements (BBR) specification that currently provides three recipes, two of which are related to UEFI: 1) SBBR: which requires UEFI, ACPI and SMBIOS compliance suitable for enterprise level operating environments such as Windows, Red Hat Enterprise Linux, and VMware ESXi; and 2) EBBR: which requires compliance to a set of UEFI interfaces as defined in the Embedded Base Boot Requirements (EBBR) suitable for embedded environments such as Yocto. Many Linux and BSD distros can support both recipes. In December 2018,Microsoftannounced Project Mu, a fork of TianoCore EDK II used inMicrosoft SurfaceandHyper-Vproducts. 
The project promotes the idea of firmware as a service.[16] The latest UEFI specification, version 2.11, was published in December 2024.[17] The interface defined by the EFI specification includes data tables that contain platform information, and boot and runtime services that are available to the OS loader and OS. UEFI firmware provides several technical advantages over a BIOS:[18] With UEFI, it is possible to store product keys for operating systems such as Windows on the UEFI firmware of the device.[21][22][23] UEFI is required for Secure Boot on devices shipping with Windows 8[24][25] and above. It is also possible for operating systems to access UEFI configuration data.[26] As of version 2.5, processor bindings exist for Itanium, x86, x86-64, ARM (AArch32) and ARM64 (AArch64).[27] Only little-endian processors can be supported.[28] Unofficial UEFI support is under development for POWERPC64 by implementing TianoCore on top of OPAL,[29] the OpenPOWER abstraction layer, running in little-endian mode.[30] Similar projects exist for MIPS[31] and RISC-V.[32] As of UEFI 2.7, RISC-V processor bindings have been officially established for 32-, 64- and 128-bit modes.[33] Standard PC BIOS is limited to a 16-bit processor mode and 1 MB of addressable memory space, resulting from the design based on the IBM 5150 that used a 16-bit Intel 8088 processor.[8][34] In comparison, the processor mode in a UEFI environment can be either 32-bit (IA-32, AArch32) or 64-bit (x86-64, Itanium, and AArch64).[8][35] 64-bit UEFI firmware implementations support long mode, which allows applications in the preboot environment to use 64-bit addressing to get direct access to all of the machine's memory.[36] UEFI requires the firmware and operating system loader (or kernel) to be size-matched; that is, a 64-bit UEFI firmware implementation can load only a 64-bit operating system (OS) boot loader or kernel (unless the CSM-based legacy boot is used), and the same applies to 32-bit. After the system transitions from boot services to runtime services, the operating system kernel takes over. At this point, the kernel can change processor modes if it desires, but this bars usage of the runtime services (unless the kernel switches back again).[37]: sections 2.3.2 and 2.3.4. As of version 3.15, the Linux kernel supports booting 64-bit kernels on 32-bit UEFI firmware implementations running on x86-64 CPUs, with UEFI handover support from a UEFI boot loader as the requirement.[38] The UEFI handover protocol deduplicates the UEFI initialization code between the kernel and UEFI boot loaders, leaving the initialization to be performed only by the Linux kernel's UEFI boot stub.[39][40] In addition to the standard PC disk partition scheme that uses a master boot record (MBR), UEFI also works with the GUID Partition Table (GPT) partitioning scheme, which is free from many of the limitations of MBR. In particular, the MBR limits on the number and size of disk partitions (up to four primary partitions per disk, and up to 2 TB (2 × 2⁴⁰ bytes) per disk) are relaxed.[41] More specifically, GPT allows for a maximum disk and partition size of 8 ZiB (8 × 2⁷⁰ bytes).[42][43] Support for GPT in Linux is enabled by turning on the option CONFIG_EFI_PARTITION (EFI GUID Partition Support) during kernel configuration.[44] This option allows Linux to recognize and use GPT disks after the system firmware passes control over the system to Linux. For reverse compatibility, Linux can use GPT disks in BIOS-based systems for both data storage and booting, as both GRUB 2 and Linux are GPT-aware.
Such a setup is usually referred to asBIOS-GPT.[45][unreliable source?]As GPT incorporates the protective MBR, a BIOS-based computer can boot from a GPT disk using a GPT-aware boot loader stored in the protective MBR'sbootstrap code area.[43]In the case of GRUB, such a configuration requires aBIOS boot partitionfor GRUB to embed its second-stage code due to absence of the post-MBR gap in GPT partitioned disks (which is taken over by the GPT'sPrimary HeaderandPrimary Partition Table). Commonly 1MBin size, this partition'sGlobally Unique Identifier(GUID) in GPT scheme is21686148-6449-6E6F-744E-656564454649and is used by GRUB only in BIOS-GPT setups. From GRUB's perspective, no such partition type exists in case of MBR partitioning. This partition is not required if the system is UEFI-based because no embedding of the second-stage code is needed in that case.[19][43][45] UEFI systems can access GPT disks and boot directly from them, which allows Linux to use UEFI boot methods. Booting Linux from GPT disks on UEFI systems involves creation of anEFI system partition(ESP), which contains UEFI applications such as bootloaders, operating system kernels, and utility software.[46][47][48][unreliable source?]Such a setup is usually referred to asUEFI-GPT, while ESP is recommended to be at least 512 MB in size and formatted with a FAT32 filesystem for maximum compatibility.[43][45][49][unreliable source?] Forbackward compatibility, some UEFI implementations also support booting from MBR-partitioned disks through the Compatibility Support Module (CSM) that provides legacy BIOS compatibility.[50]In that case, booting Linux on UEFI systems is the same as on legacy BIOS-based systems. Some of the EFI's practices and data formats mirror those ofMicrosoft Windows.[51][52] The 64-bit versions ofWindows VistaSP1 and later and 64-bit versions ofWindows 8,8.1,10, and11can boot from a GPT disk that is larger than 2TB. EFI defines two types of services:boot servicesandruntime services. Boot services are available only while the firmware owns the platform (i.e., before theExitBootServices()call), and they include text and graphical consoles on various devices, and bus, block and file services. Runtime services are still accessible while the operating system is running; they include services such as date, time andNVRAMaccess. Beyond loading an OS, UEFI can runUEFI applications, which reside as files on theEFI system partition. They can be executed from the UEFI Shell, by the firmware'sboot manager, or by other UEFI applications.UEFI applicationscan be developed and installed independently of theoriginal equipment manufacturers(OEMs). A type of UEFI application is an OS boot loader such asGRUB,rEFInd,Gummiboot, andWindows Boot Manager, which loads some OS files into memory and executes them. Also, an OS boot loader can provide a user interface to allow the selection of another UEFI application to run. Utilities like the UEFI Shell are also UEFI applications. EFI defines protocols as a set of software interfaces used for communication between two binary modules. All EFI drivers must provide services to others via protocols. The EFI Protocols are similar to theBIOS interrupt calls. In addition to standardinstruction set architecture-specific device drivers, EFI provides for a ISA-independentdevice driverstored innon-volatile memoryasEFI byte codeorEBC. System firmware has an interpreter for EBC images. 
In that sense, EBC is analogous toOpen Firmware, the ISA-independent firmware used inPowerPC-basedApple MacintoshandSun MicrosystemsSPARCcomputers, among others. Some architecture-specific (non-EFI Byte Code) EFI drivers for some device types can have interfaces for use by the OS. This allows the OS to rely on EFI for drivers to perform basic graphics and network functions before, and if, operating-system-specific drivers are loaded. In other cases, the EFI driver can be filesystem drivers that allow for booting from other types of disk volumes. Examples includeefifsfor 37 file systems (based onGRUB2code),[56]used byRufusfor chain-loading NTFS ESPs.[57] The EFI 1.0 specification defined a UGA (Universal Graphic Adapter) protocol as a way to support graphics features. UEFI did not include UGA and replaced it withGOP (Graphics Output Protocol).[58] UEFI 2.1 defined a "Human Interface Infrastructure" (HII) to manage user input, localized strings, fonts, and forms (in theHTMLsense). These enableoriginal equipment manufacturers(OEMs) orindependent BIOS vendors(IBVs) to design graphical interfaces for pre-boot configuration. UEFI usesUTF-16to encode strings by default. Most early UEFI firmware implementations were console-based. Today many UEFI firmware implementations are GUI-based. An EFI system partition, often abbreviated to ESP, is adata storage devicepartition that is used in computers adhering to the UEFI specification. Accessed by the UEFI firmware when a computer is powered up, it stores UEFI applications and the files these applications need to run, including operating systemboot loaders. Supportedpartition tableschemes includeMBRandGPT, as well asEl Toritovolumes on optical discs.[37]: section 2.6.2For use on ESPs, UEFI defines a specific version of theFAT file system, which is maintained as part of the UEFI specification and independently from the original FAT specification, encompassing theFAT32,FAT16andFAT12file systems.[37]: section 12.3[59][60][61]The ESP also provides space for a boot sector as part of the backward BIOS compatibility.[50] Unlike the legacy PC BIOS, UEFI does not rely onboot sectors, defining instead a boot manager as part of the UEFI specification. When a computer is powered on, the boot manager checks the boot configuration and, based on its settings, then executes the specified OSboot loaderoroperating system kernel(usually boot loader[62]). The boot configuration is defined by variables stored inNVRAM, including variables that indicate the file system paths to OS loaders or OS kernels. OS boot loaders can be automatically detected by UEFI, which enables easybootingfrom removable devices such asUSB flash drives. This automated detection relies on standardized file paths to the OS boot loader, with the path varying depending on thecomputer architecture. The format of the file path is defined as<EFI_SYSTEM_PARTITION>\EFI\BOOT\BOOT<MACHINE_TYPE_SHORT_NAME>.EFI; for example, the file path to the OS loader on anx86-64system is\efi\boot\bootx64.efi,[37]and\efi\boot\bootaa64.efion ARM64 architecture. Booting UEFI systems from GPT-partitioned disks is commonly calledUEFI-GPT booting. Despite the fact that the UEFI specification requires MBR partition tables to be fully supported,[37]some UEFI firmware implementations immediately switch to the BIOS-based CSM booting depending on the type of boot disk's partition table, effectively preventing UEFI booting to be performed fromEFI System Partitionon MBR-partitioned disks.[50]Such a boot scheme is commonly calledUEFI-MBR. 
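To make the idea of a UEFI application concrete, here is a minimal sketch in EDK II style (the headers, the Print() helper, and the gBS/gST table pointers are EDK II conventions; the accompanying .inf build description is not shown, so treat this as illustrative rather than a complete project). Built as, say, BOOTX64.EFI and placed under \EFI\BOOT\ on the EFI system partition, the firmware's boot manager could start it like any other loader.

#include <Uefi.h>
#include <Library/UefiLib.h>
#include <Library/UefiBootServicesTableLib.h>

/* Minimal UEFI application: print a message on the console and wait for a
 * key press before returning to the boot manager or shell. */
EFI_STATUS
EFIAPI
UefiMain (
  IN EFI_HANDLE        ImageHandle,
  IN EFI_SYSTEM_TABLE  *SystemTable
  )
{
  UINTN Index;

  Print (L"Hello from a UEFI application\r\n");   /* UefiLib helper around ConOut */

  /* WaitForEvent() is a boot service, so it is only usable before ExitBootServices(). */
  gBS->WaitForEvent (1, &gST->ConIn->WaitForKey, &Index);

  return EFI_SUCCESS;
}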
It is also common for a boot manager to have a textual user interface so the user can select the desired OS (or setup utility) from a list of available boot options. On PC platforms, BIOS firmware that supports UEFI boot can be called a UEFI BIOS, although it may not support the CSM boot method, as modern x86 PCs have deprecated use of the CSM. To ensure backward compatibility, UEFI firmware implementations on PC-class machines could support booting in legacy BIOS mode from MBR-partitioned disks through the Compatibility Support Module (CSM), which provides legacy BIOS compatibility. In this scenario, booting is performed in the same way as on legacy BIOS-based systems, by ignoring the partition table and relying on the content of a boot sector.[50]

BIOS-style booting from MBR-partitioned disks is commonly called BIOS-MBR, regardless of whether it is performed on UEFI or legacy BIOS-based systems. Furthermore, booting legacy BIOS-based systems from GPT disks is also possible; such a boot scheme is commonly called BIOS-GPT.

The Compatibility Support Module allows legacy operating systems and some legacy option ROMs that do not support UEFI to still be used.[63] It also provides required legacy System Management Mode (SMM) functionality, called CompatibilitySmm, in addition to the features provided by the UEFI SMM. An example of such legacy SMM functionality is providing USB legacy support for keyboard and mouse by emulating their classic PS/2 counterparts.[63] In November 2017, Intel announced that it planned to phase out CSM support for client platforms by 2020.[64] In July 2022, Kaspersky Lab published information regarding a rootkit designed to chain-load malicious code on machines using Intel's H81 chipset and the Compatibility Support Module of affected motherboards.[65] In August 2023, Intel announced that it planned to phase out CSM support for server platforms by 2024.[66] Most current computers based on Intel platforms no longer support the CSM.

The UEFI specification includes support for booting over a network via the Preboot eXecution Environment (PXE). PXE booting network protocols include Internet Protocol (IPv4 and IPv6), User Datagram Protocol (UDP), Dynamic Host Configuration Protocol (DHCP), Trivial File Transfer Protocol (TFTP) and iSCSI.[37][67] OS images can be stored remotely on storage area networks (SANs), with Internet Small Computer System Interface (iSCSI) and Fibre Channel over Ethernet (FCoE) as supported protocols for accessing the SANs.[37][68][69] Version 2.5 of the UEFI specification adds support for accessing boot images over HTTP.[70]

The UEFI specification defines a protocol known as Secure Boot, which can secure the boot process by preventing the loading of UEFI drivers or OS boot loaders that are not signed with an acceptable digital signature. The details of how these drivers are signed are specified in the UEFI specification.[71] When Secure Boot is enabled, it is initially placed in "setup" mode, which allows a public key known as the "platform key" (PK) to be written to the firmware. Once the key is written, Secure Boot enters "user" mode, where only UEFI drivers and OS boot loaders signed with the platform key can be loaded by the firmware. These two modes are exposed by the firmware as UEFI variables, as illustrated in the sketch below.
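As a hedged illustration of reading those modes from a running Linux system (assuming the kernel's efivarfs is mounted at the usual /sys/firmware/efi/efivars path), the SecureBoot and SetupMode variables under the EFI global-variable GUID can be read directly; each efivarfs file begins with a 4-byte attributes field followed by the variable data, which for these two variables is a single 0/1 byte:

#include <stdio.h>

/* Read a one-byte UEFI variable through Linux efivarfs: the file layout is
 * 4 bytes of attributes followed by the variable data. Returns -1 on error. */
static int read_efi_flag(const char *path)
{
    unsigned char buf[5];
    FILE *f = fopen(path, "rb");
    if (!f) return -1;
    size_t n = fread(buf, 1, sizeof buf, f);
    fclose(f);
    return (n == 5) ? buf[4] : -1;
}

int main(void)
{
    /* 8be4df61-93ca-11d2-aa0d-00e098032b8c is the EFI global variable GUID. */
    const char *base = "/sys/firmware/efi/efivars";
    char path[256];

    snprintf(path, sizeof path, "%s/SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c", base);
    printf("SecureBoot = %d\n", read_efi_flag(path));   /* 1 = enforcing */

    snprintf(path, sizeof path, "%s/SetupMode-8be4df61-93ca-11d2-aa0d-00e098032b8c", base);
    printf("SetupMode  = %d\n", read_efi_flag(path));   /* 1 = setup mode, no PK enrolled */
    return 0;
}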
Additional "key exchange keys" (KEK) can be added to a database stored in memory to allow other certificates to be used, but they must still have a connection to the private portion of the platform key.[72]Secure Boot can also be placed in "Custom" mode, where additional public keys can be added to the system that do not match the private key.[73] Secure Boot is supported byWindows 8and8.1,Windows Server 2012and 2012 R2,Windows 10,Windows Server 2016,2019, and2022, andWindows 11, VMware vSphere 6.5[74]and a number ofLinux distributionsincludingFedora(since version 18),openSUSE(since version 12.3), RHEL (since version 7), CentOS (since version 7[75]), Debian (since version 10),[76]Ubuntu(since version 12.04.2),Linux Mint(since version 21.3).,[77][78]andAlmaLinux OS(since version 8.4[79]). As of January 2025[update],FreeBSDsupport is in a planning stage.[80] UEFI provides ashell environment, which can be used to execute other UEFI applications, including UEFIboot loaders.[48]Apart from that, commands available in the UEFI shell can be used for obtaining various other information about the system or the firmware, including getting the memory map (memmap), modifying boot manager variables (bcfg), running partitioning programs (diskpart), loading UEFI drivers, and editing text files (edit).[81][unreliable source?][82][83] Source code for a UEFI shell can be downloaded from theIntel'sTianoCore[broken anchor]UDK/EDK2 project.[84]A pre-built ShellBinPkg is also available.[85]Shell v2 works best in UEFI 2.3+ systems and is recommended over Shell v1 in those systems. Shell v1 should work in all UEFI systems.[81][86][87] Methods used for launching UEFI shell depend on the manufacturer and model of the systemmotherboard. Some of them already provide a direct option in firmware setup for launching, e.g. compiled x86-64 version of the shell needs to be made available as<EFI_SYSTEM_PARTITION>/SHELLX64.EFI. Some other systems have an already embedded UEFI shell which can be launched by appropriate key press combinations.[88][unreliable source?][89]For other systems, the solution is either creating an appropriate USB flash drive or adding manually (bcfg) a boot option associated with the compiled version of shell.[83][88][90][unreliable source?][91][unreliable source?] The following is a list ofcommandssupported by the EFI shell.[82] Extensions to UEFI can be loaded from virtually anynon-volatilestorage device attached to the computer. For example, anoriginal equipment manufacturer(OEM) can distribute systems with anEFI system partitionon the hard drive, which would add additional functions to the standard UEFI firmware stored on the motherboard'sROM. UEFI Capsule defines a Firmware-to-OS firmware update interface, marketed as modern and secure.[92]Windows 8,Windows 8.1,Windows 10,[93]andFwupdfor Linux each support the UEFI Capsule. LikeBIOS, UEFI initializes and tests system hardware components (e.g. memory training, PCIe link training, USB link training on typical x86 systems), and then loads theboot loaderfrom amass storage deviceor through anetwork connection. Inx86systems, the UEFI firmware is usually stored in theNOR flashchip of the motherboard.[94][95]In some ARM-based Android and Windows Phone devices, the UEFI boot loader is stored in theeMMCoreUFSflash memory. UEFI machines can have one of the following classes, which were used to help ease the transition to UEFI:[96] Starting from the 10th Gen Intel Core, Intel no longer provides LegacyVideo BIOSfor the iGPU (Intel Graphics Technology). 
Legacy boot with those CPUs requires a legacy Video BIOS, which can still be provided by a video card.

The Security (SEC) phase is the first stage of UEFI boot, although platform-specific binary code may precede it (e.g., Intel ME, AMD PSP, CPU microcode). It consists of minimal code written in assembly language for the specific architecture. It initializes temporary memory (often CPU cache-as-RAM (CAR), or SoC on-chip SRAM) and serves as the system's software root of trust, with the option of verifying PEI before hand-off.

The Pre-EFI Initialization (PEI) phase, the second stage of UEFI boot, consists of a dependency-aware dispatcher that loads and runs PEI modules (PEIMs) to handle early hardware initialization tasks such as main memory initialization (initializing the memory controller and DRAM) and firmware recovery operations. It is also responsible for discovering the current boot mode and handling many ACPI S3 operations; in the case of ACPI S3 resume, it restores many hardware registers to their pre-sleep state. PEI also uses CAR. Initialization at this stage involves creating data structures in memory and establishing default values within these structures.[98] This stage has several components, including the PEI Foundation, PEIMs, and PPIs. Because few resources are available at this point, the stage must remain minimal and do just enough preparation for the next stage (DXE), which has far richer resources available.

After the SEC phase hands off, the PEI Foundation takes responsibility for the platform: it invokes PEIMs and manages their dependencies. PEIMs are minimal PEI drivers responsible for initializing hardware such as permanent memory, the CPU, the chipset, and the motherboard. Each PEIM has a single responsibility and is focused on a single initialization task; these drivers come from different vendors. A PPI (PEIM-to-PEIM Interface) is a data structure composed of GUID/pointer pairs; PPIs are discovered by PEIMs through the PEI services. After minimal initialization of the system for DXE, the PEI Foundation locates the DXE Foundation and passes control to it, dispatching it through a special PPI called the IPL (Initial Program Load) PPI.

The Driver Execution Environment (DXE) stage consists of C modules and a dependency-aware dispatcher. With main memory now available, the CPU, chipset, mainboard and other I/O devices are initialized in DXE and BDS. Initialization at this stage involves assigning EFI device paths to the hardware connected to the motherboard, and transferring configuration data to the hardware.[99]

Boot Device Selection (BDS) is part of DXE.[100][101] In this stage, boot devices are initialized, and UEFI drivers or Option ROMs of PCI devices are executed according to architecturally defined NVRAM variables. This is the stage between boot device selection and hand-off to the OS; at this point one may enter a UEFI shell or execute a UEFI application such as the OS boot loader.

UEFI hands off to the operating system (OS) after ExitBootServices() is executed. A UEFI-compatible OS is responsible for exiting boot services, which triggers the firmware to unload all code and data that are no longer needed, leaving only runtime services code and data, e.g. SMM and ACPI.[102] A typical modern OS will prefer to use its own programs (such as kernel drivers) to control hardware devices. When a legacy OS is used, the CSM handles this call, ensuring the system behaves as a legacy BIOS would.

Intel's implementation of EFI is the Intel Platform Innovation Framework, codenamed Tiano.
Tiano runs on Intel'sXScale,Itanium,IA-32andx86-64processors, and is proprietary software, although a portion of the code has been released under theBSD licenseorEclipse Public License(EPL) asTianoCore EDK II. TianoCore can be used as a payload forcoreboot.[103] Phoenix Technologies' implementation of UEFI is branded as SecureCore Technology (SCT).[104]American Megatrendsoffers its own UEFI firmware implementation known as Aptio,[105]whileInsyde Softwareoffers InsydeH2O,[106]and Byosoft offers ByoCore. In December 2018,Microsoftreleased an open source version of its TianoCore EDK2-based UEFI implementation from theSurfaceline,Project Mu.[107] An implementation of the UEFI API was introduced into the Universal Boot Loader (Das U-Boot) in 2017.[108]On theARMv8architectureLinuxdistributions use the U-Boot UEFI implementation in conjunction withGNU GRUBfor booting (e.g.SUSE Linux[109]), the same holds true for OpenBSD.[110]For booting from iSCSIiPXEcan be used as a UEFI application loaded by U-Boot.[111] Intel's firstItaniumworkstations and servers, released in 2000, implemented EFI 1.02. Hewlett-Packard's firstItanium 2systems, released in 2002, implemented EFI 1.10; they were able to bootWindows,Linux,FreeBSDandHP-UX;OpenVMSadded UEFI capability in June 2003. In January 2006,Apple Inc.shipped its firstIntel-based Macintosh computers. These systems used EFI instead ofOpen Firmware, which had been used on its previous PowerPC-based systems.[112]On 5 April 2006, Apple first releasedBoot Camp, which produces a Windows drivers disk and a non-destructive partitioning tool to allow the installation of Windows XP or Vista without requiring a reinstallation of Mac OS X (now macOS). A firmware update was also released that added BIOS compatibility to its EFI implementation. Subsequent Macintosh models shipped with the newer firmware.[113] During 2005, more than one million Intel systems shipped with Intel's implementation of UEFI.[114][failed verification]New mobile, desktop and server products, using Intel's implementation of UEFI, started shipping in 2006. For instance, boards that use the Intel 945 chipset series use Intel's UEFI firmware implementation. Since 2005, EFI has also been implemented on non-PC architectures, such asembedded systemsbased onXScalecores.[114] The EDK (EFI Developer Kit) includes an NT32 target, which allows EFI firmware and EFI applications to run within aWindowsapplication. But no direct hardware access is allowed by EDK NT32. This means only a subset of EFI application and drivers can be executed by the EDK NT32 target. In 2008, more x86-64 systems adopted UEFI. While many of these systems still allow booting only the BIOS-based OSes via the Compatibility Support Module (CSM) (thus not appearing to the user to be UEFI-based), other systems started to allow booting UEFI-based OSes. For example, IBM x3450 server,MSImotherboards with ClickBIOS, HP EliteBook Notebook PCs. In 2009, IBM shippedSystem xmachines (x3550 M2, x3650 M2, iDataPlex dx360 M2) andBladeCenterHS22 with UEFI capability. Dell shipped PowerEdge T610, R610, R710, M610 and M710 servers with UEFI capability. 
More commercially available systems are mentioned in a UEFI whitepaper.[115] In 2011, major vendors (such as ASRock, Asus, Gigabyte, and MSI) launched several consumer-oriented motherboards using the Intel 6-series LGA 1155 chipset and AMD 9 Series AM3+ chipsets with UEFI.[116] With the release of Windows 8 in October 2012, Microsoft's certification requirements require that computers include firmware that implements the UEFI specification. Furthermore, if the computer supports the "Connected Standby" feature of Windows 8 (which allows devices to have power management comparable to smartphones, with an almost instantaneous return from standby mode), then the firmware is not permitted to contain a Compatibility Support Module (CSM). As such, systems that support Connected Standby are incapable of booting legacy BIOS operating systems.[117][118] In October 2017, Intel announced that it would remove legacy PC BIOS support from all its products by 2020, in favor of UEFI Class 3.[119] By 2019, computers based on Intel platforms no longer had legacy PC BIOS support.

An operating system that can be booted from a (U)EFI is called a (U)EFI-aware operating system, as defined by the (U)EFI specification. Here the term booted from a (U)EFI means directly booting the system using a (U)EFI operating system loader stored on any storage device. The default location for the operating system loader is <EFI_SYSTEM_PARTITION>\EFI\BOOT\BOOT<MACHINE_TYPE_SHORT_NAME>.EFI, where the short name of the machine type can be IA32, X64, IA64, ARM or AA64.[37] Some operating system vendors may have their own boot loaders, and may also change the default boot location.

The EDK2 Application Development Kit (EADK) makes it possible to use standard C library functions in UEFI applications. The EADK can be freely downloaded from Intel's TianoCore UDK/EDK2 SourceForge project. As an example, a port of the Python interpreter is made available as a UEFI application by using the EADK.[158] Development has moved to GitHub since UDK2015.[159] A minimalistic "hello, world" C program written using the EADK looks similar to its usual C counterpart; a sketch is shown below.

Numerous digital rights activists have protested UEFI. Ronald G. Minnich, a co-author of coreboot, and Cory Doctorow, a digital rights activist, have criticized UEFI as an attempt to remove the ability of the user to truly control the computer.[160][161] It does not solve the BIOS's long-standing problem of requiring two different drivers (one for the firmware and one for the operating system) for most hardware.[162] The open-source project TianoCore also provides UEFI implementations.[163] TianoCore lacks the specialized firmware drivers and modules that initialize chipset functions, but it is one of many payload options for coreboot; the development of coreboot requires cooperation from chipset manufacturers to provide the specifications needed to develop initialization drivers. In 2011, Microsoft announced that computers certified to run its Windows 8 operating system had to ship with Microsoft's public key enrolled and Secure Boot enabled, which implies that using UEFI is a requirement for these devices.[164][165] Following the announcement, the company was accused by critics and free software/open source advocates (including the Free Software Foundation) of trying to use the Secure Boot functionality of UEFI to hinder or outright prevent the installation of alternative operating systems such as Linux.
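The EADK "hello, world" program referred to above does not survive in this text; a hedged reconstruction, assuming the EADK's StdLib package is used so that the hosted entry point and <stdio.h> are available, would simply be:

#include <stdio.h>

/* With the EADK's standard C library support, a UEFI application can use the
 * familiar hosted entry point; the EADK build glue maps main() onto the UEFI
 * application entry point behind the scenes. */
int main(int argc, char **argv)
{
    printf("hello, world\n");
    return 0;
}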
Microsoft denied that the Secure Boot requirement was intended to serve as a form oflock-in, and clarified its requirements by stating that x86-based systems certified for Windows 8 must allow Secure Boot to enter custom mode or be disabled, but not on systems using theARM architecture.[73][166]Windows 10allowsOEMsto decide whether or not Secure Boot can be managed by users of their x86 systems.[167] Other developers raised concerns about the legal and practical issues of implementing support for Secure Boot on Linux systems in general. FormerRed HatdeveloperMatthew Garrettnoted that conditions in theGNU General Public License version 3may prevent the use of theGNU GRand Unified Bootloaderwithout a distribution's developer disclosing the private key (however, theFree Software Foundationhas since clarified its position, assuring that the responsibility to make keys available was held by the hardware manufacturer),[168][122]and that it would also be difficult for advanced users to build customkernelsthat could function with Secure Boot enabled without self-signing them.[166]Other developers suggested that signed builds of Linux with another key could be provided, but noted that it would be difficult to persuade OEMs to ship their computers with the required key alongside the Microsoft key.[6] Several major Linux distributions have developed different implementations for Secure Boot. Garrett himself developed a minimal bootloader known as a shim, which is a precompiled, signed bootloader that allows the user to individually trust keys provided by Linux distributions.[169]Ubuntu 12.10uses an older version of shim[which?]pre-configured for use withCanonical's own key that verifies only the bootloader and allows unsigned kernels to be loaded; developers believed that the practice of signing only the bootloader is more feasible, since a trusted kernel is effective at securing only theuser space, and not the pre-boot state for which Secure Boot is designed to add protection. That also allows users to build their own kernels and use customkernel modulesas well, without the need to reconfigure the system.[122][170][171]Canonical also maintains its own private key to sign installations of Ubuntu pre-loaded on certified OEM computers that run the operating system, and also plans to enforce a Secure Boot requirement as well—requiring both a Canonical key and a Microsoft key (for compatibility reasons) to be included in their firmware.Fedoraalso uses shim,[which?]but requires that both the kernel and its modules be signed as well.[170]shim has Machine Owner Key (MOK) that can be used to sign locally-compiled kernels and other software not signed by distribution maintainer.[172] It has been disputed whether the operating system kernel and its modules must be signed as well; while the UEFI specifications do not require it, Microsoft has asserted that their contractual requirements do, and that it reserves the right to revoke any certificates used to sign code that can be used to compromise the security of the system.[171]In Windows, if Secure Boot is enabled, all kernel drivers must be digitally signed; non-WHQL drivers may be refused to load. In February 2013, another Red Hat developer attempted to submit a patch to the Linux kernel that would allow it to parse Microsoft's authenticode signing using a masterX.509key embedded inPEfiles signed by Microsoft. 
However, the proposal was criticized by Linux creator Linus Torvalds, who attacked Red Hat for supporting Microsoft's control over the Secure Boot infrastructure.[173] On 26 March 2013, the Spanish free software development group Hispalinux filed a formal complaint with the European Commission, contending that Microsoft's Secure Boot requirements on OEM systems were "obstructive" and anti-competitive.[174] At the Black Hat conference in August 2013, a group of security researchers presented a series of exploits in specific vendor implementations of UEFI that could be used to defeat Secure Boot.[175]

In August 2016 it was reported that two security researchers had found the "golden key" security key Microsoft uses in signing operating systems.[176] Technically, no key was exposed; rather, an exploitable binary signed by the key was. This allows any software to run as though it were genuinely signed by Microsoft and exposes the possibility of rootkit and bootkit attacks. It also makes patching the fault impossible, since any patch can be replaced (downgraded) by the signed exploitable binary. Microsoft responded in a statement that the vulnerability only exists in the ARM architecture and Windows RT devices, and released two patches; however, the patches do not (and cannot) remove the vulnerability, which would require key replacements in end-user firmware to fix. On March 1, 2023, researchers from the cybersecurity firm ESET reported "the first in-the-wild UEFI bootkit bypassing UEFI Secure Boot", named BlackLotus, in a public analysis describing the theory behind its mechanics and how it exploits the patches that "do not (and cannot) remove the vulnerability".[177][178]

In August 2024, Windows 11 and Windows 10 security updates applied Secure Boot Advanced Targeting (SBAT) settings to devices' UEFI NVRAM, which caused some Linux distributions to fail to load. SBAT is a protocol supported in newer versions of Windows Boot Manager and shim that refuses to load buggy or vulnerable intermediate boot loaders (usually older versions of Windows Boot Manager and GRUB) during the boot process. The change was reverted the next month.[179]

Many Linux distributions support UEFI Secure Boot as of January 2025, such as RHEL (RHEL 7 and later), CentOS (CentOS 7 and later[180]), Ubuntu, Fedora, Debian (Debian 10 and later[181]), openSUSE, and SUSE Linux Enterprise.[182]

The increased prominence of UEFI firmware in devices has also led to a number of technical problems blamed on their respective implementations.[183] Following the release of Windows 8 in late 2012, it was discovered that certain Lenovo computer models with Secure Boot had firmware hardcoded to allow only executables named "Windows Boot Manager" or "Red Hat Enterprise Linux" to load, regardless of any other setting.[184] Other problems were encountered by several Toshiba laptop models with Secure Boot that were missing certain certificates required for its proper operation.[183] In January 2013, a bug in the UEFI implementation on some Samsung laptops was publicized, which caused them to be bricked after installing a Linux distribution in UEFI mode.
While potential conflicts with a kernel module designed to access system features on Samsung laptops were initially blamed (also prompting kernel maintainers to disable the module on UEFI systems as a safety measure), Matthew Garrett discovered that the bug was actually triggered by storing too many UEFI variables to memory, and that it could also be triggered under Windows under certain conditions. He concluded that the offending kernel module had caused kernel message dumps to be written to the firmware, thus triggering the bug.[54][185][186]
https://en.wikipedia.org/wiki/UEFI
Quantum neural networksarecomputational neural networkmodels which are based on the principles ofquantum mechanics. The first ideas on quantum neural computation were published independently in 1995 bySubhash Kakand Ron Chrisley,[1][2]engaging with the theory ofquantum mind, which posits that quantum effects play a role in cognitive function. However, typical research in quantum neural networks involves combining classicalartificial neural networkmodels (which are widely used in machine learning for the important task of pattern recognition) with the advantages ofquantum informationin order to develop more efficient algorithms.[3][4][5]One important motivation for these investigations is the difficulty to train classical neural networks, especially inbig data applications. The hope is that features ofquantum computingsuch asquantum parallelismor the effects ofinterferenceandentanglementcan be used as resources. Since the technological implementation of a quantum computer is still in a premature stage, such quantum neural network models are mostly theoretical proposals that await their full implementation in physical experiments. Most Quantum neural networks are developed asfeed-forwardnetworks. Similar to their classical counterparts, this structure intakes input from one layer of qubits, and passes that input onto another layer of qubits. This layer of qubits evaluates this information and passes on the output to the next layer. Eventually the path leads to the final layer of qubits.[6][7]The layers do not have to be of the same width, meaning they don't have to have the same number of qubits as the layer before or after it. This structure is trained on which path to take similar to classicalartificial neural networks. This is discussed in a lower section. Quantum neural networks refer to three different categories: Quantum computer with classical data, classical computer with quantum data, and quantum computer with quantum data.[6] Quantum neural network research is still in its infancy, and a conglomeration of proposals and ideas of varying scope and mathematical rigor have been put forward. Most of them are based on the idea of replacing classical binary orMcCulloch-Pitts neuronswith aqubit(which can be called a “quron”), resulting in neural units that can be in asuperpositionof the state ‘firing’ and ‘resting’. A lot of proposals attempt to find a quantum equivalent for theperceptronunit from which neural nets are constructed. A problem is that nonlinear activation functions do not immediately correspond to the mathematical structure of quantum theory, since a quantum evolution is described by linear operations and leads to probabilistic observation. Ideas to imitate the perceptron activation function with a quantum mechanical formalism reach from special measurements[8][9]to postulating non-linear quantum operators (a mathematical framework that is disputed).[10][11]A direct implementation of the activation function using thecircuit-based model of quantum computationhas recently been proposed by Schuld, Sinayskiy and Petruccione based on thequantum phase estimation algorithm.[12] At a larger scale, researchers have attempted to generalize neural networks to the quantum setting. One way of constructing a quantum neuron is to first generalise classical neurons and then generalising them further to make unitary gates. Interactions between neurons can be controlled quantumly, withunitarygates, or classically, viameasurementof the network states. 
This high-level theoretical technique can be applied broadly, by taking different types of networks and different implementations of quantum neurons, such asphotonicallyimplemented neurons[7][13]andquantum reservoir processor(quantum version ofreservoir computing).[14]Most learning algorithms follow the classical model of training an artificial neural network to learn the input-output function of a giventraining setand use classical feedback loops to update parameters of the quantum system until they converge to an optimal configuration. Learning as a parameter optimisation problem has also been approached by adiabatic models of quantum computing.[15] Quantum neural networks can be applied to algorithmic design: givenqubitswith tunable mutual interactions, one can attempt to learn interactions following the classicalbackpropagationrule from atraining setof desired input-output relations, taken to be the desired output algorithm's behavior.[16][17]The quantum network thus ‘learns’ an algorithm. The first quantum associative memory algorithm was introduced by Dan Ventura and Tony Martinez in 1999.[18]The authors do not attempt to translate the structure of artificial neural network models into quantum theory, but propose an algorithm for acircuit-based quantum computerthat simulatesassociative memory. The memory states (inHopfield neural networkssaved in the weights of the neural connections) are written into a superposition, and aGrover-like quantum search algorithmretrieves the memory state closest to a given input. As such, this is not a fully content-addressable memory, since only incomplete patterns can be retrieved. The first truly content-addressable quantum memory, which can retrieve patterns also from corrupted inputs, was proposed by Carlo A. Trugenberger.[19][20][21]Both memories can store an exponential (in terms of n qubits) number of patterns but can be used only once due to the no-cloning theorem and their destruction upon measurement. Trugenberger,[20]however, has shown that his probabilistic model of quantum associative memory can be efficiently implemented and re-used multiples times for any polynomial number of stored patterns, a large advantage with respect to classical associative memories. A substantial amount of interest has been given to a “quantum-inspired” model that uses ideas from quantum theory to implement a neural network based onfuzzy logic.[22] Quantum Neural Networks can be theoretically trained similarly to training classical/artificial neural networks. A key difference lies in communication between the layers of a neural networks. For classical neural networks, at the end of a given operation, the currentperceptroncopies its output to the next layer of perceptron(s) in the network. However, in a quantum neural network, where each perceptron is a qubit, this would violate theno-cloning theorem.[6][23]A proposed generalized solution to this is to replace the classicalfan-outmethod with an arbitraryunitarythat spreads out, but does not copy, the output of one qubit to the next layer of qubits. Using this fan-out Unitary (Uf{\displaystyle U_{f}}) with a dummy state qubit in a known state (Ex.|0⟩{\displaystyle |0\rangle }in thecomputational basis), also known as anAncilla bit, the information from the qubit can be transferred to the next layer of qubits.[7]This process adheres to the quantum operation requirement ofreversibility.[7][24] Using this quantum feed-forward network, deep neural networks can be executed and trained efficiently. 
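A minimal way to write the fan-out operation mentioned above (a sketch, assuming a CNOT-style interaction with an ancilla prepared in the |0⟩ state) is the following; note that it copies computational basis states but, consistent with the no-cloning theorem, turns a superposition into an entangled state rather than into two independent copies:

% Fan-out unitary acting on a data qubit and an ancilla in |0>
U_f \,\lvert x\rangle \lvert 0\rangle = \lvert x\rangle \lvert x\rangle , \qquad x \in \{0,1\},
\qquad \text{so that} \qquad
U_f \,(\alpha\lvert 0\rangle + \beta\lvert 1\rangle)\lvert 0\rangle
  = \alpha\lvert 00\rangle + \beta\lvert 11\rangle
  \neq (\alpha\lvert 0\rangle + \beta\lvert 1\rangle)\otimes(\alpha\lvert 0\rangle + \beta\lvert 1\rangle).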
A deep neural network is essentially a network with many hidden layers, as seen in the sample model neural network above. Since the quantum neural network being discussed uses fan-out unitary operators, and each operator acts only on its respective input, only two layers are used at any given time.[6] In other words, no unitary operator acts on the entire network at any given time, meaning the number of qubits required for a given step depends on the number of inputs in a given layer. Since quantum computers can run many iterations in a short period of time, the efficiency of a quantum neural network depends solely on the number of qubits in any given layer, and not on the depth of the network.[24]

To determine the effectiveness of a neural network, a cost function is used, which essentially measures the proximity of the network's output to the expected or desired output. In a classical neural network, the weights (w) and biases (b) at each step determine the outcome of the cost function C(w, b).[6] When training a classical neural network, the weights and biases are adjusted after each iteration; given equation 1 below, where y(x) is the desired output and a^out(x) is the actual output, the cost function is optimized when C(w, b) = 0. For a quantum neural network, the cost function is determined by measuring the fidelity of the outcome state (ρ^out) with the desired outcome state (φ^out), as seen in equation 2 below. In this case, the unitary operators are adjusted after each iteration, and the cost function is optimized when C = 1.[6]

Gradient descent is widely used and successful in classical algorithms. However, although the simplified structure is very similar to classical neural networks such as CNNs, QNNs perform much worse in training. Because the quantum state space expands exponentially with the number of qubits, observed quantities concentrate around their mean value at an exponential rate, and the gradients likewise become exponentially small.[26] This situation is known as the barren plateau problem, because most of the initial parameters are trapped on a "plateau" of almost zero gradient, which approximates random wandering[26] rather than gradient descent. This makes the model untrainable. In fact, not only QNNs but almost all deeper VQA algorithms have this problem. In the present NISQ era, this is one of the problems that has to be solved if more applications are to be made of the various VQA algorithms, including QNNs.
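The two equations referenced above do not survive in this text. A hedged reconstruction of their commonly used forms (assuming a mean-squared-error cost over the n training inputs for the classical case, and a training-set-averaged fidelity over N training pairs for the quantum case) is:

% Equation 1 (classical): quadratic cost, minimized at C(w,b) = 0
C(w,b) = \frac{1}{2n} \sum_{x} \left\lVert y(x) - a^{\text{out}}(x) \right\rVert^{2}

% Equation 2 (quantum): average fidelity with the desired output states, maximized at C = 1
C = \frac{1}{N} \sum_{k=1}^{N} \langle \phi^{\text{out}}_{k} \rvert \, \rho^{\text{out}}_{k} \, \lvert \phi^{\text{out}}_{k} \rangle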
https://en.wikipedia.org/wiki/Quantum_neural_network
Theweighted arithmetic meanis similar to an ordinaryarithmetic mean(the most common type ofaverage), except that instead of each of the data points contributing equally to the final average, some data points contribute more than others. The notion of weighted mean plays a role indescriptive statisticsand also occurs in a more general form in several other areas of mathematics. If all the weights are equal, then the weighted mean is the same as thearithmetic mean. While weighted means generally behave in a similar fashion to arithmetic means, they do have a few counterintuitive properties, as captured for instance inSimpson's paradox. Given two schoolclasses—onewith 20 students, one with 30students—andtest grades in each class as follows: The mean for the morning class is 80 and the mean of the afternoon class is 90. The unweighted mean of the two means is 85. However, this does not account for the difference in number of students in each class (20 versus 30); hence the value of 85 does not reflect the average student grade (independent of class). The average student grade can be obtained by averaging all the grades, without regard to classes (add all the grades up and divide by the total number of students):x¯=430050=86.{\displaystyle {\bar {x}}={\frac {4300}{50}}=86.} Or, this can be accomplished by weighting the class means by the number of students in each class. The larger class is given more "weight": Thus, the weighted mean makes it possible to find the mean average student grade without knowing each student's score. Only the class means and the number of students in each class are needed. Since only therelativeweights are relevant, any weighted mean can be expressed using coefficients that sum to one. Such a linear combination is called aconvex combination. Using the previous example, we would get the following weights: Then, apply the weights like this: Formally, the weighted mean of a non-empty finitetupleof data(x1,x2,…,xn){\displaystyle \left(x_{1},x_{2},\dots ,x_{n}\right)}, with corresponding non-negativeweights(w1,w2,…,wn){\displaystyle \left(w_{1},w_{2},\dots ,w_{n}\right)}is which expands to: Therefore, data elements with a high weight contribute more to the weighted mean than do elements with a low weight. The weights may not be negative in order for the equation to work[a]. Some may be zero, but not all of them (since division by zero is not allowed). The formulas are simplified when the weights are normalized such that they sum up to 1, i.e.,∑i=1nwi′=1{\textstyle \sum \limits _{i=1}^{n}{w_{i}'}=1}. For such normalized weights, the weighted mean is equivalently: One can always normalize the weights by making the following transformation on the original weights: Theordinary mean1n∑i=1nxi{\textstyle {\frac {1}{n}}\sum \limits _{i=1}^{n}{x_{i}}}is a special case of the weighted mean where all data have equal weights. 
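The class-size example above can be reproduced in a few lines of code; a minimal sketch, using the two class means weighted by the class sizes from the example:

#include <stdio.h>

/* Class means 80 and 90, weighted by the class sizes 20 and 30. */
int main(void)
{
    double means[]   = {80.0, 90.0};
    double weights[] = {20.0, 30.0};
    double num = 0.0, den = 0.0;

    for (int i = 0; i < 2; i++) {
        num += weights[i] * means[i];   /* accumulate w_i * x_i */
        den += weights[i];              /* accumulate w_i       */
    }
    printf("weighted mean = %.1f\n", num / den);   /* prints 86.0 */
    return 0;
}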
If the data elements areindependent and identically distributed random variableswith varianceσ2{\displaystyle \sigma ^{2}}, thestandard error of the weighted mean,σx¯{\displaystyle \sigma _{\bar {x}}}, can be shown viauncertainty propagationto be: For the weighted mean of a list of data for which each elementxi{\displaystyle x_{i}}potentially comes from a differentprobability distributionwith knownvarianceσi2{\displaystyle \sigma _{i}^{2}}, all having the same mean, one possible choice for the weights is given by the reciprocal of variance: The weighted mean in this case is: and thestandard error of the weighted mean (with inverse-variance weights)is: Note this reduces toσx¯2=σ02/n{\displaystyle \sigma _{\bar {x}}^{2}=\sigma _{0}^{2}/n}when allσi=σ0{\displaystyle \sigma _{i}=\sigma _{0}}. It is a special case of the general formula in previous section, The equations above can be combined to obtain: The significance of this choice is that this weighted mean is themaximum likelihood estimatorof the mean of the probability distributions under the assumption that they are independent andnormally distributedwith the same mean. The weighted sample mean,x¯{\displaystyle {\bar {x}}}, is itself a random variable. Its expected value and standard deviation are related to the expected values and standard deviations of the observations, as follows. For simplicity, we assume normalized weights (weights summing to one). If the observations have expected valuesE(xi)=μi,{\displaystyle E(x_{i})={\mu _{i}},}then the weighted sample mean has expectationE(x¯)=∑i=1nwi′μi.{\displaystyle E({\bar {x}})=\sum _{i=1}^{n}{w_{i}'\mu _{i}}.}In particular, if the means are equal,μi=μ{\displaystyle \mu _{i}=\mu }, then the expectation of the weighted sample mean will be that value,E(x¯)=μ.{\displaystyle E({\bar {x}})=\mu .} When treating the weights as constants, and having a sample ofnobservations fromuncorrelatedrandom variables, all with the samevarianceandexpectation(as is the case fori.i.drandom variables), then the variance of the weighted mean can be estimated as the multiplication of the unweighted variance byKish's design effect(seeproof): Withσ^y2=∑i=1n(yi−y¯)2n−1{\displaystyle {\hat {\sigma }}_{y}^{2}={\frac {\sum _{i=1}^{n}(y_{i}-{\bar {y}})^{2}}{n-1}}},w¯=∑i=1nwin{\displaystyle {\bar {w}}={\frac {\sum _{i=1}^{n}w_{i}}{n}}}, andw2¯=∑i=1nwi2n{\displaystyle {\overline {w^{2}}}={\frac {\sum _{i=1}^{n}w_{i}^{2}}{n}}} However, this estimation is rather limited due to the strong assumption about theyobservations. This has led to the development of alternative, more general, estimators. From amodel basedperspective, we are interested in estimating the variance of the weighted mean when the differentyi{\displaystyle y_{i}}are noti.i.drandom variables. An alternative perspective for this problem is that of some arbitrarysampling designof the data in which units areselected with unequal probabilities(with replacement).[1]: 306 InSurvey methodology, the population mean, of some quantity of interesty, is calculated by taking an estimation of the total ofyover all elements in the population (Yor sometimesT) and dividing it by the population size – either known (N{\displaystyle N}) or estimated (N^{\displaystyle {\hat {N}}}). In this context, each value ofyis considered constant, and the variability comes from the selection procedure. This in contrast to "model based" approaches in which the randomness is often described in the y values. 
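Returning to the inverse-variance weighting described at the start of this passage, a minimal numeric sketch (the three measurements and their variances are made-up illustrative values): each estimate is weighted by 1/σ_i², and the standard error of the combined estimate is 1/√(Σ 1/σ_i²).

#include <math.h>
#include <stdio.h>

int main(void)
{
    double x[]   = {10.2, 9.8, 10.5};   /* independent estimates of the same mean */
    double var[] = {0.04, 0.01, 0.09};  /* their known variances */
    double num = 0.0, wsum = 0.0;

    for (int i = 0; i < 3; i++) {
        double w = 1.0 / var[i];        /* inverse-variance weight */
        num  += w * x[i];
        wsum += w;
    }
    printf("weighted mean  = %f\n", num / wsum);
    printf("standard error = %f\n", 1.0 / sqrt(wsum));   /* link with -lm */
    return 0;
}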
Thesurvey samplingprocedure yields a series ofBernoulliindicator values (Ii{\displaystyle I_{i}}) that get 1 if some observationiis in the sample and 0 if it was not selected. This can occur with fixed sample size, or varied sample size sampling (e.g.:Poisson sampling). The probability of some element to be chosen, given a sample, is denoted asP(Ii=1∣Some sample of sizen)=πi{\displaystyle P(I_{i}=1\mid {\text{Some sample of size }}n)=\pi _{i}}, and the one-draw probability of selection isP(Ii=1|one sample draw)=pi≈πin{\displaystyle P(I_{i}=1|{\text{one sample draw}})=p_{i}\approx {\frac {\pi _{i}}{n}}}(If N is very large and eachpi{\displaystyle p_{i}}is very small). For the following derivation we'll assume that the probability of selecting each element is fully represented by these probabilities.[2]: 42, 43, 51I.e.: selecting some element will not influence the probability of drawing another element (this doesn't apply for things such ascluster samplingdesign). Since each element (yi{\displaystyle y_{i}}) is fixed, and the randomness comes from it being included in the sample or not (Ii{\displaystyle I_{i}}), we often talk about the multiplication of the two, which is a random variable. To avoid confusion in the following section, let's call this term:yi′=yiIi{\displaystyle y'_{i}=y_{i}I_{i}}. With the following expectancy:E[yi′]=yiE[Ii]=yiπi{\displaystyle E[y'_{i}]=y_{i}E[I_{i}]=y_{i}\pi _{i}}; and variance:V[yi′]=yi2V[Ii]=yi2πi(1−πi){\displaystyle V[y'_{i}]=y_{i}^{2}V[I_{i}]=y_{i}^{2}\pi _{i}(1-\pi _{i})}. When each element of the sample is inflated by the inverse of its selection probability, it is termed theπ{\displaystyle \pi }-expandedyvalues, i.e.:yˇi=yiπi{\displaystyle {\check {y}}_{i}={\frac {y_{i}}{\pi _{i}}}}. A related quantity isp{\displaystyle p}-expandedyvalues:yipi=nyˇi{\displaystyle {\frac {y_{i}}{p_{i}}}=n{\check {y}}_{i}}.[2]: 42, 43, 51, 52As above, we can add a tick mark if multiplying by the indicator function. I.e.:yˇi′=Iiyˇi=Iiyiπi{\displaystyle {\check {y}}'_{i}=I_{i}{\check {y}}_{i}={\frac {I_{i}y_{i}}{\pi _{i}}}} In thisdesign basedperspective, the weights, used in the numerator of the weighted mean, are obtained from taking the inverse of the selection probability (i.e.: the inflation factor). I.e.:wi=1πi≈1n×pi{\displaystyle w_{i}={\frac {1}{\pi _{i}}}\approx {\frac {1}{n\times p_{i}}}}. If the population sizeNis known we can estimate the population mean usingY¯^knownN=Y^pwrN≈∑i=1nwiyi′N{\displaystyle {\hat {\bar {Y}}}_{{\text{known }}N}={\frac {{\hat {Y}}_{pwr}}{N}}\approx {\frac {\sum _{i=1}^{n}w_{i}y'_{i}}{N}}}. If thesampling designis one that results in a fixed sample sizen(such as inpps sampling), then the variance of this estimator is: The general formula can be developed like this: The population total is denoted asY=∑i=1Nyi{\displaystyle Y=\sum _{i=1}^{N}y_{i}}and it may be estimated by the (unbiased)Horvitz–Thompson estimator, also called theπ{\displaystyle \pi }-estimator. This estimator can be itself estimated using thepwr-estimator (i.e.:p{\displaystyle p}-expanded with replacement estimator, or "probability with replacement" estimator). 
With the above notation, it is:Y^pwr=1n∑i=1nyi′pi=∑i=1nyi′npi≈∑i=1nyi′πi=∑i=1nwiyi′{\displaystyle {\hat {Y}}_{pwr}={\frac {1}{n}}\sum _{i=1}^{n}{\frac {y'_{i}}{p_{i}}}=\sum _{i=1}^{n}{\frac {y'_{i}}{np_{i}}}\approx \sum _{i=1}^{n}{\frac {y'_{i}}{\pi _{i}}}=\sum _{i=1}^{n}w_{i}y'_{i}}.[2]: 51 The estimated variance of thepwr-estimator is given by:[2]: 52Var⁡(Y^pwr)=nn−1∑i=1n(wiyi−wy¯)2{\displaystyle \operatorname {Var} ({\hat {Y}}_{pwr})={\frac {n}{n-1}}\sum _{i=1}^{n}\left(w_{i}y_{i}-{\overline {wy}}\right)^{2}}wherewy¯=∑i=1nwiyin{\displaystyle {\overline {wy}}=\sum _{i=1}^{n}{\frac {w_{i}y_{i}}{n}}}. The above formula was taken from Sarndal et al. (1992) (also presented in Cochran 1977), but was written differently.[2]: 52[1]: 307 (11.35)The left side is how the variance was written and the right side is how we've developed the weighted version: Var⁡(Y^pwr)=1n1n−1∑i=1n(yipi−Y^pwr)2=1n1n−1∑i=1n(nnyipi−nn∑i=1nwiyi)2=1n1n−1∑i=1n(nyiπi−n∑i=1nwiyin)2=n2n1n−1∑i=1n(wiyi−wy¯)2=nn−1∑i=1n(wiyi−wy¯)2{\displaystyle {\begin{aligned}\operatorname {Var} ({\hat {Y}}_{\text{pwr}})&={\frac {1}{n}}{\frac {1}{n-1}}\sum _{i=1}^{n}\left({\frac {y_{i}}{p_{i}}}-{\hat {Y}}_{pwr}\right)^{2}\\&={\frac {1}{n}}{\frac {1}{n-1}}\sum _{i=1}^{n}\left({\frac {n}{n}}{\frac {y_{i}}{p_{i}}}-{\frac {n}{n}}\sum _{i=1}^{n}w_{i}y_{i}\right)^{2}={\frac {1}{n}}{\frac {1}{n-1}}\sum _{i=1}^{n}\left(n{\frac {y_{i}}{\pi _{i}}}-n{\frac {\sum _{i=1}^{n}w_{i}y_{i}}{n}}\right)^{2}\\&={\frac {n^{2}}{n}}{\frac {1}{n-1}}\sum _{i=1}^{n}\left(w_{i}y_{i}-{\overline {wy}}\right)^{2}\\&={\frac {n}{n-1}}\sum _{i=1}^{n}\left(w_{i}y_{i}-{\overline {wy}}\right)^{2}\end{aligned}}} And we got to the formula from above. An alternative term, for when the sampling has a random sample size (as inPoisson sampling), is presented in Sarndal et al. (1992) as:[2]: 182 Var⁡(Y¯^pwr (knownN))=1N2∑i=1n∑j=1n(Δˇijyˇiyˇj){\displaystyle \operatorname {Var} ({\hat {\bar {Y}}}_{{\text{pwr (known }}N{\text{)}}})={\frac {1}{N^{2}}}\sum _{i=1}^{n}\sum _{j=1}^{n}\left({\check {\Delta }}_{ij}{\check {y}}_{i}{\check {y}}_{j}\right)} Withyˇi=yiπi{\displaystyle {\check {y}}_{i}={\frac {y_{i}}{\pi _{i}}}}. 
Also,C(Ii,Ij)=πij−πiπj=Δij{\displaystyle C(I_{i},I_{j})=\pi _{ij}-\pi _{i}\pi _{j}=\Delta _{ij}}whereπij{\displaystyle \pi _{ij}}is the probability of selecting both i and j.[2]: 36AndΔˇij=1−πiπjπij{\displaystyle {\check {\Delta }}_{ij}=1-{\frac {\pi _{i}\pi _{j}}{\pi _{ij}}}}, and for i=j:Δˇii=1−πiπiπi=1−πi{\displaystyle {\check {\Delta }}_{ii}=1-{\frac {\pi _{i}\pi _{i}}{\pi _{i}}}=1-\pi _{i}}.[2]: 43 If the selection probability are uncorrelated (i.e.:∀i≠j:C(Ii,Ij)=0{\displaystyle \forall i\neq j:C(I_{i},I_{j})=0}), and when assuming the probability of each element is very small, then: We assume that(1−πi)≈1{\displaystyle (1-\pi _{i})\approx 1}and thatVar⁡(Y^pwr (knownN))=1N2∑i=1n∑j=1n(Δˇijyˇiyˇj)=1N2∑i=1n(Δˇiiyˇiyˇi)=1N2∑i=1n((1−πi)yiπiyiπi)=1N2∑i=1n(wiyi)2{\displaystyle {\begin{aligned}\operatorname {Var} ({\hat {Y}}_{{\text{pwr (known }}N{\text{)}}})&={\frac {1}{N^{2}}}\sum _{i=1}^{n}\sum _{j=1}^{n}\left({\check {\Delta }}_{ij}{\check {y}}_{i}{\check {y}}_{j}\right)\\&={\frac {1}{N^{2}}}\sum _{i=1}^{n}\left({\check {\Delta }}_{ii}{\check {y}}_{i}{\check {y}}_{i}\right)\\&={\frac {1}{N^{2}}}\sum _{i=1}^{n}\left((1-\pi _{i}){\frac {y_{i}}{\pi _{i}}}{\frac {y_{i}}{\pi _{i}}}\right)\\&={\frac {1}{N^{2}}}\sum _{i=1}^{n}\left(w_{i}y_{i}\right)^{2}\end{aligned}}} The previous section dealt with estimating the population mean as a ratio of an estimated population total (Y^{\displaystyle {\hat {Y}}}) with a known population size (N{\displaystyle N}), and the variance was estimated in that context. Another common case is that the population size itself (N{\displaystyle N}) is unknown and is estimated using the sample (i.e.:N^{\displaystyle {\hat {N}}}). The estimation ofN{\displaystyle N}can be described as the sum of weights. So whenwi=1πi{\displaystyle w_{i}={\frac {1}{\pi _{i}}}}we getN^=∑i=1nwiIi=∑i=1nIiπi=∑i=1n1ˇi′{\displaystyle {\hat {N}}=\sum _{i=1}^{n}w_{i}I_{i}=\sum _{i=1}^{n}{\frac {I_{i}}{\pi _{i}}}=\sum _{i=1}^{n}{\check {1}}'_{i}}. With the above notation, the parameter we care about is the ratio of the sums ofyi{\displaystyle y_{i}}s, and 1s. I.e.:R=Y¯=∑i=1Nyiπi∑i=1N1πi=∑i=1Nyˇi∑i=1N1ˇi=∑i=1Nwiyi∑i=1Nwi{\displaystyle R={\bar {Y}}={\frac {\sum _{i=1}^{N}{\frac {y_{i}}{\pi _{i}}}}{\sum _{i=1}^{N}{\frac {1}{\pi _{i}}}}}={\frac {\sum _{i=1}^{N}{\check {y}}_{i}}{\sum _{i=1}^{N}{\check {1}}_{i}}}={\frac {\sum _{i=1}^{N}w_{i}y_{i}}{\sum _{i=1}^{N}w_{i}}}}. We can estimate it using our sample with:R^=Y¯^=∑i=1NIiyiπi∑i=1NIi1πi=∑i=1Nyˇi′∑i=1N1ˇi′=∑i=1Nwiyi′∑i=1Nwi1i′=∑i=1nwiyi′∑i=1nwi1i′=y¯w{\displaystyle {\hat {R}}={\hat {\bar {Y}}}={\frac {\sum _{i=1}^{N}I_{i}{\frac {y_{i}}{\pi _{i}}}}{\sum _{i=1}^{N}I_{i}{\frac {1}{\pi _{i}}}}}={\frac {\sum _{i=1}^{N}{\check {y}}'_{i}}{\sum _{i=1}^{N}{\check {1}}'_{i}}}={\frac {\sum _{i=1}^{N}w_{i}y'_{i}}{\sum _{i=1}^{N}w_{i}1'_{i}}}={\frac {\sum _{i=1}^{n}w_{i}y'_{i}}{\sum _{i=1}^{n}w_{i}1'_{i}}}={\bar {y}}_{w}}. As we moved from using N to using n, we actually know that all the indicator variables get 1, so we could simply write:y¯w=∑i=1nwiyi∑i=1nwi{\displaystyle {\bar {y}}_{w}={\frac {\sum _{i=1}^{n}w_{i}y_{i}}{\sum _{i=1}^{n}w_{i}}}}. 
This will be theestimandfor specific values of y and w, but the statistical properties comes when including the indicator variabley¯w=∑i=1nwiyi′∑i=1nwi1i′{\displaystyle {\bar {y}}_{w}={\frac {\sum _{i=1}^{n}w_{i}y'_{i}}{\sum _{i=1}^{n}w_{i}1'_{i}}}}.[2]: 162, 163, 176 This is called aRatio estimatorand it is approximately unbiased forR.[2]: 182 In this case, the variability of theratiodepends on the variability of the random variables both in the numerator and the denominator - as well as their correlation. Since there is no closed analytical form to compute this variance, various methods are used for approximate estimation. PrimarilyTaylor seriesfirst-order linearization, asymptotics, and bootstrap/jackknife.[2]: 172The Taylor linearization method could lead to under-estimation of the variance for small sample sizes in general, but that depends on the complexity of the statistic. For the weighted mean, the approximate variance is supposed to be relatively accurate even for medium sample sizes.[2]: 176For when the sampling has a random sample size (as inPoisson sampling), it is as follows:[2]: 182 Ifπi≈pin{\displaystyle \pi _{i}\approx p_{i}n}, then either usingwi=1πi{\displaystyle w_{i}={\frac {1}{\pi _{i}}}}orwi=1pi{\displaystyle w_{i}={\frac {1}{p_{i}}}}would give the same estimator, since multiplyingwi{\displaystyle w_{i}}by some factor would lead to the same estimator. It also means that if we scale the sum of weights to be equal to a known-from-before population sizeN, the variance calculation would look the same. When all weights are equal to one another, this formula is reduced to the standard unbiased variance estimator. The Taylor linearization states that for a general ratio estimator of two sums (R^=Y^Z^{\displaystyle {\hat {R}}={\frac {\hat {Y}}{\hat {Z}}}}), they can be expanded around the true value R, and give:[2]: 178 R^=Y^Z^=∑i=1nwiyi′∑i=1nwizi′≈R+1Z∑i=1n(yi′πi−Rzi′πi){\displaystyle {\hat {R}}={\frac {\hat {Y}}{\hat {Z}}}={\frac {\sum _{i=1}^{n}w_{i}y'_{i}}{\sum _{i=1}^{n}w_{i}z'_{i}}}\approx R+{\frac {1}{Z}}\sum _{i=1}^{n}\left({\frac {y'_{i}}{\pi _{i}}}-R{\frac {z'_{i}}{\pi _{i}}}\right)} And the variance can be approximated by:[2]: 178, 179 V(R^)^=1Z^2∑i=1n∑j=1n(Δˇijyi−R^ziπiyj−R^zjπj)=1Z^2[V(Y^)^+R^V(Z^)^−2R^C^(Y^,Z^)]{\displaystyle {\widehat {V({\hat {R}})}}={\frac {1}{{\hat {Z}}^{2}}}\sum _{i=1}^{n}\sum _{j=1}^{n}\left({\check {\Delta }}_{ij}{\frac {y_{i}-{\hat {R}}z_{i}}{\pi _{i}}}{\frac {y_{j}-{\hat {R}}z_{j}}{\pi _{j}}}\right)={\frac {1}{{\hat {Z}}^{2}}}\left[{\widehat {V({\hat {Y}})}}+{\hat {R}}{\widehat {V({\hat {Z}})}}-2{\hat {R}}{\hat {C}}({\hat {Y}},{\hat {Z}})\right]}. The termC^(Y^,Z^){\displaystyle {\hat {C}}({\hat {Y}},{\hat {Z}})}is the estimated covariance between the estimated sum of Y and estimated sum of Z. Since this is thecovariance of two sums of random variables, it would include many combinations of covariances that will depend on the indicator variables. If the selection probability are uncorrelated (i.e.:∀i≠j:Δij=C(Ii,Ij)=0{\displaystyle \forall i\neq j:\Delta _{ij}=C(I_{i},I_{j})=0}), this term would still include a summation ofncovariances for each elementibetweenyi′=Iiyi{\displaystyle y'_{i}=I_{i}y_{i}}andzi′=Iizi{\displaystyle z'_{i}=I_{i}z_{i}}. This helps illustrate that this formula incorporates the effect of correlation between y and z on the variance of the ratio estimators. 
When defining $z_i = 1$, the above becomes:[2]: 182

$\widehat{V({\hat{R}})} = \widehat{V({\bar{y}}_w)} = \frac{1}{\hat{N}^2} \sum_{i=1}^{n} \sum_{j=1}^{n} \left( \check{\Delta}_{ij} \frac{y_i - \bar{y}_w}{\pi_i} \frac{y_j - \bar{y}_w}{\pi_j} \right)$

If the selection probabilities are uncorrelated (i.e.: $\forall i \neq j: \Delta_{ij} = C(I_i, I_j) = 0$), and when the probability of each element is assumed to be very small (i.e.: $(1 - \pi_i) \approx 1$), then the above reduces to:

$\widehat{V({\bar{y}}_w)} = \frac{1}{\hat{N}^2} \sum_{i=1}^{n} \left( (1 - \pi_i) \frac{y_i - \bar{y}_w}{\pi_i} \right)^2 = \frac{1}{\left( \sum_{i=1}^{n} w_i \right)^2} \sum_{i=1}^{n} w_i^2 (y_i - \bar{y}_w)^2$

A similar re-creation of the proof (up to some mistakes at the end) was provided by Thomas Lumley on Cross Validated.[3]

We have (at least) two versions of the variance of the weighted mean: one with known and one with unknown population size. There is no uniformly better approach, but the literature presents several arguments to prefer the population-estimation version (even when the population size is known).[2]: 188 For example: if all the y values are constant, the estimator with unknown population size will give the correct result, while the one with known population size will have some variability. Also, when the sample size itself is random (e.g.: in Poisson sampling), the version with unknown population size is considered more stable. Lastly, if the proportion of sampling is negatively correlated with the values (i.e.: a smaller chance to sample an observation that is large), then the unknown-population-size version slightly compensates for that.

For the trivial case in which all the weights are equal to 1, the above formula is just like the regular formula for the variance of the mean (but notice that it uses the maximum likelihood estimator for the variance instead of the unbiased variance; i.e.: it divides by n instead of by n − 1).

It has been shown by Gatz et al. (1995) that, in comparison to bootstrapping methods, the following (variance estimation of the ratio-mean using Taylor series linearization) is a reasonable estimation for the square of the standard error of the mean (when used in the context of measuring chemical constituents),[4]: 1186 where $\bar{w} = \frac{\sum w_i}{n}$; Gatz et al. also give a further simplification. They mention that this formulation was published by Endlich et al. (1988) when treating the weighted mean as a combination of a weighted total estimator divided by an estimator of the population size,[5] based on the formulation published by Cochran (1977), as an approximation to the ratio mean. However, Endlich et al. did not seem to publish this derivation in their paper (even though they mention they used it), and Cochran's book includes a slightly different formulation.[1]: 155 Still, it is almost identical to the formulations described in the previous sections.
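A small sketch (not from the original text; the inputs are illustrative) of the reduced formula above for the variance of the weighted mean:

# Sketch of the reduced formula:
# Var(y-bar_w) ~ sum(w_i^2 * (y_i - y-bar_w)^2) / (sum w_i)^2,
# valid when the selection indicators are uncorrelated and every pi_i is small.
def weighted_mean_variance(y, w):
    ybar_w = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    return sum(wi ** 2 * (yi - ybar_w) ** 2 for wi, yi in zip(w, y)) / sum(w) ** 2

print(weighted_mean_variance([3.0, 5.0, 2.0], [10.0, 2.0, 4.0]))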
Because there is no closed analytical form for the variance of the weighted mean, it was proposed in the literature to rely on replication methods such as the Jackknife and Bootstrapping.[1]: 321

For uncorrelated observations with variances $\sigma_i^2$, the variance of the weighted sample mean is[citation needed]

$\sigma_{\bar{x}}^2 = \sum_{i=1}^{n} w_i'^2 \sigma_i^2$

whose square root $\sigma_{\bar{x}}$ can be called the standard error of the weighted mean (general case).[citation needed]

Consequently, if all the observations have equal variance, $\sigma_i^2 = \sigma_0^2$, the weighted sample mean will have variance

$\sigma_{\bar{x}}^2 = \sigma_0^2 \sum_{i=1}^{n} w_i'^2$

where $1/n \leq \sum_{i=1}^{n} w_i'^2 \leq 1$. The variance attains its maximum value, $\sigma_0^2$, when all weights except one are zero. Its minimum value is found when all the weights are equal (i.e., the unweighted mean), in which case we have $\sigma_{\bar{x}} = \sigma_0 / \sqrt{n}$, i.e., the variance degenerates into the standard error of the mean, squared.

Because one can always transform non-normalized weights to normalized weights, all formulas in this section can be adapted to non-normalized weights by replacing all $w_i$ by $w_i' = \frac{w_i}{\sum_{i=1}^{n} w_i}$.

Typically when a mean is calculated it is important to know the variance and standard deviation about that mean. When a weighted mean $\mu^*$ is used, the variance of the weighted sample is different from the variance of the unweighted sample. The biased weighted sample variance $\hat{\sigma}_w^2$ is defined similarly to the normal biased sample variance $\hat{\sigma}^2$:

$\hat{\sigma}^2 = \frac{\sum_{i=1}^{N} (x_i - \mu)^2}{N} \qquad \hat{\sigma}_w^2 = \frac{\sum_{i=1}^{N} w_i (x_i - \mu^*)^2}{\sum_{i=1}^{N} w_i}$

where $\sum_{i=1}^{N} w_i = 1$ for normalized weights. If the weights are frequency weights (and thus are random variables), it can be shown[citation needed] that $\hat{\sigma}_w^2$ is the maximum likelihood estimator of $\sigma^2$ for iid Gaussian observations.

For small samples, it is customary to use an unbiased estimator for the population variance. In normal unweighted samples, the N in the denominator (corresponding to the sample size) is changed to N − 1 (see Bessel's correction). In the weighted setting, there are actually two different unbiased estimators, one for the case of frequency weights and another for the case of reliability weights.

If the weights are frequency weights (where a weight equals the number of occurrences), then the unbiased estimator is:

$s^2 = \frac{\sum_{i=1}^{N} w_i (x_i - \mu^*)^2}{\sum_{i=1}^{N} w_i - 1}$

This effectively applies Bessel's correction for frequency weights. For example, if values $\{2, 2, 4, 5, 5, 5\}$ are drawn from the same distribution, then we can treat this set as an unweighted sample, or we can treat it as the weighted sample $\{2, 4, 5\}$ with corresponding weights $\{2, 1, 3\}$, and we get the same result either way.

If the frequency weights $\{w_i\}$ are normalized to 1, then the correct expression after Bessel's correction uses the total number of samples $\sum_{i=1}^{N} w_i$ (not $N$), computed from the original frequency weights. In any case, the information on the total number of samples is necessary in order to obtain an unbiased correction, even if $w_i$ has a different meaning other than frequency weight.
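A short sketch (not from the original text) contrasting the biased estimator with the frequency-weight unbiased estimator, using the {2, 4, 5} / {2, 1, 3} example above; the function name is illustrative.

# Sketch: biased weighted variance vs. the frequency-weight unbiased version
# (Bessel's correction divides by the total count sum(w) - 1).
import statistics

def weighted_variances(x, w):
    mean = sum(wi * xi for wi, xi in zip(w, x)) / sum(w)
    ss = sum(wi * (xi - mean) ** 2 for wi, xi in zip(w, x))
    return ss / sum(w), ss / (sum(w) - 1)    # biased, then unbiased (frequency weights)

print(weighted_variances([2, 4, 5], [2, 1, 3]))
print(statistics.variance([2, 2, 4, 5, 5, 5]))   # matches the unbiased value above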
This estimator can be unbiased only if the weights are neither standardized nor normalized, since these processes change the data's mean and variance and thus lead to a loss of the base rate (the population count, which is a requirement for Bessel's correction).

If the weights are instead reliability weights (non-random values reflecting the sample's relative trustworthiness, often derived from the sample variance), we can determine a correction factor to yield an unbiased estimator. Assuming each random variable is sampled from the same distribution with mean $\mu$ and actual variance $\sigma_{\text{actual}}^2$, and taking expectations, we have

$\operatorname{E}[\hat{\sigma}^2] = \frac{N-1}{N} \sigma_{\text{actual}}^2 \qquad \operatorname{E}[\hat{\sigma}_w^2] = \left( 1 - \frac{V_2}{V_1^2} \right) \sigma_{\text{actual}}^2$

where $V_1 = \sum_{i=1}^{N} w_i$ and $V_2 = \sum_{i=1}^{N} w_i^2$. Therefore, the bias in our estimator is $\left( 1 - \frac{V_2}{V_1^2} \right)$, analogous to the $\left( \frac{N-1}{N} \right)$ bias in the unweighted estimator (also notice that $V_1^2 / V_2 = N_{\text{eff}}$ is the effective sample size). This means that to unbias our estimator we need to pre-divide by $1 - (V_2 / V_1^2)$, ensuring that the expected value of the estimated variance equals the actual variance of the sampling distribution.

The final unbiased estimate of the sample variance is:

$s_w^2 = \frac{\hat{\sigma}_w^2}{1 - \frac{V_2}{V_1^2}} = \frac{\sum_{i=1}^{N} w_i (x_i - \mu^*)^2}{V_1 - \frac{V_2}{V_1}}$

where $\operatorname{E}[s_w^2] = \sigma_{\text{actual}}^2$. The degrees of freedom of this weighted, unbiased sample variance vary accordingly from N − 1 down to 0. The standard deviation is simply the square root of the variance above. As a side note, other approaches have been described to compute the weighted sample variance.[7]

In a weighted sample, each row vector $\mathbf{x}_i$ (each set of single observations on each of the K random variables) is assigned a weight $w_i \geq 0$. Then the weighted mean vector $\mathbf{\mu}^*$ is given by

$\mathbf{\mu}^* = \frac{\sum_{i=1}^{N} w_i \mathbf{x}_i}{\sum_{i=1}^{N} w_i}$

and the weighted covariance matrix is given by:[8]

$\mathbf{C} = \frac{\sum_{i=1}^{N} w_i (\mathbf{x}_i - \mathbf{\mu}^*)^T (\mathbf{x}_i - \mathbf{\mu}^*)}{V_1}$

Similarly to the weighted sample variance, there are two different unbiased estimators depending on the type of the weights. If the weights are frequency weights, the unbiased weighted estimate of the covariance matrix $\mathbf{C}$, with Bessel's correction, is given by:[8]

$\mathbf{C} = \frac{\sum_{i=1}^{N} w_i (\mathbf{x}_i - \mathbf{\mu}^*)^T (\mathbf{x}_i - \mathbf{\mu}^*)}{V_1 - 1}$

This estimator can be unbiased only if the weights are neither standardized nor normalized, since these processes change the data's mean and variance and thus lead to a loss of the base rate (the population count, which is a requirement for Bessel's correction).

In the case of reliability weights, the weights are normalized: $V_1 = \sum_{i=1}^{N} w_i = 1$. (If they are not, divide the weights by their sum to normalize prior to calculating $V_1$.) Then the weighted mean vector $\mathbf{\mu}^*$ can be simplified to $\mathbf{\mu}^* = \sum_{i=1}^{N} w_i \mathbf{x}_i$, and the unbiased weighted estimate of the covariance matrix $\mathbf{C}$ is:[9]

$\mathbf{C} = \frac{V_1}{V_1^2 - V_2} \sum_{i=1}^{N} w_i (\mathbf{x}_i - \mathbf{\mu}^*)^T (\mathbf{x}_i - \mathbf{\mu}^*)$

The reasoning here is the same as in the previous section. Since we are assuming the weights are normalized, $V_1 = 1$ and this reduces to:

$\mathbf{C} = \frac{\sum_{i=1}^{N} w_i (\mathbf{x}_i - \mathbf{\mu}^*)^T (\mathbf{x}_i - \mathbf{\mu}^*)}{1 - V_2}$

If all the weights are the same, i.e. $w_i / V_1 = 1/N$, then the weighted mean and covariance reduce to the unweighted sample mean and covariance above.

The above generalizes easily to the case of taking the mean of vector-valued estimates. For example, estimates of position on a plane may have less certainty in one direction than another.
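Returning to the scalar case, the following sketch (not from the original text; illustrative inputs) applies the reliability-weight correction and reports the effective sample size mentioned above.

# Sketch: reliability-weight variance with the V2/V1^2 bias correction, plus
# the effective sample size N_eff = V1^2 / V2.
def reliability_weighted_variance(x, w):
    v1 = sum(w)
    v2 = sum(wi ** 2 for wi in w)
    mean = sum(wi * xi for wi, xi in zip(w, x)) / v1
    ss = sum(wi * (xi - mean) ** 2 for wi, xi in zip(w, x))
    unbiased = ss / (v1 - v2 / v1)     # divide out the (1 - V2/V1^2) bias factor
    n_eff = v1 ** 2 / v2
    return mean, unbiased, n_eff

print(reliability_weighted_variance([2.0, 4.0, 5.0], [0.2, 0.1, 0.3]))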
As in the scalar case, the weighted mean of multiple estimates can provide a maximum likelihood estimate. We simply replace the variance $\sigma^2$ by the covariance matrix $\mathbf{C}$ and the arithmetic inverse by the matrix inverse (both denoted in the same way, via superscripts); the weight matrix then reads:[10]

$\mathbf{W}_i = \mathbf{C}_i^{-1}$

The weighted mean in this case is:

$\bar{\mathbf{x}} = \mathbf{C}_{\bar{\mathbf{x}}} \left( \sum_{i=1}^{n} \mathbf{W}_i \mathbf{x}_i \right)$

(where the order of the matrix–vector product is not commutative), in terms of the covariance of the weighted mean:

$\mathbf{C}_{\bar{\mathbf{x}}} = \left( \sum_{i=1}^{n} \mathbf{W}_i \right)^{-1}$

For example, consider the weighted mean of the point [1 0] with high variance in the second component and [0 1] with high variance in the first component. Then the weighted mean is nearly [1 1], which makes sense: the [1 0] estimate is "compliant" in the second component and the [0 1] estimate is compliant in the first component.

In the general case, suppose that $\mathbf{X} = [x_1, \dots, x_n]^T$, $\mathbf{C}$ is the covariance matrix relating the quantities $x_i$, $\bar{x}$ is the common mean to be estimated, and $\mathbf{J}$ is a design matrix equal to a vector of ones $[1, \dots, 1]^T$ (of length $n$). The Gauss–Markov theorem states that the estimate of the mean having minimum variance is given by:

$\bar{x} = \sigma_{\bar{x}}^2 (\mathbf{J}^T \mathbf{W} \mathbf{X})$ and $\sigma_{\bar{x}}^2 = (\mathbf{J}^T \mathbf{W} \mathbf{J})^{-1}$, where $\mathbf{W} = \mathbf{C}^{-1}$.

Consider the time series of an independent variable $x$ and a dependent variable $y$, with $n$ observations sampled at discrete times $t_i$. In many common situations, the value of $y$ at time $t_i$ depends not only on $x_i$ but also on its past values. Commonly, the strength of this dependence decreases as the separation of observations in time increases. To model this situation, one may replace the independent variable by its sliding mean $z$ for a window size $m$.

In the scenario described in the previous section, most frequently the decrease in interaction strength obeys a negative exponential law. If the observations are sampled at equidistant times, then exponential decrease is equivalent to decrease by a constant fraction $0 < \Delta < 1$ at each time step. Setting $w = 1 - \Delta$, we can define $m$ normalized weights by dividing the powers $w^0, w^1, \dots, w^{m-1}$ by their sum $V_1$, where $V_1$ is the sum of the unnormalized weights. In this case $V_1$ simply approaches $V_1 = 1/(1-w)$ for large values of $m$.

The damping constant $w$ must correspond to the actual decrease of interaction strength. If this cannot be determined from theoretical considerations, then the following properties of exponentially decreasing weights are useful in making a suitable choice: at step $(1-w)^{-1}$, the weight approximately equals $e^{-1}(1-w) = 0.39(1-w)$, the tail area approximately equals $e^{-1}$, and the head area approximately equals $1 - e^{-1} = 0.61$. The tail area at step $n$ is $\leq e^{-n(1-w)}$.
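A small sketch (not from the original text) of such exponentially decreasing weights, under the assumption stated above that the unnormalized weights are w^0, w^1, ..., w^(m-1) (newest observation first) divided by their sum V_1:

# Sketch under the stated assumption: geometric weights normalized by V_1,
# which approaches 1/(1 - w) for large m.
def exponential_weights(w, m):
    unnormalized = [w ** k for k in range(m)]
    v1 = sum(unnormalized)                   # approaches 1/(1 - w) as m grows
    return [u / v1 for u in unnormalized]

def exponentially_weighted_mean(values, w):
    # values ordered from most recent to oldest
    weights = exponential_weights(w, len(values))
    return sum(wi * xi for wi, xi in zip(weights, values))

print(exponentially_weighted_mean([10.2, 10.0, 9.7, 9.5], w=0.8))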
When primarily the closest $n$ observations matter and the effect of the remaining observations can be ignored safely, then choose $w$ such that the tail area is sufficiently small.

The concept of weighted average can be extended to functions.[11] Weighted averages of functions play an important role in the systems of weighted differential and integral calculus.[12]

Weighted means are typically used to find the weighted mean of historical data, rather than theoretically generated data. In this case, there will be some error in the variance of each data point. Typically experimental errors may be underestimated because the experimenter does not take into account all sources of error in calculating the variance of each data point. In this event, the variance in the weighted mean must be corrected to account for the fact that $\chi^2$ is too large. The correction that must be made is

$\hat{\sigma}_{\bar{x}}^2 = \sigma_{\bar{x}}^2 \, \chi_\nu^2$

where $\chi_\nu^2$ is the reduced chi-squared:

$\chi_\nu^2 = \frac{1}{n-1} \sum_{i=1}^{n} \frac{(x_i - \bar{x})^2}{\sigma_i^2}$

The square root $\hat{\sigma}_{\bar{x}}$ can be called the standard error of the weighted mean (variance weights, scale corrected).

When all data variances are equal, $\sigma_i = \sigma_0$, they cancel out in the weighted mean variance, $\sigma_{\bar{x}}^2$, which again reduces to the standard error of the mean (squared), $\sigma_{\bar{x}}^2 = \sigma^2 / n$, formulated in terms of the sample standard deviation (squared).
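A brief sketch (not from the original text; illustrative measurements) of this scale correction, assuming the standard inverse-variance choice of weights, w_i = 1/sigma_i^2, for the "variance weights" named above:

# Sketch: inverse-variance weighted mean with the reduced-chi-squared
# scale correction.
def scale_corrected_weighted_mean(x, sigma):
    w = [1.0 / s ** 2 for s in sigma]          # variance weights w_i = 1/sigma_i^2
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sum(w)
    var_xbar = 1.0 / sum(w)                    # uncorrected variance of the mean
    chi2_nu = sum((xi - xbar) ** 2 / s ** 2 for xi, s in zip(x, sigma)) / (len(x) - 1)
    return xbar, var_xbar * chi2_nu            # mean and scale-corrected variance

print(scale_corrected_weighted_mean([10.1, 9.8, 10.4], [0.1, 0.2, 0.1]))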
https://en.wikipedia.org/wiki/Weighted_mean
A spoken dialog system (SDS) is a computer system able to converse with a human with voice. It has two essential components that do not exist in a written text dialog system: a speech recognizer and a text-to-speech module (written text dialog systems usually use other input systems provided by an OS). It can be further distinguished from command and control speech systems that can respond to requests but do not attempt to maintain continuity over time.

Spoken dialog systems vary in their complexity. Directed dialog systems are very simple and require that the developer create a graph (typically a tree) that manages the task but may not correspond to the needs of the user. Information access systems, typically based on forms, allow users some flexibility (for example in the order in which retrieval constraints are specified, or in the use of optional constraints) but are limited in their capabilities. Problem-solving dialog systems may allow human users to engage in a number of different activities that may include information access, plan construction and possible execution of the latter.

Pioneers in dialog systems include companies such as AT&T (with its speech recognizer system in the Seventies) and the CSELT laboratories, which led several European research projects during the Eighties (e.g. SUNDIAL) after the end of the DARPA project in the US.

The field of spoken dialog systems is quite large and includes research (featured at scientific conferences such as SIGdial and Interspeech) and a large industrial sector (with its own meetings such as SpeechTek and AVIOS).
https://en.wikipedia.org/wiki/Spoken_dialogue_systems
Simply stated,post-modern portfolio theory(PMPT) is an extension of the traditionalmodern portfolio theory(MPT) of Markowitz and Sharpe. Both theories provide analytical methods for rational investors to use diversification to optimize their investment portfolios. The essential difference between PMPT and MPT is that PMPT emphasizes the return thatmustbe earned on an investment in order to meet future, specified obligations, MPT is concerned only with the absolute return vis-a-vis the risk-free rate. The earliest published literature under the PMPT rubric was published by the principals of software developer Investment Technologies, LLC, Brian M. Rom and Kathleen W. Ferguson, in the Winter, 1993 and Fall, 1994 editions ofThe Journal of Investing. However, while the software tools resulting from the application of PMPT were innovations for practitioners, many of the ideas and concepts embodied in these applications had long and distinguished provenance in academic and research institutions worldwide. Empirical investigations began in 1981 at the Pension Research Institute (PRI) atSan Francisco State University. Dr. Hal Forsey and Dr. Frank Sortino were trying to apply Peter Fishburn's theory published in 1977 to Pension Fund Management. The result was an asset allocation model that PRI licensed Brian Rom to market in 1988. Mr. Rom coined the term PMPT and began using it to market portfolio optimization and performance measurement software developed by his company. These systems were built on the PRI downside- risk algorithms. Sortino and Steven Satchell at Cambridge University co-authored the first book on PMPT. This was intended as a graduate seminar text in portfolio management. A more recent book by Sortino was written for practitioners. The first publication in a major journal was co-authored by Sortino and Dr. Robert van der Meer, then at Shell Oil Netherlands. These concepts were popularized by articles and conference presentations by Sortino, Rom and others, including members of the now-defunct Salomon Bros.Skunk Works. Sortino claims the major contributors to the underlying theory are: Harry Markowitzlaid the foundations of MPT, the greatest contribution of which is[citation needed]the establishment of a formal risk/return framework for investment decision-making; seeMarkowitz model. By defining investment risk in quantitative terms, Markowitz gave investors a mathematical approach to asset-selection andportfolio management. But there are important limitations to the original MPT formulation. Two major limitations of MPT are its assumptions that: Stated another way, MPT is limited by measures of risk and return that do not always represent the realities of the investment markets. The assumption of a normal distribution is a major practical limitation, because it is symmetrical. Using the variance (or its square root, the standard deviation) implies that uncertainty about better-than-expected returns is equally averred as uncertainty about returns that are worse than expected. Furthermore, using the normal distribution to model the pattern of investment returns makes investment results with more upside than downside returns appear more risky than they really are. The converse distortion applies to distributions with a predominance of downside returns. The result is that using traditional MPT techniques for measuring investment portfolio construction and evaluation frequently does not accurately model investment reality. 
It has long been recognized that investors typically do not view as risky those returns above the minimum they must earn in order to achieve their investment objectives. They believe that risk has to do with the bad outcomes (i.e., returns below a required target), not the good outcomes (i.e., returns in excess of the target), and that losses weigh more heavily than gains. This view has been noted by researchers in finance, economics and psychology, including Sharpe (1964): "Under certain conditions the MVA can be shown to lead to unsatisfactory predictions of (investor) behavior. Markowitz suggests that a model based on the semivariance would be preferable; in light of the formidable computational problems, however, he bases his (MV) analysis on the mean and the standard deviation.[2]"

Recent advances in portfolio and financial theory, coupled with increased computing power, have also contributed to overcoming these limitations. In 1987, the Pension Research Institute at San Francisco State University developed the practical mathematical algorithms of PMPT that are in use today. These methods provide a framework that recognizes investors' preferences for upside over downside volatility. At the same time, a more robust model for the pattern of investment returns, the three-parameter lognormal distribution,[3] was introduced.

Downside risk (DR) is measured by target semi-deviation (the square root of target semivariance) and is termed downside deviation. It is expressed in percentages and therefore allows for rankings in the same way as standard deviation.

An intuitive way to view downside risk is the annualized standard deviation of returns below the target. Another is the square root of the probability-weighted squared below-target returns. The squaring of the below-target returns has the effect of penalizing failures quadratically. This is consistent with observations made on the behavior of individual decision-making under uncertainty. In the downside-deviation formula:

d = downside deviation (commonly known in the financial community as 'downside risk'); by extension, d² = downside variance;
t = the annual target return, originally termed the minimum acceptable return, or MAR;
r = the random variable representing the return for the distribution of annual returns f(r);
f(r) = the distribution for the annual returns, e.g. the three-parameter lognormal distribution.

For the reasons provided below, this continuous formula is preferred over a simpler discrete version that determines the standard deviation of below-target periodic returns taken from the return series.

1. The continuous form permits all subsequent calculations to be made using annual returns, which is the natural way for investors to specify their investment goals. The discrete form requires monthly returns for there to be sufficient data points to make a meaningful calculation, which in turn requires converting the annual target into a monthly target. This significantly affects the amount of risk that is identified. For example, a goal of earning 1% in every month of one year results in a greater risk than the seemingly equivalent goal of earning 12% in one year.

2. A second reason for strongly preferring the continuous form to the discrete form has been proposed by Sortino & Forsey (1996): "Before we make an investment, we don't know what the outcome will be... After the investment is made, and we want to measure its performance, all we know is what the outcome was, not what it could have been.
To cope with this uncertainty, we assume that a reasonable estimate of the range of possible returns, as well as the probabilities associated with estimation of those returns...In statistical terms, the shape of [this] uncertainty is called a probability distribution. In other words, looking at just the discrete monthly or annual values does not tell the whole story." Using the observed points to create a distribution is a staple of conventional performance measurement. For example, monthly returns are used to calculate a fund's mean and standard deviation. Using these values and the properties of the normal distribution, we can make statements such as the likelihood of losing money (even though no negative returns may actually have been observed), or the range within which two-thirds of all returns lies (even though the specific returns identifying this range have not necessarily occurred). Our ability to make these statements comes from the process of assuming the continuous form of the normal distribution and certain of its well-known properties. In PMPT an analogous process is followed: TheSortino ratio, developed in 1993 by Rom's company, Investment Technologies, LLC, was the first new element in the PMPT rubric. It is defined as: where r= the annualized rate of return, t= the target return, d= downside risk. The following table shows that this ratio is demonstrably superior to the traditionalSharpe ratioas a means for ranking investment results. The table shows risk-adjusted ratios for several major indexes using both Sortino and Sharpe ratios. The data cover the five years 1992-1996 and are based on monthly total returns. The Sortino ratio is calculated against a 9.0% target. As an example of the different conclusions that can be drawn using these two ratios, notice how the Lehman Aggregate and MSCI EAFE compare - the Lehman ranks higher using the Sharpe ratio whereas EAFE ranks higher using the Sortino ratio. In many cases, manager or index rankings will be different, depending on the risk-adjusted measure used. These patterns will change again for different values of t. For example, when t is close to the risk-free rate, the Sortino Ratio for T-Bill's will be higher than that for the S&P 500, while the Sharpe ratio remains unchanged. In March 2008, researchers at the Queensland Investment Corporation andQueensland University of Technologyshowed that for skewed return distributions, the Sortino ratio is superior to the Sharpe ratio as a measure of portfolio risk.[4] Volatility skewness is the second portfolio-analysis statistic introduced by Rom and Ferguson under the PMPT rubric. It measures the ratio of a distribution's percentage of total variance from returns above the mean, to the percentage of the distribution's total variance from returns below the mean. Thus, if a distribution is symmetrical ( as in the normal case, as is assumed under MPT), it has a volatility skewness of 1.00. Values greater than 1.00 indicate positive skewness; values less than 1.00 indicate negative skewness. While closely correlated with the traditional statistical measure of skewness (viz., the third moment of a distribution), the authors of PMPT argue that their volatility skewness measure has the advantage of being intuitively more understandable to non-statisticians who are the primary practical users of these tools. 
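As a rough illustration only (not from the article, which itself argues for fitting a continuous distribution rather than using discrete observed returns), a simple discrete sketch of the downside deviation and the Sortino ratio, with a made-up return series and a 9% target:

# Illustrative sketch: discrete downside deviation (square root of the mean of
# squared below-target returns) and the Sortino ratio (mean return minus target,
# divided by downside deviation).
def downside_deviation(returns, target):
    shortfalls = [min(r - target, 0.0) ** 2 for r in returns]
    return (sum(shortfalls) / len(returns)) ** 0.5

def sortino_ratio(returns, target):
    mean_return = sum(returns) / len(returns)
    return (mean_return - target) / downside_deviation(returns, target)

annual_returns = [0.15, 0.02, 0.22, -0.04, 0.18]
print(sortino_ratio(annual_returns, target=0.09))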
The importance of skewness lies in the fact that the more non-normal (i.e., skewed) a return series is, the more its true risk will be distorted by traditional MPT measures such as the Sharpe ratio. Thus, with the recent advent of hedging and derivative strategies, which are asymmetrical by design, MPT measures are essentially useless, while PMPT is able to capture significantly more of the true information contained in the returns under consideration. Many of the common market indices and the returns of stock and bond mutual funds cannot themselves always be assumed to be accurately represented by the normal distribution. Data: Monthly returns, January, 1991 through December, 1996. For a comprehensive survey of the early literature, see R. Libby and P.C. Fishburn [1977].
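A small sketch (not from the article; made-up returns) of the volatility skewness statistic defined above, i.e. the share of total variance contributed by returns above the mean divided by the share contributed by returns below the mean, which equals 1.00 for a symmetrical distribution:

# Sketch of volatility skewness: upside variance share / downside variance share.
def volatility_skewness(returns):
    mean = sum(returns) / len(returns)
    upside = sum((r - mean) ** 2 for r in returns if r > mean)
    downside = sum((r - mean) ** 2 for r in returns if r < mean)
    return upside / downside

print(volatility_skewness([0.15, 0.02, 0.22, -0.04, 0.18]))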
https://en.wikipedia.org/wiki/Post-modern_portfolio_theory
Language and Communication Technologies(LCT; also known ashuman language technologiesorlanguage technologyfor short) is the scientific study of technologies that explore language and communication. It is an interdisciplinary field that encompasses the fields ofcomputer science,linguisticsandcognitive science. One of the first problems to be studied in the 1950s, shortly after the invention of computers, was an LCT problem, namely the translation of human languages. The large amounts of funding poured intomachine translationtestifies to the perceived importance of the field, right from the beginning. It was also in this period that scholars started to develop theories of language and communication based on scientific methods. In the case of language, it wasNoam Chomskywho refines the goal of linguistics as a quest for a formal description of language,[1]whilstClaude ShannonandWarren Weaverprovided a mathematical theory that linked communication with information.[2] Computers and related technologies have provided a physical and conceptual framework within which scientific studies concerning the notion of communication within a computational framework could be pursued. Indeed, this framework has been fruitful on a number of levels. For a start, it has given birth to a new discipline, known asnatural language processing(NLP), orcomputational linguistics(CL). This discipline studies, from a computational perspective, all levels of language from the production of speech to the meanings of texts and dialogues. And over the past 40 years, NLP has produced an impressive computational infrastructure of resources, techniques, and tools for analyzing sound structure (phonology), word structure (morphology), grammatical structure (syntax) and meaning structure (semantics). As well as being important for language-based applications, this computational infrastructure makes it possible to investigate the structure of human language and communication at a deeper scientific level than was ever previously possible. Moreover, NLP fits in naturally with other branches of computer science, and in particular, withartificial intelligence(AI).[3]From an AI perspective, language use is regarded as a manifestation of intelligent behaviour by an active agent. The emphasis in AI-based approaches to language and communication is on the computational infrastructure required to integrate linguistic performance into a general theory of intelligent agents that includes, for example, learning generalizations on the basis of particular experience, the ability to plan and reason about intentionally produced utterances, the design of utterances that will fulfill a particular set of goals. Such work tends to be highly interdisciplinary in nature, as it needs to draw on ideas from such fields aslinguistics,cognitive psychology, andsociology. LCT draws on and incorporates knowledge and research from all these fields. Language and communication are so fundamental to human activity that it is not at all surprising to find that Language and Communication Technologies affect all major areas of society, including health, education, finance, commerce, and travel. Modern LCT is based on a dual tradition of symbols and statistics. This means that nowadays research on language requires access to large databases of information about words and their properties, to large scale computational grammars, to computational tools for working with all levels of language, and to efficient inference systems for performing reasoning. 
By working computationally it is possible to get to grips with the deeper structure of natural languages, and in particular, to model the crucial interactions between the various levels of language and other cognitive faculties. Relevant areas of research in LCT include: The increasing interest in the field is proved by the existence of several European Masters in this dynamic research area:[4]Degree programmes of the University of Groningeninclude Language and Communication Technologies. Erasmus MundusMasters:
https://en.wikipedia.org/wiki/Language_and_Communication_Technologies
Amononymis a name composed of only one word. An individual who is known and addressed by a mononym is amononymous person. A mononym may be the person's only name, given to them at birth. This was routine in most ancient societies, and remains common in modern societies such as inAfghanistan,[1]Bhutan, some parts ofIndonesia(especially by olderJavanesepeople),Myanmar,Mongolia,Tibet,[2]andSouth India. In other cases, a person may select a single name from theirpolynymor adopt a mononym as a chosen name,pen name,stage name, orregnal name. A popularnicknamemay effectively become a mononym, in some cases adopted legally. For some historical figures, a mononym is the only name that is still known today. The wordmononymcomes from Englishmono-("one", "single") and-onym("name", "word"), ultimately fromGreekmónos(μόνος, "single"), andónoma(ὄνομα, "name").[a][b] The structure of persons' names has varied across time and geography. In somesocieties, individuals have been mononymous, receiving only a single name.Alulim, first king ofSumer, is one of the earliest names known;Narmer, anancient Egyptianpharaoh, is another. In addition, Biblical names likeAdam,Eve,Moses, orAbraham, were typically mononymous, as were names in the surrounding cultures of theFertile Crescent.[4] Ancient Greeknames likeHeracles,Homer,Plato,Socrates, andAristotle, also follow the pattern, withepithets(similar to second names) only used subsequently by historians to distinguish between individuals with the same name, as in the case ofZeno the StoicandZeno of Elea; likewise,patronymicsor other biographic details (such ascityof origin, or another place name or occupation the individual was associated with) were used to specify whom one was talking about, but these details were not considered part of the name.[5] A departure from this custom occurred, for example, among theRomans, who by theRepublicanperiod and throughout theImperialperiodused multiple names: a male citizen's name comprised three parts (this was mostly typical of the upper class, while others would usually have only two names):praenomen(given name),nomen(clan name) andcognomen(family line within the clan) – thenomenandcognomenwere almost always hereditary.[6]Famous ancient Romans who today are usually referred to by mononym includeCicero(Marcus Tullius Cicero) andTerence(Publius Terentius Afer).Roman emperors, for exampleAugustus,Caligula, andNero, are also often referred to in English by mononym. Mononyms in other ancient cultures includeHannibal, theCelticqueenBoudica, and theNumidiankingJugurtha. During theearly Middle Ages, mononymity slowly declined, with northern and easternEuropekeeping the tradition longer than the south. TheDutch Renaissancescholar and theologianErasmusis a late example of mononymity; though sometimes referred to as "Desiderius Erasmus" or "Erasmus of Rotterdam", he was christened only as "Erasmus", after themartyrErasmus of Formiae.[7] Composers in thears novaandars subtiliorstyles of latemedieval musicwere often known mononymously—potentially because their names weresobriquets—such asBorlet,Egardus,Egidius,Grimace,Solage, andTrebor.[8] Naming practices ofindigenous peoples of the Americasare highly variable, with one individual often bearing more than one name over a lifetime. In European and American histories, prominent Native Americans are usually mononymous, using a name that was frequently garbled and simplified in translation. 
For example, the Aztec emperor whose name was preserved inNahuatldocuments asMotecuhzoma Xocoyotzinwas called "Montezuma" in subsequent histories. In current histories he is often namedMoctezuma II, using the European custom of assigningregnal numbersto hereditary heads of state. Native Americans from the 15th through 19th centuries, whose names are often thinly documented in written sources, are still commonly referenced with a mononym. Examples includeAnacaona(Haiti, 1464–1504),Agüeybaná(Puerto Rico, died 1510),Diriangén(Nicaragua, died 1523),Urracá(Panama, died 1531),Guamá(Cuba, died 1532),Atahualpa(Peru, 1497–1533),Lempira(Honduras, died 1537),Lautaro(Chile, 1534–1557),Tamanaco(Venezuela, died 1573),Pocahontas(United States, 1595–1617),Auoindaon(Canada, fl. 1623),Cangapol(Argentina, fl. 1735), andTecumseh(United States, 1768–1813). Prominent Native Americans having a parent of European descent often received a European-style polynym in addition to a name or names from their indigenous community. The name of the Dutch-Seneca diplomatCornplanteris a translation of aSeneca-languagemononym (Kaintwakon, roughly "corn-planter"). He was also called "John Abeel" after hisDutchfather. His later descendants, includingJesse Cornplanter, used "Cornplanter" as a surname instead of "Abeel". Some French authors have shown a preference for mononyms. In the 17th century, the dramatist and actor Jean-Baptiste Poquelin (1622–73) took the mononym stage name Molière.[9] In the 18th century, François-Marie Arouet (1694–1778) adopted the mononymVoltaire, for both literary and personal use, in 1718 after his imprisonment in Paris'Bastille, to mark a break with his past. The new name combined several features. It was ananagramfor aLatinizedversion (where "u" become "v", and "j" becomes "i") of his familysurname, "Arouet, l[e] j[eune]" ("Arouet, the young"); it reversed the syllables of the name of the town his father came from, Airvault; and it has implications of speed and daring through similarity to French expressions such asvoltige,volte-faceandvolatile. "Arouet" would not have served the purpose, given that name's associations with "roué" and with an expression that meant "for thrashing".[10] The 19th-century French authorMarie-Henri Beyle(1783–1842) used manypen names, most famously the mononym Stendhal, adapted from the name of the littlePrussiantown ofStendal, birthplace of the German art historianJohann Joachim Winckelmann, whom Stendhal admired.[11] Nadar[12](Gaspard-Félix Tournachon, 1820–1910) was an early French photographer. In the 20th century,Sidonie-Gabrielle Colette(1873–1954, author ofGigi, 1945), used her actual surname as her mononym pen name, Colette.[13] In the 17th and 18th centuries, most Italian castrato singers used mononyms as stage names (e.g.Caffarelli,Farinelli). The German writer, mining engineer, and philosopher Georg Friedrich Philipp Freiherr von Hardenberg (1772–1801) became famous asNovalis.[14] The 18th-century Italian painterBernardo Bellotto, who is now ranked as an important and original painter in his own right, traded on the mononymous pseudonym of his uncle and teacher, Antonio Canal (Canaletto), in those countries—Poland and Germany—where his famous uncle was not active, calling himself likewise "Canaletto". 
Bellotto remains commonly known as "Canaletto" in those countries to this day.[15] The 19th-century Dutch writer Eduard Douwes Dekker (1820–87), better known by his mononymous pen nameMultatuli[16](from theLatinmulta tuli, "I have suffered [orborne] many things"), became famous for the satirical novel,Max Havelaar(1860), in which he denounced the abuses ofcolonialismin theDutch East Indies(nowIndonesia). The 20th-century British authorHector Hugh Munro(1870–1916) became known by hispen name, Saki. In 20th-century Poland, thetheater-of-the-absurdplaywright, novelist,painter, photographer, andphilosopherStanisław Ignacy Witkiewicz(1885–1939) after 1925 often used the mononymous pseudonym Witkacy, aconflationof his surname (Witkiewicz) andmiddle name(Ignacy).[17] Monarchsand otherroyalty, for exampleNapoleon, have traditionally availed themselves of theprivilegeof using a mononym, modified when necessary by anordinalorepithet(e.g., QueenElizabeth IIorCharles the Great). This is not always the case: KingCarl XVI Gustafof Sweden has two names. While many European royals have formally sportedlong chainsof names, in practice they have tended to use only one or two and not to usesurnames.[c] In Japan, the emperor and his family have no surname, only a given name, such asHirohito, which in practice in Japanese is rarely used: out of respect and as a measure of politeness, Japanese prefer to say "the Emperor" or "the Crown Prince".[19] Roman Catholicpopeshave traditionally adopted a single,regnal nameupon theirelection.John Paul Ibroke with this tradition – adopting a double name honoring his two predecessors[20]– and his successorJohn Paul IIfollowed suit, butBenedict XVIreverted to the use of a single name. Surnames were introduced inTurkeyonly afterWorld War I, by the country's first president,Mustafa Kemal Atatürk, as part of his Westernization and modernization programs.[21] SomeNorth American Indigenouspeople continue their nations' traditional naming practices, which may include the use of single names. InCanada, where government policy often included the imposition of Western-style names, one of the recommendations of theTruth and Reconciliation Commission of Canadawas for all provinces and territories to waive fees to allow Indigenous people to legally assume traditional names, including mononyms.[22]InOntario, for example, it is now legally possible to change to a single name or register one at birth, for members ofIndigenous nationswhich have a tradition of single names.[23] In modern times, in countries that have long been part of theEast Asian cultural sphere(Japan, the Koreas, Vietnam, and China), mononyms are rare. An exception pertains to theEmperor of Japan. In the past, mononyms were common inIndonesia, especially inJavanese names.[24]Some younger people may have them, but this practice is becoming rarer, since mononyms are no longer allowed for newborns since 2022 (seeNaming law § Indonesia).[25] Single names still also occur inTibet.[2]MostAfghansalso have no surname.[26] InBhutan, most people use either only one name or a combination of two personal names typically given by a Buddhist monk. There are no inherited family names; instead, Bhutanese differentiate themselves with nicknames or prefixes.[27] In theNear East'sArabworld, the Syrian poet Ali Ahmad Said Esber (born 1930) at age 17 adopted the mononym pseudonym,Adunis, sometimes also spelled "Adonis". 
A perennial contender for the Nobel Prize in Literature, he has been described as the greatest living poet of the Arab world.[28] In the West, mononymity, as well as its use by royals in conjunction with titles, has been primarily used or given to famous people such as prominent writers,artists,entertainers, musicians andathletes.[d] ThecomedianandillusionistTeller, the silent half of the duoPenn & Teller, legally changed his original polynym, Raymond Joseph Teller, to the mononym "Teller" and possesses aUnited States passportissued in that single name.[30][31]Similarly,Kanye Westlegally changed his name to the mononym "Ye".[32] In Brazil, it is very common for footballers to go by one name for simplicity and as a personal brand. Examples includePelé,RonaldoandKaká. Brazil's PresidentLuiz Inácio Lula da Silvais known as "Lula", a nickname he officially added to his full name. Such mononyms, which take their origin ingiven names,surnamesornicknames, are often used becausePortuguese namestend to be rather long. In Australia, where nicknames and short names are extremely common, individuals with long names of European origin (such as formerPremier of New South WalesGladys Berejiklian, who is of Armenian descent, and soccer managerAnge Postecoglou, who was born in Greece) will often be referred to by a mononym, even in news headlines. Similarly, Greek basketball playerGiannis Antetokounmpois often referred to outside Greece as just "Giannis" due to the length of his last name. Western computer systems do not always support monynyms, most still requiring a given name and a surname. Some companies get around this by entering the mononym as both the given name and the surname. Mononyms are commonly used by many association footballers. A large number of Brazilian footballers use mononyms, such asAlisson,Kaká,Neymar,RonaldoandRonaldinho. Players from other countries where Portuguese is spoken, such as Portugal itself and Lusophone countries in Africa, also occasionally use mononyms, such asBruma,Otávio,Pepe,TotiandVitinhafrom Portugal. Australian managerAnge Postecoglouand Spanish managerPep Guardiolaare commonly known as "Ange" and "Pep", even in news headlines.
https://en.wikipedia.org/wiki/Mononymous_persons
Inmathematics,integer factorizationis the decomposition of apositive integerinto aproductof integers. Every positive integer greater than 1 is either the product of two or more integerfactorsgreater than 1, in which case it is acomposite number, or it is not, in which case it is aprime number. For example,15is a composite number because15 = 3 · 5, but7is a prime number because it cannot be decomposed in this way. If one of the factors is composite, it can in turn be written as a product of smaller factors, for example60 = 3 · 20 = 3 · (5 · 4). Continuing this process until every factor is prime is calledprime factorization; the result is always unique up to the order of the factors by theprime factorization theorem. To factorize a small integernusing mental or pen-and-paper arithmetic, the simplest method istrial division: checking if the number is divisible by prime numbers2,3,5, and so on, up to thesquare rootofn. For larger numbers, especially when using a computer, various more sophisticated factorization algorithms are more efficient. A prime factorization algorithm typically involvestesting whether each factor is primeeach time a factor is found. When the numbers are sufficiently large, no efficient non-quantuminteger factorizationalgorithmis known. However, it has not been proven that such an algorithm does not exist. The presumeddifficultyof this problem is important for the algorithms used incryptographysuch asRSA public-key encryptionand theRSA digital signature.[1]Many areas ofmathematicsandcomputer sciencehave been brought to bear on this problem, includingelliptic curves,algebraic number theory, and quantum computing. Not all numbers of a given length are equally hard to factor. The hardest instances of these problems (for currently known techniques) aresemiprimes, the product of two prime numbers. When they are both large, for instance more than two thousandbitslong, randomly chosen, and about the same size (but not too close, for example, to avoid efficient factorization byFermat's factorization method), even the fastest prime factorization algorithms on the fastest classical computers can take enough time to make the search impractical; that is, as the number of digits of the integer being factored increases, the number of operations required to perform the factorization on any classical computer increases drastically. Many cryptographic protocols are based on the presumed difficulty of factoring large composite integers or a related problem –for example, theRSA problem. An algorithm that efficiently factors an arbitrary integer would renderRSA-basedpublic-keycryptography insecure. By thefundamental theorem of arithmetic, every positive integer has a uniqueprime factorization. (By convention, 1 is theempty product.)Testingwhether the integer is prime can be done inpolynomial time, for example, by theAKS primality test. If composite, however, the polynomial time tests give no insight into how to obtain the factors. Given a general algorithm for integer factorization, any integer can be factored into its constituentprime factorsby repeated application of this algorithm. The situation is more complicated with special-purpose factorization algorithms, whose benefits may not be realized as well or even at all with the factors produced during decomposition. For example, ifn= 171 ×p×qwherep<qare very large primes,trial divisionwill quickly produce the factors 3 and 19 but will takepdivisions to find the next factor. 
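A minimal sketch (not from the text) of the trial-division approach just described, checking candidate divisors up to the square root of n:

# Sketch of trial division: returns the prime factorization as (prime, exponent) pairs.
def trial_division(n):
    factors = []
    d = 2
    while d * d <= n:
        e = 0
        while n % d == 0:
            n //= d
            e += 1
        if e:
            factors.append((d, e))
        d += 1 if d == 2 else 2      # after 2, test odd candidates only
    if n > 1:
        factors.append((n, 1))       # whatever remains is prime
    return factors

print(trial_division(60))   # [(2, 2), (3, 1), (5, 1)]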
As a contrasting example, ifnis the product of the primes13729,1372933, and18848997161, where13729 × 1372933 = 18848997157, Fermat's factorization method will begin with⌈√n⌉ = 18848997159which immediately yieldsb=√a2−n=√4= 2and hence the factorsa−b= 18848997157anda+b= 18848997161. While these are easily recognized as composite and prime respectively, Fermat's method will take much longer to factor the composite number because the starting value of⌈√18848997157⌉ = 137292forais a factor of 10 from1372933. Among theb-bit numbers, the most difficult to factor in practice using existing algorithms are thosesemiprimeswhose factors are of similar size. For this reason, these are the integers used in cryptographic applications. In 2019, a 240-digit (795-bit) number (RSA-240) was factored by a team of researchers includingPaul Zimmermann, utilizing approximately 900 core-years of computing power.[2]These researchers estimated that a 1024-bit RSA modulus would take about 500 times as long.[3] The largest such semiprime yet factored wasRSA-250, an 829-bit number with 250 decimal digits, in February 2020. The total computation time was roughly 2700 core-years of computing using IntelXeon Gold6130 at 2.1 GHz. Like all recent factorization records, this factorization was completed with a highly optimized implementation of thegeneral number field sieverun on hundreds of machines. Noalgorithmhas been published that can factor all integers inpolynomial time, that is, that can factor ab-bit numbernin timeO(bk)for some constantk. Neither the existence nor non-existence of such algorithms has been proved, but it is generally suspected that they do not exist.[4][5] There are published algorithms that are faster thanO((1 +ε)b)for all positiveε, that is,sub-exponential. As of 2022[update], the algorithm with best theoretical asymptotic running time is thegeneral number field sieve(GNFS), first published in 1993,[6]running on ab-bit numbernin time: For current computers, GNFS is the best published algorithm for largen(more than about 400 bits). For aquantum computer, however,Peter Shordiscovered an algorithm in 1994 that solves it in polynomial time.Shor's algorithmtakes onlyO(b3)time andO(b)space onb-bit number inputs. In 2001, Shor's algorithm was implemented for the first time, by usingNMRtechniques on molecules that provide seven qubits.[7] In order to talk aboutcomplexity classessuch as P, NP, and co-NP, the problem has to be stated as adecision problem. Decision problem(Integer factorization)—For every natural numbersn{\displaystyle n}andk{\displaystyle k}, doesnhave a factor smaller thankbesides 1? It is known to be in bothNPandco-NP, meaning that both "yes" and "no" answers can be verified in polynomial time. An answer of "yes" can be certified by exhibiting a factorizationn=d(⁠n/d⁠)withd≤k. An answer of "no" can be certified by exhibiting the factorization ofninto distinct primes, all larger thank; one can verify their primality using theAKS primality test, and then multiply them to obtainn. Thefundamental theorem of arithmeticguarantees that there is only one possible string of increasing primes that will be accepted, which shows that the problem is in bothUPand co-UP.[8]It is known to be inBQPbecause of Shor's algorithm. The problem is suspected to be outside all three of the complexity classes P, NP-complete,[9]andco-NP-complete. It is therefore a candidate for theNP-intermediatecomplexity class. In contrast, the decision problem "Isna composite number?" 
(or equivalently: "Isna prime number?") appears to be much easier than the problem of specifying factors ofn. The composite/prime problem can be solved in polynomial time (in the numberbof digits ofn) with theAKS primality test. In addition, there are severalprobabilistic algorithmsthat can test primality very quickly in practice if one is willing to accept a vanishingly small possibility of error. The ease ofprimality testingis a crucial part of theRSAalgorithm, as it is necessary to find large prime numbers to start with. A special-purpose factoring algorithm's running time depends on the properties of the number to be factored or on one of its unknown factors: size, special form, etc. The parameters which determine the running time vary among algorithms. An important subclass of special-purpose factoring algorithms is theCategory 1orFirst Categoryalgorithms, whose running time depends on the size of smallest prime factor. Given an integer of unknown form, these methods are usually applied before general-purpose methods to remove small factors.[10]For example, naivetrial divisionis a Category 1 algorithm. A general-purpose factoring algorithm, also known as aCategory 2,Second Category, orKraitchikfamilyalgorithm,[10]has a running time which depends solely on the size of the integer to be factored. This is the type of algorithm used to factorRSA numbers. Most general-purpose factoring algorithms are based on thecongruence of squaresmethod. In number theory, there are many integer factoring algorithms that heuristically have expectedrunning time inlittle-oandL-notation. Some examples of those algorithms are theelliptic curve methodand thequadratic sieve. Another such algorithm is theclass group relations methodproposed by Schnorr,[11]Seysen,[12]and Lenstra,[13]which they proved only assuming the unprovedgeneralized Riemann hypothesis. The Schnorr–Seysen–Lenstra probabilistic algorithm has been rigorously proven by Lenstra and Pomerance[14]to have expected running timeLn[⁠1/2⁠, 1+o(1)]by replacing the GRH assumption with the use of multipliers. The algorithm uses theclass groupof positive binaryquadratic formsofdiscriminantΔdenoted byGΔ.GΔis the set of triples of integers(a,b,c)in which those integers are relative prime. Given an integernthat will be factored, wherenis an odd positive integer greater than a certain constant. In this factoring algorithm the discriminantΔis chosen as a multiple ofn,Δ = −dn, wheredis some positive multiplier. The algorithm expects that for onedthere exist enoughsmoothforms inGΔ. Lenstra and Pomerance show that the choice ofdcan be restricted to a small set to guarantee the smoothness result. Denote byPΔthe set of all primesqwithKronecker symbol(⁠Δ/q⁠)= 1. By constructing a set ofgeneratorsofGΔand prime formsfqofGΔwithqinPΔa sequence of relations between the set of generators andfqare produced. The size ofqcan be bounded byc0(log|Δ|)2for some constantc0. The relation that will be used is a relation between the product of powers that is equal to theneutral elementofGΔ. These relations will be used to construct a so-called ambiguous form ofGΔ, which is an element ofGΔof order dividing 2. By calculating the corresponding factorization ofΔand by taking agcd, this ambiguous form provides the complete prime factorization ofn. This algorithm has these main steps: Letnbe the number to be factored. To obtain an algorithm for factoring any positive integer, it is necessary to add a few steps to this algorithm such as trial division, and theJacobi sum test. 
The algorithm as stated is a probabilistic algorithm as it makes random choices. Its expected running time is at most $L_n[\tfrac{1}{2}, 1+o(1)]$.[14]
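Returning to the earlier contrast with trial division, the following sketch (not from the text) implements Fermat's factorization method and applies it to the semiprime split described in the example above: search upward from the ceiling of the square root of n until a^2 − n is a perfect square b^2, so that n = (a − b)(a + b).

# Sketch of Fermat's factorization method (fast when the two factors are close).
import math

def fermat_factor(n):
    a = math.isqrt(n)
    if a * a < n:
        a += 1                       # a = ceil(sqrt(n))
    while True:
        b2 = a * a - n
        b = math.isqrt(b2)
        if b * b == b2:
            return a - b, a + b
        a += 1

print(fermat_factor(18848997157 * 18848997161))   # the factors from the example above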
https://en.wikipedia.org/wiki/Integer_factorization
Non-standard positional numeral systemshere designatesnumeral systemsthat may loosely be described aspositional systems, but that do not entirely comply with the following description of standard positional systems: This article summarizes facts on some non-standard positional numeral systems. In most cases, the polynomial form in the description of standard systems still applies. Some historical numeral systems may be described as non-standard positional numeral systems. E.g., thesexagesimalBabylonian notationand the Chineserod numerals, which can be classified as standard systems of base 60 and 10, respectively, counting the space representing zero as a numeral, can also be classified as non-standard systems, more specifically, mixed-base systems with unary components, considering the primitive repeatedglyphsmaking up the numerals. However, most of the non-standard systems listed below have never been intended for general use, but were devised by mathematicians or engineers for special academic or technical use. Abijective numeral systemwith basebusesbdifferent numerals to represent all non-negative integers. However, the numerals have values 1, 2, 3, etc. up to and includingb, whereas zero is represented by an empty digit string. For example, it is possible to havedecimal without a zero. Unary is the bijective numeral system with baseb= 1. In unary, one numeral is used to represent all positive integers. The value of the digit stringpqrsgiven by the polynomial form can be simplified intop+q+r+ssincebn= 1 for alln. Non-standard features of this system include: In some systems, while the base is a positive integer, negative digits are allowed.Non-adjacent formis a particular system where the base isb= 2. In thebalanced ternarysystem, the base isb= 3, and the numerals have the values −1, 0 and +1 (rather than 0, 1 and 2 as in the standardternary system, or 1, 2 and 3 as in the bijective ternary system). The reflected binary code, also known as the Gray code, is closely related tobinary numbers, but somebitsare inverted, depending on the parity of the higher order bits. Cistercian numeralsare a decimal positional numeral system, but the positions are not aligned as in common decimal notation; instead, they are attached to the top-right, top-left, bottom-right and bottom-left of a vertical stem, respectively, and thus limited to four in number (so only integers from 0 to 9999 can be represented). The system has close similarities to standard positional numeral systems, but may also be compared to e.g.Greek numerals, where different sets of symbols (in fact,Greek letters) are used for the ones, tens, hundreds and thousands, likewise giving an upper limit on the numbers that can be represented. Similarly, in computers, e.g. thelong integerformat is a standard binary system (apart from the sign bit), but it has a limited number of positions, and the physical locations for the representations of the digits may not be aligned. In an analogodometerand in anabacus, the decimal digits are aligned but limited in number. A few positional systems have been suggested in which the basebis not a positive integer. Negative-base systems includenegabinary,negaternaryandnegadecimal, with bases −2, −3, and −10 respectively; in base −bthe number of different numerals used isb. Due to the properties of negative numbers raised to powers, all integers, positive and negative, can be represented without a sign. 
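A brief sketch (not from the article) of conversion to base −2 (negabinary), illustrating how a negative-base system represents every integer, positive or negative, with digits 0 and 1 and no sign:

# Sketch: negabinary digits of an integer, keeping each remainder in {0, 1}.
def to_negabinary(n):
    if n == 0:
        return "0"
    digits = []
    while n != 0:
        n, r = divmod(n, -2)
        if r < 0:                    # adjust so the remainder stays in {0, 1}
            n += 1
            r += 2
        digits.append(str(r))
    return "".join(reversed(digits))

print(to_negabinary(6), to_negabinary(-6))   # '11010' and '1110'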
In a purely imaginary basebisystem, wherebis an integer larger than 1 anditheimaginary unit, the standard set of digits consists of theb2numbers from 0 tob2− 1. It can be generalized to other complex bases, giving rise to thecomplex-base systems. In non-integer bases, the number of different numerals used clearly cannot beb. Instead, the numerals 0 to⌊b⌋{\displaystyle \lfloor b\rfloor }are used. For example,golden ratio base(phinary), uses the 2 different numerals 0 and 1. It is sometimes convenient to consider positional numeral systems where the weights associated with the positions do not form ageometric sequence1,b,b2,b3, etc., starting from the least significant position, as given in the polynomial form. Examples include: Sequences where each weight isnotan integer multiple of the previous weight may also be used, but then every integer may not have a unique representation. For example,Fibonacci codinguses the digits 0 and 1, weighted according to theFibonacci sequence(1, 2, 3, 5, 8, ...); a unique representation of all non-negative integers may be ensured by forbidding consecutive 1s.Binary-coded decimal(BCD) are mixed base systems where bits (binary digits) are used to express decimal digits. E.g., in 1001 0011, each group of four bits may represent a decimal digit (in this example 9 and 3, so the eight bits combined represent decimal 93). The weights associated with these 8 positions are 80, 40, 20, 10, 8, 4, 2 and 1. Uniqueness is ensured by requiring that, in each group of four bits, if the first bit is 1, the next two must be 00. Asymmetric numeral systems are systems used incomputer sciencewhere each digit can have different bases, usually non-integer. In these, not only are the bases of a given digit different, they can be also nonuniform and altered in an asymmetric way to encode information more efficiently. They are optimized for chosen non-uniform probability distributions of symbols, using on average approximatelyShannon entropybits per symbol.[1]
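Fibonacci coding lends itself to a short illustration: the greedy ("Zeckendorf") procedure below always takes the largest Fibonacci weight that still fits, which automatically avoids consecutive 1s and so yields the unique representation discussed above. This is a minimal sketch; the function names are illustrative.

```python
def zeckendorf(n: int) -> str:
    """Greedy Zeckendorf digits of a non-negative integer, most significant
    first, over the Fibonacci weights 1, 2, 3, 5, 8, ...; the greedy choice
    guarantees that no two adjacent digits are both 1."""
    if n == 0:
        return "0"
    fibs = [1, 2]
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    fibs.pop()                      # the last weight exceeds n
    digits = []
    for f in reversed(fibs):
        if f <= n:
            digits.append("1")
            n -= f
        else:
            digits.append("0")
    return "".join(digits)

def zeckendorf_value(digits: str) -> int:
    """Evaluate a digit string over the Fibonacci weights 1, 2, 3, 5, 8, ..."""
    fibs = [1, 2][:len(digits)]
    while len(fibs) < len(digits):
        fibs.append(fibs[-1] + fibs[-2])
    return sum(f for f, d in zip(reversed(fibs), digits) if d == "1")

for k in range(1, 13):
    s = zeckendorf(k)
    assert "11" not in s and zeckendorf_value(s) == k
    print(k, s)
```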
https://en.wikipedia.org/wiki/Non-standard_positional_numeral_systems
In mathematics, a half-exponential function is a functional square root of an exponential function. That is, a function f such that f composed with itself results in an exponential function:[1][2] f(f(x)) = ab^x, for some constants a and b. Hellmuth Kneser first proposed a holomorphic construction of the solution of f(f(x)) = e^x in 1950. It is closely related to the problem of extending tetration to non-integer values; the value of ^(1/2)a can be understood as the value of f(1), where f(x) satisfies f(f(x)) = a^x. Example values from Kneser's solution of f(f(x)) = e^x include f(0) ≈ 0.49856 and f(1) ≈ 1.64635. If a function f is defined using the standard arithmetic operations, exponentials, logarithms, and real-valued constants, then f(f(x)) is either subexponential or superexponential.[3] Thus, a Hardy L-function cannot be half-exponential. Any exponential function can be written as the self-composition f(f(x)) for infinitely many possible choices of f. In particular, for every A in the open interval (0, 1) and for every continuous strictly increasing function g from [0, A] onto [A, 1], there is an extension of this function to a continuous strictly increasing function f on the real numbers such that f(f(x)) = exp x.[4] The function f is the unique solution to the functional equation

f(x) = g(x)            if x ∈ [0, A],
       exp(g⁻¹(x))     if x ∈ (A, 1],
       exp(f(ln x))    if x ∈ (1, ∞),
       ln(f(exp x))    if x ∈ (−∞, 0).

A simple example, which leads to f having a continuous first derivative f′ everywhere, and also causes f″ ≥ 0 everywhere (i.e. f(x) is concave-up, and f′(x) increasing, for all real x), is to take A = 1/2 and g(x) = x + 1/2, giving

f(x) = log_e(e^x + 1/2)   if x ≤ −log_e 2,
       e^x − 1/2          if −log_e 2 ≤ x ≤ 0,
       x + 1/2            if 0 ≤ x ≤ 1/2,
       e^(x − 1/2)        if 1/2 ≤ x ≤ 1,
       x√e                if 1 ≤ x ≤ √e,
       e^(x/√e)           if √e ≤ x ≤ e,
       x^√e               if e ≤ x ≤ e^√e,
       e^(x^(1/√e))       if e^√e ≤ x ≤ e^e, …

Crone and Neuendorffer claim that there is no semi-exponential function f(x) that is both (a) analytic and (b) always maps reals to reals. The piecewise solution above achieves goal (b) but not (a).
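The piecewise example above can be implemented directly from the functional equation; a minimal Python sketch (with A = 1/2 and g(x) = x + 1/2, as in the text) is shown below, together with a numerical check that f(f(x)) reproduces e^x.

```python
import math

def f(x: float) -> float:
    """Piecewise half-exponential built from A = 1/2, g(x) = x + 1/2:
    f = g on [0, 1/2], f = exp(g^{-1}) on (1/2, 1], and the functional
    equation f(x) = exp(f(ln x)) for x > 1, f(x) = ln(f(exp x)) for x < 0."""
    if 0 <= x <= 0.5:
        return x + 0.5                 # g(x)
    if 0.5 < x <= 1:
        return math.exp(x - 0.5)       # exp(g^{-1}(x))
    if x > 1:
        return math.exp(f(math.log(x)))
    return math.log(f(math.exp(x)))    # x < 0

# Check that f composed with itself agrees with exp on a few sample points.
for x in (-2.0, -0.3, 0.0, 0.4, 1.0, 2.0, 3.0):
    print(f"x = {x:5.2f}   f(f(x)) = {f(f(x)):10.6f}   e^x = {math.exp(x):10.6f}")
```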
Achieving goal (a) is possible by writing e^x as a Taylor series based at a fixpoint Q (there are infinitely many such fixpoints, but they are all non-real complex, for example Q = 0.3181315 + 1.3372357i), making Q also a fixpoint of f, that is f(Q) = e^Q = Q, and then computing the Maclaurin series coefficients of f(x − Q) one by one. This results in Kneser's construction mentioned above. Half-exponential functions are used in computational complexity theory for growth rates "intermediate" between polynomial and exponential.[2] A function f grows at least as quickly as some half-exponential function (its composition with itself grows exponentially) if it is non-decreasing and f⁻¹(x^C) = o(log x) for every C > 0.[5]
https://en.wikipedia.org/wiki/Half-exponential_function
Photon transport theories inPhysics,Medicine, andStatistics(such as theMonte Carlo method), are commonly used to modellight propagation in tissue. The responses to apencil beamincident on a scattering medium are referred to asGreen's functionsorimpulse responses. Photon transport methods can be directly used to compute broad-beam responses by distributing photons over the cross section of the beam. However,convolutioncan be used in certain cases to improve computational efficiency. In order for convolution to be used to calculate a broad-beam response, a system must betime invariant,linear, andtranslation invariant. Time invariance implies that a photon beam delayed by a given time produces a response shifted by the same delay. Linearity indicates that a given response will increase by the same amount if the input is scaled and obeys the property ofsuperposition. Translational invariance means that if a beam is shifted to a new location on the tissue surface, its response is also shifted in the same direction by the same distance. Here, only spatial convolution is considered. Responses from photon transport methods can be physical quantities such asabsorption,fluence,reflectance, ortransmittance. Given a specific physical quantity,G(x,y,z), from a pencil beam in Cartesian space and a collimated light source with beam profileS(x,y), a broad-beam response can be calculated using the following 2-D convolution formula: Similar to 1-D convolution, 2-D convolution is commutative betweenGandSwith a change of variablesx″=x−x′{\displaystyle x''=x-x'\,}andy″=y−y′{\displaystyle y''=y-y'\,}: Because the broad-beam responseC(x,y,z){\displaystyle C(x,y,z)\,}has cylindrical symmetry, its convolution integrals can be rewritten as: wherer′=x′2+y′2{\displaystyle r'={\sqrt {x'^{2}+y'^{2}}}}. Because the inner integration of Equation 4 is independent ofz, it only needs to be calculated once for all depths. Thus this form of the broad-beam response is more computationally advantageous. For aGaussian beam, the intensity profile is given by Here,Rdenotes the1e2{\displaystyle {\tfrac {1}{e^{2}}}\,}radius of the beam, andS0denotes the intensity at the center of the beam.S0is related to the total powerP0by Substituting Eq. 5 into Eq. 4, we obtain whereI0is the zeroth-ordermodified Bessel function. For atop-hat beamof radiusR, the source function becomes whereS0denotes the intensity inside the beam.S0is related to the total beam powerP0by Substituting Eq. 8 into Eq. 4, we obtain where First photon-tissue interactions always occur on the z axis and hence contribute to the specific absorption or related physical quantities as aDirac delta function. Errors will result if absorption due to the first interactions is not recorded separately from absorption due to subsequent interactions. The total impulse response can be expressed in two parts: where the first term results from the first interactions and the second, from subsequent interactions. For a Gaussian beam, we have For a top-hat beam, we have For a top-hat beam, the upper integration limits may be bounded byrmax, such thatr≤rmax−R. Thus, the limited grid coverage in therdirection does not affect the convolution. To convolve reliably for physical quantities atrin response to a top-hat beam, we must ensure thatrmaxin photon transport methods is large enough thatr≤rmax−Rholds. For a Gaussian beam, no simple upper integration limits exist because it theoretically extends to infinity. 
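The cylindrically symmetric convolution for a Gaussian beam can be sketched as a one-dimensional numerical integral over r′. In the sketch below, the angular integral of the Gaussian source has been carried out analytically, which produces the zeroth-order modified Bessel function I0 mentioned in the text; the exponentially scaled Bessel function i0e is used to avoid overflow, and S0 = 2P0/(πR²) is the standard relation between centre intensity and total power for a Gaussian profile of 1/e² radius R. The pencil-beam response G(r′, z) used here is a made-up placeholder purely for illustration, not a real tissue model, and all names are ours.

```python
import numpy as np
from scipy.special import i0e        # i0e(x) = exp(-|x|) * I0(x), overflow-safe
from scipy.integrate import quad

def gaussian_beam_response(G, r, z, R, P0):
    """Broad-beam response C(r, z) for a collimated Gaussian beam of 1/e^2
    radius R and total power P0, obtained by convolving a pencil-beam
    response G(r', z) in the cylindrically symmetric form of Eq. 4."""
    S0 = 2.0 * P0 / (np.pi * R**2)    # centre intensity for a Gaussian profile
    def integrand(rp):
        arg = 4.0 * r * rp / R**2
        # exp(-2(r^2 + r'^2)/R^2) * I0(4 r r'/R^2), rewritten with i0e for stability
        kernel = np.exp(-2.0 * (r - rp)**2 / R**2) * i0e(arg)
        return G(rp, z) * kernel * rp
    val, _ = quad(integrand, 0.0, np.inf, limit=200)
    return 2.0 * np.pi * S0 * val

# Hypothetical pencil-beam fluence, purely for illustration.
G = lambda rp, z: np.exp(-3.0 * z) * np.exp(-rp**2) / np.pi

for r in (0.0, 0.5, 1.0, 2.0):
    print(r, gaussian_beam_response(G, r, z=0.1, R=1.0, P0=1.0))
```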
At r >> R, a Gaussian beam and a top-hat beam of the same R and S0 have comparable convolution results. Therefore, r ≤ rmax − R can be used approximately for Gaussian beams as well. There are two common methods used to implement discrete convolution: the definition of convolution, and fast Fourier transformation (FFT and IFFT) according to the convolution theorem. To calculate the optical broad-beam response, the impulse response of a pencil beam is convolved with the beam function. As shown by Equation 4, this is a 2-D convolution. To calculate the response of a light beam on a plane perpendicular to the z axis, the beam function (represented by a b × b matrix) is convolved with the impulse response on that plane (represented by an a × a matrix). Normally a is greater than b. The calculation efficiency of these two methods depends largely on b, the size of the light beam. In direct convolution, the solution matrix is of the size (a + b − 1) × (a + b − 1). The calculation of each of these elements (except those near boundaries) includes b × b multiplications and b × b − 1 additions, so the time complexity is O[(a + b)²b²]. Using the FFT method, the major steps are the FFT and IFFT of (a + b − 1) × (a + b − 1) matrices, so the time complexity is O[(a + b)² log(a + b)]. Comparing O[(a + b)²b²] and O[(a + b)² log(a + b)], it is apparent that direct convolution will be faster if b is much smaller than a, but the FFT method will be faster if b is relatively large.
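The trade-off between direct convolution and the FFT route can be observed directly with off-the-shelf routines; the sketch below (array sizes chosen arbitrarily) compares scipy's direct and FFT-based 2-D convolutions and confirms they agree to rounding error.

```python
import time
import numpy as np
from scipy.signal import convolve2d, fftconvolve

a, b = 256, 64                        # impulse-response grid a×a, beam grid b×b
impulse = np.random.rand(a, a)        # pencil-beam response on one z-plane
beam = np.random.rand(b, b)           # beam profile

t0 = time.perf_counter()
direct = convolve2d(impulse, beam, mode="full")    # (a+b-1) x (a+b-1) result
t1 = time.perf_counter()
via_fft = fftconvolve(impulse, beam, mode="full")
t2 = time.perf_counter()

print("max abs difference:", np.max(np.abs(direct - via_fft)))
print(f"direct: {t1 - t0:.3f} s   FFT: {t2 - t1:.3f} s")
```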
https://en.wikipedia.org/wiki/Convolution_for_optical_broad-beam_responses_in_scattering_media
This generational list of Intel processors attempts to present all of Intel's processors from the 4-bit 4004 (1971) to the present high-end offerings. Concise technical data is given for each product. An iterative refresh of Raptor Lake-S desktop processors, called the 14th generation of Intel Core, was launched on October 17, 2023.[1][2] CPUs in bold below feature ECC memory support when paired with a motherboard based on the W680 chipset, according to each respective Intel Ark product page. An iterative refresh of Raptor Lake-HX mobile processors, also called the 14th generation of Intel Core, was launched on January 9, 2024.[3] All processors are listed in chronological order. The 4004 was the first commercially available microprocessor (a single-chip IC processor). The MCS-4 family comprises ICs with CPU, RAM, ROM (or PROM or EPROM), I/O ports, timers, and interrupts; later microcontroller families include the MCS-48, MCS-51, MCS-151, and MCS-251 families. Introduced in the third quarter of 1974, the 3000-family bit-slicing components used bipolar Schottky transistors. Each component implemented two bits of a processor function; packages could be interconnected to build a processor with any desired word length, giving a bus width of 2n bits of data/address (depending on the number n of slices used). Entries noted chronologically elsewhere in the list include the Pentium II Xeon, XScale (non-x86 architecture), the Pentium 4 (not 4EE, 4E, 4F), Itanium, P4-based Xeon, Itanium 2 (new non-x86 architectures), and Westmere. Not listed (yet) are several Broadwell-based CPU models.[20] Note: this list does not say that all processors that match these patterns are Broadwell-based or fit into this scheme, and the model numbers may have suffixes that are not shown here. Many Skylake-based processors are not yet listed in this section: mobile i3/i5/i7 processors (U, H, and M suffixes), embedded i3/i5/i7 processors (E suffix), and certain i7-67nn/i7-68nn/i7-69nn models.[21] Skylake-based "Core X-series" processors (certain i7-78nn and i9-79nn models) can be found under current models. Intel discontinued the use of part numbers such as 80486 in the marketing of mainstream x86-architecture processors with the introduction of the Pentium brand in 1993. However, numerical codes, in the 805xx range, continued to be assigned to these processors for internal and part numbering uses. The following is a list of such product codes in numerical order:
https://en.wikipedia.org/wiki/List_of_Intel_processors
Asock puppet,sock puppet account, or simplysockis a false online identity used for deceptive purposes.[1]The term originally referred to ahand puppet made from a sock. Sock puppets include online identities created to praise, defend, or support a person or organization,[2]to manipulate public opinion,[3]or to circumvent restrictions such as viewing a social media account that a user is blocked from. Sock puppets are unwelcome in many online communities and forums. The practice of writing pseudonymous self-reviews began before the Internet. WritersWalt WhitmanandAnthony Burgesswrote pseudonymous reviews of their own books,[4]as didBenjamin Franklin.[5] TheOxford English Dictionarydefines the term without reference to the internet, as "a person whose actions are controlled by another; a minion" with a 2000 citation fromU.S. News & World Report.[6] Wikipediahas had a long history of problems with sockpuppetry. On October 21, 2013, theWikimedia Foundation(WMF) condemned paid advocacy sockpuppeting on Wikipedia and, two days later on October 23, specifically bannedWiki-PR editing of Wikipedia.[7]In August and September 2015, the WMF uncovered another group of sockpuppets known asOrangemoody.[8] One reason for sockpuppeting is to circumvent a block, ban, or other form of sanction imposed on the person's original account.[9] Sockpuppets may be created during an online poll to increase the puppeteer's votes. A related usage is the creation of multiple identities, each supporting the puppeteer's views in an argument, attempting to position the puppeteer as representing majority opinion and sideline opposition voices. In the abstract theory ofsocial networksandreputation systems, this is known as aSybil attack.[10] A sockpuppet-like use of deceptive fake identities is used instealth marketing. The stealth marketer creates one or more pseudonymous accounts, each claiming to be a different enthusiastic supporter of the sponsor's product, book or ideology.[11] Astrawman sockpuppet(sometimes abbreviated asstrawpuppet) is afalse flagpseudonym created to make a particular point of view look foolish or unwholesome in order to generate negative sentiment against it. Strawman sockpuppets typically behave in an unintelligent, uninformed, orbigotedmanner, advancing "straw man" arguments that their puppeteers can easily refute. The intended effect is to discredit more rational arguments made for the same position.[12] Such sockpuppets behave in a similar manner toInternet trolls. A particular case is theconcern troll, a false flag pseudonym created by a user whose actual point of view is opposed to that of the sockpuppet. The concern troll posts in web forums devoted to its declared point of view and attempts to sway the group's actions or opinions while claiming to share their goals, but with professed "concerns". 
The goal is to sowfear, uncertainty and doubt(FUD) within the group.[citation needed] Some sources have used the termmeatpuppetas a synonym for sock puppet,[13][14][15]thoughmeatpuppetis more commonly accepted[by whom?]to be an account that is run by a person other than the puppeteer, yet used to accomplish the same goals as a typical sock puppet.[citation needed] A number of techniques have been developed to determine whether accounts are sockpuppets, including comparing theIP addressesof suspected sockpuppets and comparative analysis of thewriting styleof suspected sockpuppets.[16]UsingGeoIPit is possible to look up the IP addresses and locate them.[17] In 2006, Missouri resident Lori Drew created aMySpaceaccount purporting to be operated by a fictitious 16-year-old boy named Josh Evans. He began an online relationship withMegan Meier, a 13-year-old girl who had allegedly been in conflict with Drew's daughter. After "Josh Evans" ended the relationship with Meier, the latter died of suicide.[18][19] In 2008, Thomas O'Brien,United States Attorneyfor theCentral District of California, charged Drew, then 49, with four felony counts: one count of conspiracy to violate theComputer Fraud and Abuse Act(CFAA), which prohibits "accessing a computer without authorization viainterstate commerce", and three counts of violation of the CFAA, alleging she violated MySpace's terms of service by misrepresenting herself. O'Brien justified his prosecution of the case because MySpace's servers were located in his jurisdiction. The jury convicted Drew of three misdemeanor counts, dismissing one on the grounds prosecutors had failed to demonstrate Drew inflicted emotional distress on Meier.[20][21] During sentencing arguments, prosecutors argued for the maximum sentence for the statute: three years in prison and a fine of $300,000. Drew's lawyers argued her use of a false identity did not constitute unauthorized access to MySpace, citingPeople v. Donell, a 1973breach of contractdispute, in which a court of appeals ruled "fraudulently induced consent is consent nonetheless."[22]JudgeGeorge H. Wudismissed the charges before sentencing.[23] In 2010, 50-year-old lawyer Raphael Golb was convicted on 30 criminal charges, includingidentity theft, criminal impersonation, and aggravated harassment, for using multiple sockpuppet accounts to attack and impersonate historians he perceived as rivals of his father,Norman Golb.[24]Golb defended his actions as "satirical hoaxes" protected by free-speech rights. He was disbarred and sentenced to six months in prison, but the sentence was reduced to probation on appeal.[25] In 2014, a Florida state circuit court held that sock puppetry istortious interferencewith business relations and awarded injunctive relief against it during the pendency of litigation. The court found that "the act of falsifying multiple identities" is conduct that should be enjoined. It explained that the conduct was wrongful "not because the statements are false or true, but because the conduct of making up names of persons who do not exist to post fake comments by fake people to support Defendants' position tortiously interferes with Plaintiffs' business" and such "conduct is inherently unfair."[26] The court, therefore, ordered the defendants to "remove or cause to be removed all postings creating the false impression that more [than one] person are commenting on the program th[an] actually exist." 
The court also found, however, that the comments of the defendants "which do not create a false impression of fake patients or fake employees, or fake persons connected to program (those posted under their respective names) are protected by The Constitution of the United States of America, First Amendment."[26] In 2007, the CEO ofWhole Foods,John Mackey, was discovered to have posted as "Rahodeb" on theYahoo!Finance Message Board, extolling his own company and predicting a dire future for its rival,Wild Oats Markets, while concealing his relationship to both companies. Whole Foods argued that none of Mackey's actions broke the law.[27][28] During the 2007 trial ofConrad Black, chief executive ofHollinger International, prosecutors alleged that he had posted messages on a Yahoo! Finance chat room using the name "nspector", attackingshort sellersand blaming them for his company's stock performance. Prosecutors provided evidence of these postings inBlack's criminal trial, where he was convicted of mail fraud and obstruction. The postings were raised at multiple points in the trial.[27] Anamazon.comcomputer glitch in 2004 revealed the names of many authors who had written pseudonymous reviews of their books.John Rechy, who wrote the best-selling novelCity of Night(1963), was among the authors unmasked in this way, and was shown to have written numerous five-star reviews of his own work.[4]In 2010, historianOrlando Figeswas found to have written Amazon reviews under the names "orlando-birkbeck" and "historian", praising his own books and criticizing those of historiansRachel PolonskyandRobert Service. The two sued Figes and won monetary damages.[29][30] During a panel discussion at a British Crime Writers Festival in 2012, authorStephen Leatheradmitted using pseudonyms to praise his own books, claiming that "everyone does it". He spoke of building a "network of characters", some operated by his friends, who discussed his books and had conversations with him directly.[31]The same year, after he was pressured by the spy novelistJeremy Dunson Twitter, who had detected possible indications online, UK crime fiction writerR.J. Elloryadmitted having used a pseudonymous account name to write a positive review for each of his own novels, and additionally a negative review for two other authors.[32][33] David Manningwas a fictitiousfilm critic, created by a marketing executive working forSony Corporationto give consistently good reviews for releases from Sony subsidiaryColumbia Pictures, which could then be quoted in promotional material.[34] American reporterMichael Hiltzikwas temporarily suspended from posting to his blog, "The Golden State", on theLos Angeles Timeswebsite after he admitted "posting there, as well as on other sites, under false names." He used the pseudonyms to attack conservatives such asHugh Hewittand L.A. prosecutor Patrick Frey—who eventually exposed him.[35][36]Hiltzik's blog at theLA Timeswas the newspaper's first blog. While suspended from blogging, Hiltzik continued to write regularly for the newspaper. Lee Siegel, a writer forThe New Republicmagazine, was suspended for defending his articles and blog comments under the username "Sprezzatura". 
In one such comment, "Sprezzatura" defended Siegel's bad reviews ofJon Stewart: "Siegel is brave, brilliant and wittier than Stewart will ever be."[37][38] In late November 2020,TYT Networkreported an example of awhite maleRepublican PartyDonald Trumpvoter having a sockpuppetTwitteraccount presented as that of a blackgayman, criticizingJoe Bidenand praising Trump while systematically emphasizing his race and sexual orientation. In October 2020, aClemson Universitysocial media researcher identified "more than two dozen of Twitter accounts claiming to be black Trump supporters who gained hundreds of thousands of likes and retweets in a span of just a few days, sparking major doubts about their identities," many using photos of black men from news reports or stock images "including one in which the text 'black man photo' was still watermarked on the image".[39] As an example ofstate-sponsored Internet sockpuppetry, in 2011, a US company calledNtrepidwas awarded a $2.76 million contract fromU.S. Central Commandfor "online persona management" operations[40]to create "fake online personas to influence net conversations and spread U.S. propaganda" in Arabic, Persian, Urdu and Pashto[40]as part ofOperation Earnest Voice. On September 11, 2014, a number of sockpuppet accounts reported an explosion at a chemical plant in Louisiana. The reports came on a range of media, including Twitter and YouTube, but U.S. authorities claimed the entire event to be a hoax. The information was determined by many to have originated with a Russian government-sponsored sockpuppet management office in Saint Petersburg, called theInternet Research Agency.[41]Russia was again implicated by the U.S. intelligence community in 2016 for hiring trolls in the2016 United States presidential election.[42] TheInstitute of Economic Affairsclaimed in a 2012 paper that the United Kingdom government and the European Union fund charities that campaign and lobby for causes the government supports. In one example, 73% of responses to a government consultation were the direct result of campaigns by alleged "sockpuppet" organizations.[43]
https://en.wikipedia.org/wiki/Sock_puppet_account
Inmathematics, arandom walk, sometimes known as adrunkard's walk, is astochastic processthat describes a path that consists of a succession ofrandomsteps on somemathematical space. An elementary example of a random walk is the random walk on the integer number lineZ{\displaystyle \mathbb {Z} }which starts at 0, and at each step moves +1 or −1 with equalprobability. Other examples include the path traced by amoleculeas it travels in a liquid or a gas (seeBrownian motion), the search path of aforaginganimal, or the price of a fluctuatingstockand the financial status of agambler. Random walks have applications toengineeringand many scientific fields includingecology,psychology,computer science,physics,chemistry,biology,economics, andsociology. The termrandom walkwas first introduced byKarl Pearsonin 1905.[1] Realizations of random walks can be obtained byMonte Carlo simulation.[2] A popular random walk model is that of a random walk on a regular lattice, where at each step the location jumps to another site according to some probability distribution. In asimple random walk, the location can only jump to neighboring sites of the lattice, forming alattice path. In asimple symmetric random walkon a locally finite lattice, the probabilities of the location jumping to each one of its immediate neighbors are the same. The best-studied example is the random walk on thed-dimensional integer lattice (sometimes called the hypercubic lattice)Zd{\displaystyle \mathbb {Z} ^{d}}.[3] If the state space is limited to finite dimensions, the random walk model is called asimple bordered symmetric random walk, and the transition probabilities depend on the location of the state because on margin and corner states the movement is limited.[4] An elementary example of a random walk is the random walk on theintegernumber line,Z{\displaystyle \mathbb {Z} }, which starts at 0 and at each step moves +1 or −1 with equal probability. This walk can be illustrated as follows. A marker is placed at zero on the number line, and a fair coin is flipped. If it lands on heads, the marker is moved one unit to the right. If it lands on tails, the marker is moved one unit to the left. After five flips, the marker could now be on -5, -3, -1, 1, 3, 5. With five flips, three heads and two tails, in any order, it will land on 1. There are 10 ways of landing on 1 (by flipping three heads and two tails), 10 ways of landing on −1 (by flipping three tails and two heads), 5 ways of landing on 3 (by flipping four heads and one tail), 5 ways of landing on −3 (by flipping four tails and one head), 1 way of landing on 5 (by flipping five heads), and 1 way of landing on −5 (by flipping five tails). See the figure below for an illustration of the possible outcomes of 5 flips. To define this walk formally, take independent random variablesZ1,Z2,…{\displaystyle Z_{1},Z_{2},\dots }, where each variable is either 1 or −1, with a 50% probability for either value, and setS0=0{\displaystyle S_{0}=0}andSn=∑j=1nZj.{\textstyle S_{n}=\sum _{j=1}^{n}Z_{j}.}Theseries{Sn}{\displaystyle \{S_{n}\}}is called thesimple random walk onZ{\displaystyle \mathbb {Z} }. This series (the sum of the sequence of −1s and 1s) gives the net distance walked, if each part of the walk is of length one. TheexpectationE(Sn){\displaystyle E(S_{n})}ofSn{\displaystyle S_{n}}is zero. That is, the mean of all coin flips approaches zero as the number of flips increases. 
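The endpoint counts quoted above for five flips (1, 5, 10, 10, 5, 1) can be reproduced by brute-force enumeration of all 2⁵ equally likely walks, as in this small sketch.

```python
from itertools import product
from collections import Counter

# Enumerate every sequence of five ±1 steps and tally the endpoints.
counts = Counter(sum(steps) for steps in product((+1, -1), repeat=5))
for endpoint in sorted(counts):
    print(endpoint, counts[endpoint])
# Expected: -5:1, -3:5, -1:10, 1:10, 3:5, 5:1, matching the counts quoted above.
```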
This follows by the finite additivity property of expectation:E(Sn)=∑j=1nE(Zj)=0.{\displaystyle E(S_{n})=\sum _{j=1}^{n}E(Z_{j})=0.} A similar calculation, using the independence of the random variables and the fact thatE(Zn2)=1{\displaystyle E(Z_{n}^{2})=1}, shows that:E(Sn2)=∑i=1nE(Zi2)+2∑1≤i<j≤nE(ZiZj)=n.{\displaystyle E(S_{n}^{2})=\sum _{i=1}^{n}E(Z_{i}^{2})+2\sum _{1\leq i<j\leq n}E(Z_{i}Z_{j})=n.} This hints thatE(|Sn|){\displaystyle E(|S_{n}|)\,\!}, theexpectedtranslation distance afternsteps, should beof the order ofn{\displaystyle {\sqrt {n}}}.In fact,[5]limn→∞E(|Sn|)n=2π.{\displaystyle \lim _{n\to \infty }{\frac {E(|S_{n}|)}{\sqrt {n}}}={\sqrt {\frac {2}{\pi }}}.} To answer the question of how many times will a random walk cross a boundary line if permitted to continue walking forever, a simple random walk onZ{\displaystyle \mathbb {Z} }will cross every point an infinite number of times. This result has many names: thelevel-crossing phenomenon,recurrenceor thegambler's ruin. The reason for the last name is as follows: a gambler with a finite amount of money will eventually lose when playinga fair gameagainst a bank with an infinite amount of money. The gambler's money will perform a random walk, and it will reach zero at some point, and the game will be over. Ifaandbare positive integers, then the expected number of steps until a one-dimensional simple random walk starting at 0 first hitsbor −aisab. The probability that this walk will hitbbefore −aisa/(a+b){\displaystyle a/(a+b)}, which can be derived from the fact that simple random walk is amartingale. And these expectations and hitting probabilities can be computed inO(a+b){\displaystyle O(a+b)}in the general one-dimensional random walk Markov chain. Some of the results mentioned above can be derived from properties ofPascal's triangle. The number of different walks ofnsteps where each step is +1 or −1 is 2n. For the simple random walk, each of these walks is equally likely. In order forSnto be equal to a numberkit is necessary and sufficient that the number of +1 in the walk exceeds those of −1 byk. It follows +1 must appear (n+k)/2 times amongnsteps of a walk, hence the number of walks which satisfySn=k{\displaystyle S_{n}=k}equals the number of ways of choosing (n+k)/2 elements from annelement set,[6]denoted(n(n+k)/2){\textstyle n \choose (n+k)/2}. For this to have meaning, it is necessary thatn+kbe an even number, which impliesnandkare either both even or both odd. Therefore, the probability thatSn=k{\displaystyle S_{n}=k}is equal to2−n(n(n+k)/2){\textstyle 2^{-n}{n \choose (n+k)/2}}. By representing entries of Pascal's triangle in terms offactorialsand usingStirling's formula, one can obtain good estimates for these probabilities for large values ofn{\displaystyle n}. If space is confined toZ{\displaystyle \mathbb {Z} }+ for brevity, the number of ways in which a random walk will land on any given number having five flips can be shown as {0,5,0,4,0,1}. This relation with Pascal's triangle is demonstrated for small values ofn. At zero turns, the only possibility will be to remain at zero. However, at one turn, there is one chance of landing on −1 or one chance of landing on 1. At two turns, a marker at 1 could move to 2 or back to zero. A marker at −1, could move to −2 or back to zero. Therefore, there is one chance of landing on −2, two chances of landing on zero, and one chance of landing on 2. 
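A quick Monte Carlo sketch (parameters chosen arbitrarily) illustrates two of the facts above: E|S_n|/√n approaches √(2/π), and a walk started at 0 hits b before −a with probability a/(a + b), after about ab steps on average.

```python
import numpy as np

rng = np.random.default_rng(0)

# E|S_n| / sqrt(n) should approach sqrt(2/pi).
n, trials = 10_000, 2_000
abs_end = np.array([abs(rng.choice((-1, 1), size=n).sum()) for _ in range(trials)])
print(abs_end.mean() / np.sqrt(n), "vs", np.sqrt(2 / np.pi))

# Gambler's ruin: starting at 0, P(hit b before -a) = a/(a+b) and E[steps] = a*b.
a, b, runs = 3, 5, 20_000
hits_b = steps_total = 0
for _ in range(runs):
    s = steps = 0
    while -a < s < b:
        s += 1 if rng.random() < 0.5 else -1
        steps += 1
    hits_b += (s == b)
    steps_total += steps
print(hits_b / runs, "vs", a / (a + b), "|", steps_total / runs, "vs", a * b)
```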
Thecentral limit theoremand thelaw of the iterated logarithmdescribe important aspects of the behavior of simple random walks onZ{\displaystyle \mathbb {Z} }. In particular, the former entails that asnincreases, the probabilities (proportional to the numbers in each row) approach anormal distribution. To be precise, knowing thatP(Xn=k)=2−n(n(n+k)/2){\textstyle \mathbb {P} (X_{n}=k)=2^{-n}{\binom {n}{(n+k)/2}}}, and usingStirling's formulaone has log⁡P(Xn=k)=n[(1+kn+12n)log⁡(1+kn)+(1−kn+12n)log⁡(1−kn)]+log⁡2π+o(1).{\displaystyle {\log \mathbb {P} (X_{n}=k)}=n\left[\left({1+{\frac {k}{n}}+{\frac {1}{2n}}}\right)\log \left(1+{\frac {k}{n}}\right)+\left({1-{\frac {k}{n}}+{\frac {1}{2n}}}\right)\log \left(1-{\frac {k}{n}}\right)\right]+\log {\frac {\sqrt {2}}{\sqrt {\pi }}}+o(1).} Fixing the scalingk=⌊nx⌋{\textstyle k=\lfloor {\sqrt {n}}x\rfloor }, forx{\textstyle x}fixed, and using the expansionlog⁡(1+k/n)=k/n−k2/2n2+…{\textstyle \log(1+{k}/{n})=k/n-k^{2}/2n^{2}+\dots }whenk/n{\textstyle k/n}vanishes, it follows P(Xnn=⌊nx⌋n)=1n12πe−x2(1+o(1)).{\displaystyle {\mathbb {P} \left({\frac {X_{n}}{n}}={\frac {\lfloor {\sqrt {n}}x\rfloor }{\sqrt {n}}}\right)}={\frac {1}{\sqrt {n}}}{\frac {1}{2{\sqrt {\pi }}}}e^{-{x^{2}}}(1+o(1)).} taking the limit (and observing that1/n{\textstyle {1}/{\sqrt {n}}}corresponds to the spacing of the scaling grid) one finds the gaussian densityf(x)=12πe−x2{\textstyle f(x)={\frac {1}{2{\sqrt {\pi }}}}e^{-{x^{2}}}}. Indeed, for a absolutely continuous random variableX{\textstyle X}with densityfX{\textstyle f_{X}}it holdsP(X∈[x,x+dx))=fX(x)dx{\textstyle \mathbb {P} \left(X\in [x,x+dx)\right)=f_{X}(x)dx}, withdx{\textstyle dx}corresponding to an infinitesimal spacing. As a direct generalization, one can consider random walks on crystal lattices (infinite-fold abelian covering graphs over finite graphs). Actually it is possible to establish the central limit theorem and large deviation theorem in this setting.[7][8] A one-dimensionalrandom walkcan also be looked at as aMarkov chainwhose state space is given by the integersi=0,±1,±2,….{\displaystyle i=0,\pm 1,\pm 2,\dots .}For some numberpsatisfying0<p<1{\displaystyle \,0<p<1}, the transition probabilities (the probabilityPi,jof moving from stateito statej) are given byPi,i+1=p=1−Pi,i−1.{\displaystyle \,P_{i,i+1}=p=1-P_{i,i-1}.} The heterogeneous random walk draws in each time step a random number that determines the local jumping probabilities and then a random number that determines the actual jump direction. The main question is the probability of staying in each of the various sites aftert{\displaystyle t}jumps, and in the limit of this probability whent{\displaystyle t}is very large. In higher dimensions, the set of randomly walked points has interesting geometric properties. In fact, one gets a discretefractal, that is, a set which exhibits stochasticself-similarityon large scales. On small scales, one can observe "jaggedness" resulting from the grid on which the walk is performed. The trajectory of a random walk is the collection of points visited, considered as a set with disregard towhenthe walk arrived at the point. In one dimension, the trajectory is simply all points between the minimum height and the maximum height the walk achieved (both are, on average, on the order ofn{\displaystyle {\sqrt {n}}}). To visualize the two-dimensional case, one can imagine a person walking randomly around a city. The city is effectively infinite and arranged in a square grid of sidewalks. 
At every intersection, the person randomly chooses one of the four possible routes (including the one originally travelled from). Formally, this is a random walk on the set of all points in theplanewithintegercoordinates. To answer the question of the person ever getting back to the original starting point of the walk, this is the 2-dimensional equivalent of the level-crossing problem discussed above. In 1921George Pólyaproved that the personalmost surelywould in a 2-dimensional random walk, but for 3 dimensions or higher, the probability of returning to the origin decreases as the number of dimensions increases. In 3 dimensions, the probability decreases to roughly 34%.[9]The mathematicianShizuo Kakutaniwas known to refer to this result with the following quote: "A drunk man will find his way home, but a drunk bird may get lost forever".[10] The probability of recurrence is in generalp=1−(1πd∫[−π,π]d∏i=1ddθi1−1d∑i=1dcos⁡θi)−1{\displaystyle p=1-\left({\frac {1}{\pi ^{d}}}\int _{[-\pi ,\pi ]^{d}}{\frac {\prod _{i=1}^{d}d\theta _{i}}{1-{\frac {1}{d}}\sum _{i=1}^{d}\cos \theta _{i}}}\right)^{-1}}, which can be derived bygenerating functions[11]or Poisson process.[12] Another variation of this question which was also asked by Pólya is: "if two people leave the same starting point, then will they ever meet again?"[13]It can be shown that the difference between their locations (two independent random walks) is also a simple random walk, so they almost surely meet again in a 2-dimensional walk, but for 3 dimensions and higher the probability decreases with the number of the dimensions.Paul Erdősand Samuel James Taylor also showed in 1960 that for dimensions less or equal than 4, two independent random walks starting from any two given points have infinitely many intersections almost surely, but for dimensions higher than 5, they almost surely intersect only finitely often.[14] The asymptotic function for a two-dimensional random walk as the number of steps increases is given by aRayleigh distribution. The probability distribution is a function of the radius from the origin and the step length is constant for each step. Here, the step length is assumed to be 1, N is the total number of steps and r is the radius from the origin.[15] P(r)=2rNe−r2/N{\displaystyle P(r)={\frac {2r}{N}}e^{-r^{2}/N}} AWiener processis a stochastic process with similar behavior toBrownian motion, the physical phenomenon of a minute particle diffusing in a fluid. (Sometimes theWiener processis called "Brownian motion", although this is strictly speaking a confusion of a model with the phenomenon being modeled.) A Wiener process is thescaling limitof random walk in dimension 1. This means that if there is a random walk with very small steps, there is an approximation to a Wiener process (and, less accurately, to Brownian motion). To be more precise, if the step size is ε, one needs to take a walk of lengthL/ε2to approximate a Wiener length ofL. As the step size tends to 0 (and the number of steps increases proportionally), random walk converges to a Wiener process in an appropriate sense. Formally, ifBis the space of all paths of lengthLwith the maximum topology, and ifMis the space of measure overBwith the norm topology, then the convergence is in the spaceM. Similarly, a Wiener process in several dimensions is the scaling limit of random walk in the same number of dimensions. 
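The Rayleigh approximation P(r) = (2r/N)e^(−r²/N) for the end-to-end distance can be checked by simulating many N-step walks on the square lattice; the sketch below (bin choices arbitrary) compares the empirical radial histogram with that density.

```python
import numpy as np

rng = np.random.default_rng(0)
N, walks = 400, 10_000
steps = rng.integers(0, 4, size=(walks, N))              # four lattice directions
dx = np.select([steps == 0, steps == 1], [1, -1], default=0)
dy = np.select([steps == 2, steps == 3], [1, -1], default=0)
r = np.hypot(dx.sum(axis=1), dy.sum(axis=1))             # end-to-end distances

edges = np.linspace(0.0, 4 * np.sqrt(N), 9)
hist, _ = np.histogram(r, bins=edges, density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
rayleigh = (2 * centres / N) * np.exp(-centres**2 / N)
for c, h, p in zip(centres, hist, rayleigh):
    print(f"r ≈ {c:5.1f}   empirical {h:.5f}   Rayleigh {p:.5f}")
```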
A random walk is a discrete fractal (a function with integer dimensions; 1, 2, ...), but a Wiener process trajectory is a true fractal, and there is a connection between the two. For example, take a random walk until it hits a circle of radiusrtimes the step length. The average number of steps it performs isr2.[citation needed]This fact is thediscrete versionof the fact that a Wiener process walk is a fractal ofHausdorff dimension2.[citation needed] In two dimensions, the average number of points the same random walk has on theboundaryof its trajectory isr4/3. This corresponds to the fact that the boundary of the trajectory of a Wiener process is a fractal of dimension 4/3, a fact predicted byMandelbrotusing simulations but proved only in 2000 byLawler,SchrammandWerner.[16] A Wiener process enjoys manysymmetriesa random walk does not. For example, a Wiener process walk is invariant to rotations, but the random walk is not, since the underlying grid is not (random walk is invariant to rotations by 90 degrees, but Wiener processes are invariant to rotations by, for example, 17 degrees too). This means that in many cases, problems on a random walk are easier to solve by translating them to a Wiener process, solving the problem there, and then translating back. On the other hand, some problems are easier to solve with random walks due to its discrete nature. Random walk andWiener processcan becoupled, namely manifested on the same probability space in a dependent way that forces them to be quite close. The simplest such coupling is theSkorokhod embedding, but there exist more precise couplings, such asKomlós–Major–Tusnády approximationtheorem. The convergence of a random walk toward the Wiener process is controlled by thecentral limit theorem, and byDonsker's theorem. For a particle in a known fixed position att= 0, the central limit theorem tells us that after a large number ofindependentsteps in the random walk, the walker's position is distributed according to anormal distributionof totalvariance: σ2=tδtε2,{\displaystyle \sigma ^{2}={\frac {t}{\delta t}}\,\varepsilon ^{2},} wheretis the time elapsed since the start of the random walk,ε{\displaystyle \varepsilon }is the size of a step of the random walk, andδt{\displaystyle \delta t}is the time elapsed between two successive steps. This corresponds to theGreen's functionof thediffusion equationthat controls the Wiener process, which suggests that, after a large number of steps, the random walk converges toward a Wiener process. In 3D, the variance corresponding to theGreen's functionof the diffusion equation is:σ2=6Dt.{\displaystyle \sigma ^{2}=6\,D\,t.} By equalizing this quantity with the variance associated to the position of the random walker, one obtains the equivalent diffusion coefficient to be considered for the asymptotic Wiener process toward which the random walk converges after a large number of steps:D=ε26δt{\displaystyle D={\frac {\varepsilon ^{2}}{6\delta t}}}(valid only in 3D). The two expressions of the variance above correspond to the distribution associated to the vectorR→{\displaystyle {\vec {R}}}that links the two ends of the random walk, in 3D. The variance associated to each componentRx{\displaystyle R_{x}},Ry{\displaystyle R_{y}}orRz{\displaystyle R_{z}}is only one third of this value (still in 3D). 
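The relation between the microscopic step parameters and the diffusion coefficient can be sanity-checked numerically. The sketch below simulates walkers taking isotropic 3-D steps of length ε every δt and compares the mean squared displacement after time t with 6Dt, where D = ε²/(6δt) as given above; the parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
eps, dt, t = 0.05, 1e-3, 1.0          # step size, time between steps, total time
n_steps, walkers = int(t / dt), 5_000

pos = np.zeros((walkers, 3))
for _ in range(n_steps):
    step = rng.normal(size=(walkers, 3))
    step *= eps / np.linalg.norm(step, axis=1, keepdims=True)   # isotropic, length eps
    pos += step

msd = np.mean(np.sum(pos**2, axis=1))   # empirical E|R(t)|^2 over walkers
D = eps**2 / (6 * dt)                   # diffusion coefficient from the text (3D)
print("empirical E|R|^2:", msd, "  theory 6*D*t:", 6 * D * t)
```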
For 2D:[17] D=ε24δt.{\displaystyle D={\frac {\varepsilon ^{2}}{4\delta t}}.} For 1D:[18] D=ε22δt.{\displaystyle D={\frac {\varepsilon ^{2}}{2\delta t}}.} A random walk having a step size that varies according to anormal distributionis used as a model for real-world time series data such as financial markets. Here, the step size is the inverse cumulative normal distributionΦ−1(z,μ,σ){\displaystyle \Phi ^{-1}(z,\mu ,\sigma )}where 0 ≤z≤ 1 is a uniformly distributed random number, and μ and σ are the mean and standard deviations of the normal distribution, respectively. If μ is nonzero, the random walk will vary about a linear trend. If vsis the starting value of the random walk, the expected value afternsteps will be vs+nμ. For the special case where μ is equal to zero, afternsteps, the translation distance's probability distribution is given byN(0,nσ2), whereN() is the notation for the normal distribution,nis the number of steps, and σ is from the inverse cumulative normal distribution as given above. Proof: The Gaussian random walk can be thought of as the sum of a sequence of independent and identically distributed random variables, Xifrom the inverse cumulative normal distribution with mean equal zero and σ of the original inverse cumulative normal distribution: but we have the distribution for the sum of two independent normally distributed random variables,Z=X+Y, is given byN(μX+μY,σX2+σY2){\displaystyle {\mathcal {N}}(\mu _{X}+\mu _{Y},\sigma _{X}^{2}+\sigma _{Y}^{2})}(see here). In our case,μX= μY= 0andσ2X= σ2Y= σ2yieldN(0,2σ2){\displaystyle {\mathcal {N}}(0,2\sigma ^{2})}By induction, fornsteps we haveZ∼N(0,nσ2).{\displaystyle Z\sim {\mathcal {N}}(0,n\sigma ^{2}).}For steps distributed according to any distribution with zero mean and a finite variance (not necessarily just a normal distribution), theroot mean squaretranslation distance afternsteps is (seeBienaymé's identity) But for the Gaussian random walk, this is just the standard deviation of the translation distance's distribution afternsteps. Hence, if μ is equal to zero, and since the root mean square(RMS) translation distance is one standard deviation, there is 68.27% probability that the RMS translation distance afternsteps will fall between±σn{\displaystyle \pm \sigma {\sqrt {n}}}. Likewise, there is 50% probability that the translation distance afternsteps will fall between±0.6745σn{\displaystyle \pm 0.6745\sigma {\sqrt {n}}}. The number of distinct sites visited by a single random walkerS(t){\displaystyle S(t)}has been studied extensively for square and cubic lattices and for fractals.[19][20]This quantity is useful for the analysis of problems of trapping and kinetic reactions. It is also related to the vibrational density of states,[21][22]diffusion reactions processes[23]and spread of populations in ecology.[24][25] Theinformation rateof a Gaussian random walk with respect to the squared error distance, i.e. its quadraticrate distortion function, is given parametrically by[26]R(Dθ)=12∫01max{0,log2⁡(S(φ)/θ)}dφ,{\displaystyle R(D_{\theta })={\frac {1}{2}}\int _{0}^{1}\max\{0,\log _{2}\left(S(\varphi )/\theta \right)\}\,d\varphi ,}Dθ=∫01min{S(φ),θ}dφ,{\displaystyle D_{\theta }=\int _{0}^{1}\min\{S(\varphi ),\theta \}\,d\varphi ,}whereS(φ)=(2sin⁡(πφ/2))−2{\displaystyle S(\varphi )=\left(2\sin(\pi \varphi /2)\right)^{-2}}. 
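The 68.27% and 50% statements for a zero-drift Gaussian random walk are easy to check empirically, as in this short sketch (sample sizes arbitrary).

```python
import numpy as np

rng = np.random.default_rng(2)
n, sigma, walks = 100, 1.0, 50_000

# Gaussian random walk: n i.i.d. N(0, sigma^2) steps per walk.
endpoints = rng.normal(0.0, sigma, size=(walks, n)).sum(axis=1)

rms = sigma * np.sqrt(n)
print("within ±σ√n       :", np.mean(np.abs(endpoints) <= rms))           # ≈ 0.6827
print("within ±0.6745σ√n :", np.mean(np.abs(endpoints) <= 0.6745 * rms))  # ≈ 0.50
```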
Therefore, it is impossible to encode{Zn}n=1N{\displaystyle {\{Z_{n}\}_{n=1}^{N}}}using abinary codeof less thanNR(Dθ){\displaystyle NR(D_{\theta })}bitsand recover it with expected mean squared error less thanDθ{\displaystyle D_{\theta }}. On the other hand, for anyε>0{\displaystyle \varepsilon >0}, there exists anN∈N{\displaystyle N\in \mathbb {N} }large enough and abinary codeof no more than2NR(Dθ){\displaystyle 2^{NR(D_{\theta })}}distinct elements such that the expected mean squared error in recovering{Zn}n=1N{\displaystyle {\{Z_{n}\}_{n=1}^{N}}}from this code is at mostDθ−ε{\displaystyle D_{\theta }-\varepsilon }. As mentioned, the range of natural phenomena which have been subject to attempts at description by some flavour of random walks is considerable. This is particularly the case in the fields of physics,[27][28]chemistry,[29]materials science,[30][31]and biology.[32][33][34] The following are some specific applications of random walks: A number of types ofstochastic processeshave been considered that are similar to the pure random walks but where the simple structure is allowed to be more generalized. Thepurestructure can be characterized by the steps being defined byindependent and identically distributed random variables. Random walks can take place on a variety of spaces, such asgraphs, the integers, the real line, the plane or higher-dimensional vector spaces, oncurved surfacesor higher-dimensionalRiemannian manifolds, and ongroups. It is also possible to define random walks which take their steps at random times, and in that case, the positionXthas to be defined for all timest∈ [0, +∞). Specific cases or limits of random walks include theLévy flightanddiffusionmodels such asBrownian motion. A random walk of lengthkon a possibly infinitegraphGwith a root0is a stochastic process with random variablesX1,X2,…,Xk{\displaystyle X_{1},X_{2},\dots ,X_{k}}such thatX1=0{\displaystyle X_{1}=0}andXi+1{\displaystyle {X_{i+1}}}is a vertex chosen uniformly at random from the neighbors ofXi{\displaystyle X_{i}}. Then the numberpv,w,k(G){\displaystyle p_{v,w,k}(G)}is the probability that a random walk of lengthkstarting atvends atw. In particular, ifGis a graph with root0,p0,0,2k{\displaystyle p_{0,0,2k}}is the probability that a2k{\displaystyle 2k}-step random walk returns to0. Building on the analogy from the earlier section on higher dimensions, assume now that our city is no longer a perfect square grid. When our person reaches a certain junction, he picks between the variously available roads with equal probability. Thus, if the junction has seven exits the person will go to each one with probability one-seventh. This is a random walk on a graph. Will our person reach his home? It turns out that under rather mild conditions, the answer is still yes,[45]but depending on the graph, the answer to the variant question 'Will two persons meet again?' may not be that they meet infinitely often almost surely.[46] An example of a case where the person will reach his home almost surely is when the lengths of all the blocks are betweenaandb(whereaandbare any two finite positive numbers). Notice that we do not assume that the graph isplanar, i.e. the city may contain tunnels and bridges. One way to prove this result is using the connection toelectrical networks. Take a map of the city and place a oneohmresistoron every block. Now measure the "resistance between a point and infinity". 
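The return probability p_{0,0,2k} of a random walk on a graph can be estimated by simulation; the sketch below uses a small, arbitrary example graph given as an adjacency list, with the walker moving to a uniformly random neighbour at each step, as in the definition above.

```python
import random

# A small example graph as an adjacency list (vertex 0 is the root).
graph = {
    0: [1, 2],
    1: [0, 2, 3],
    2: [0, 1, 3],
    3: [1, 2],
}

def return_probability(graph, root, length, trials=100_000):
    """Monte Carlo estimate of p_{root,root,length}: the probability that a
    random walk of the given length starting at root ends at root."""
    returns = 0
    for _ in range(trials):
        v = root
        for _ in range(length):
            v = random.choice(graph[v])   # uniform neighbour, as in the definition
        returns += (v == root)
    return returns / trials

for k in (1, 2, 3):
    print(2 * k, return_probability(graph, 0, 2 * k))
```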
In other words, choose some numberRand take all the points in the electrical network with distance bigger thanRfrom our point and wire them together. This is now a finite electrical network, and we may measure the resistance from our point to the wired points. TakeRto infinity. The limit is called theresistance between a point and infinity. It turns out that the following is true (an elementary proof can be found in the book by Doyle and Snell): Theorem:a graph is transient if and only if the resistance between a point and infinity is finite. It is not important which point is chosen if the graph is connected. In other words, in a transient system, one only needs to overcome a finite resistance to get to infinity from any point. In a recurrent system, the resistance from any point to infinity is infinite. This characterization oftransience and recurrenceis very useful, and specifically it allows us to analyze the case of a city drawn in the plane with the distances bounded. A random walk on a graph is a very special case of aMarkov chain. Unlike a general Markov chain, random walk on a graph enjoys a property calledtime symmetryorreversibility. Roughly speaking, this property, also called the principle ofdetailed balance, means that the probabilities to traverse a given path in one direction or the other have a very simple connection between them (if the graph isregular, they are just equal). This property has important consequences. Starting in the 1980s, much research has gone into connecting properties of the graph to random walks. In addition to the electrical network connection described above, there are important connections toisoperimetric inequalities, see morehere, functional inequalities such asSobolevandPoincaréinequalities and properties of solutions ofLaplace's equation. A significant portion of this research was focused onCayley graphsoffinitely generated groups. In many cases these discrete results carry over to, or are derived frommanifoldsandLie groups. In the context ofrandom graphs, particularly that of theErdős–Rényi model, analytical results to some properties of random walkers have been obtained. These include the distribution of first[47]and last hitting times[48]of the walker, where the first hitting time is given by the first time the walker steps into a previously visited site of the graph, and the last hitting time corresponds the first time the walker cannot perform an additional move without revisiting a previously visited site. A good reference for random walk on graphs is the online book byAldous and Fill. For groups see the book of Woess. If the transition kernelp(x,y){\displaystyle p(x,y)}is itself random (based on an environmentω{\displaystyle \omega }) then the random walk is called a "random walk in random environment". When the law of the random walk includes the randomness ofω{\displaystyle \omega }, the law is called the annealed law; on the other hand, ifω{\displaystyle \omega }is seen as fixed, the law is called a quenched law. See the book of Hughes, the book of Revesz, or the lecture notes of Zeitouni. We can think about choosing every possible edge with the same probability as maximizing uncertainty (entropy) locally. We could also do it globally – in maximal entropy random walk (MERW) we want all paths to be equally probable, or in other words: for every two vertexes, each path of given length is equally probable.[49]This random walk has much stronger localization properties. 
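Reversibility can be made concrete in a few lines: for a random walk on an undirected graph, the stationary distribution is proportional to vertex degree, and detailed balance π(v)P(v, w) = π(w)P(w, v) holds edge by edge (with equal traversal probabilities when the graph is regular). The example graph below is arbitrary.

```python
graph = {
    0: [1, 2],
    1: [0, 2, 3],
    2: [0, 1, 3],
    3: [1, 2],
}
deg = {v: len(nbrs) for v, nbrs in graph.items()}
total = sum(deg.values())
pi = {v: d / total for v, d in deg.items()}        # stationary distribution ∝ degree

def P(v, w):
    return 1 / deg[v] if w in graph[v] else 0.0    # uniform step to a neighbour

for v in graph:
    for w in graph[v]:
        assert abs(pi[v] * P(v, w) - pi[w] * P(w, v)) < 1e-12   # detailed balance
print("detailed balance holds; pi =", pi)
```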
There are a number of interesting models of random paths in which each step depends on the past in a complicated manner. All are more complex for solving analytically than the usual random walk; still, the behavior of any model of a random walker is obtainable using computers. Examples include: The self-avoiding walk of lengthnonZd{\displaystyle \mathbb {Z} ^{d}}is the randomn-step path which starts at the origin, makes transitions only between adjacent sites inZd{\displaystyle \mathbb {Z} ^{d}}, never revisit a site, and is chosen uniformly among all such paths. In two dimensions, due to self-trapping, a typical self-avoiding walk is very short,[51]while in higher dimension it grows beyond all bounds. This model has often been used inpolymer physics(since the 1960s). Random walk chosen to maximizeentropy rate, has much stronger localization properties. Random walks where the direction of movement at one time iscorrelatedwith the direction of movement at the next time. It is used to model animal movements.[56][57]
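Self-trapping in two dimensions can be illustrated with a simple growth process: extend the walk one step at a time, choosing uniformly among the not-yet-visited neighbours, and stop when none remain. Note that this growth process illustrates trapping but is not a uniform sampler over all self-avoiding walks of a given length; the trial count below is arbitrary.

```python
import random

def grow_until_trapped() -> int:
    """Grow a walk on Z^2 that never revisits a site, choosing uniformly among
    the unvisited neighbours; return its length when it traps itself."""
    pos = (0, 0)
    visited = {pos}
    length = 0
    while True:
        x, y = pos
        options = [p for p in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
                   if p not in visited]
        if not options:
            return length
        pos = random.choice(options)
        visited.add(pos)
        length += 1

lengths = [grow_until_trapped() for _ in range(5_000)]
print("mean steps before trapping:", sum(lengths) / len(lengths))
```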
https://en.wikipedia.org/wiki/Random_walk
An integrated collaboration environment (ICE) is an environment in which a virtual team does its work. Such environments allow companies to realize a number of competitive advantages by using their existing computers and network infrastructure for group and personal collaboration. These fully featured environments combine the best features of web-based conferencing and collaboration, desktop videoconferencing, and instant messaging into a single, easy-to-use, intuitive environment. Recent developments have allowed companies to include streaming, in real-time and archived modes, in their ICE. Common applications found within ICE are: ICE allows organizations to take advantage of technological advances in computer processing power and video technology while maintaining backward compatibility with existing standards-based hardware conference equipment. ICE can reduce costs for a company. These benefits are achieved through cross-discipline fertilization, which allows knowledge workers to share information across departments of a company; this can be important for ensuring that corporate goals are shared and fully integrated. There can be challenges to implementing ICE due to employees' lack of acceptance of knowledge management systems. Studies have shown that lack of commitment and motivation by knowledge workers, professionals, and managers is the reason for problems, not the knowledge management technologies. Possible reasons for the lack of acceptance include:
https://en.wikipedia.org/wiki/Integrated_collaboration_environment
In mathematics,stochastic geometryis the study of random spatial patterns. At the heart of the subject lies the study of random point patterns. This leads to the theory ofspatial point processes, hence notions of Palm conditioning, which extend to the more abstract setting ofrandom measures. There are various models for point processes, typically based on but going beyond the classic homogeneousPoisson point process(the basic model forcomplete spatial randomness) to find expressive models which allow effective statistical methods. The point pattern theory provides a major building block for generation of random object processes, allowing construction of elaborate random spatial patterns. The simplest version, theBoolean model, places a random compact object at each point of a Poisson point process. More complex versions allow interactions based in various ways on the geometry of objects. Different directions of application include: the production of models for random images either as set-union of objects, or as patterns of overlapping objects; also the generation of geometrically inspired models for the underlying point process (for example, the point pattern distribution may be biased by an exponential factor involving the area of the union of the objects; this is related to the Widom–Rowlinson model[1]of statistical mechanics). What is meant by a random object? A complete answer to this question requires the theory ofrandom closed sets, which makes contact with advanced concepts from measure theory. The key idea is to focus on the probabilities of the given random closed set hitting specified test sets. There arise questions of inference (for example, estimate the set which encloses a given point pattern) and theories of generalizations of means etc. to apply to random sets. Connections are now being made between this latter work and recent developments in geometric mathematical analysis concerning general metric spaces and their geometry. Good parametrizations of specific random sets can allow us to refer random object processes to the theory of marked point processes; object-point pairs are viewed as points in a larger product space formed as the product of the original space and the space of parametrization. Suppose we are concerned no longer with compact objects, but with objects which are spatially extended: lines on the plane or flats in 3-space. This leads to consideration of line processes, and of processes of flats or hyper-flats. There can no longer be a preferred spatial location for each object; however the theory may be mapped back into point process theory by representing each object by a point in a suitable representation space. For example, in the case of directed lines in the plane one may take the representation space to be a cylinder. A complication is that the Euclidean motion symmetries will then be expressed on the representation space in a somewhat unusual way. Moreover, calculations need to take account of interesting spatial biases (for example, line segments are less likely to be hit by random lines to which they are nearly parallel) and this provides an interesting and significant connection to the hugely significant area ofstereology, which in some respects can be viewed as yet another theme of stochastic geometry. It is often the case that calculations are best carried out in terms of bundles of lines hitting various test-sets, rather than by working in representation space. 
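The simplest Boolean model is easy to simulate: scatter Poisson points in a window and attach a disc of fixed radius to each. The sketch below (parameters arbitrary) estimates the covered area fraction on a pixel grid and compares it with 1 − exp(−λπρ²), the standard coverage formula for this model.

```python
import numpy as np

rng = np.random.default_rng(3)
lam, rho, L = 0.5, 0.6, 20.0          # intensity, disc radius, window side

# Poisson point process on an enlarged window so discs can reach in from outside.
n_pts = rng.poisson(lam * (L + 2 * rho) ** 2)
centres = rng.uniform(-rho, L + rho, size=(n_pts, 2))

# Estimate the covered area fraction on a pixel grid inside [0, L]^2.
xs = np.linspace(0, L, 400)
X, Y = np.meshgrid(xs, xs)
covered = np.zeros_like(X, dtype=bool)
for cx, cy in centres:
    covered |= (X - cx) ** 2 + (Y - cy) ** 2 <= rho ** 2

print("empirical coverage:", covered.mean())
print("1 - exp(-λπρ²)    :", 1 - np.exp(-lam * np.pi * rho ** 2))
```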
Line and hyper-flat processes have their own direct applications, but also find application as one way of creatingtessellationsdividing space; hence for example one may speak of Poisson line tessellations. A notable result[2]proves that the cell at the origin of the Poisson line tessellation is approximately circular when conditioned to be large. Tessellations in stochastic geometry can of course be produced by other means, for example by usingVoronoiand variant constructions, and also by iterating various means of construction. The name appears to have been coined byDavid KendallandKlaus Krickeberg[3]while preparing for a June 1969Oberwolfachworkshop, though antecedents for the theory stretch back much further under the namegeometric probability. The term "stochastic geometry" was also used by Frisch andHammersleyin 1963[4]as one of two suggestions for names of a theory of "random irregular structures" inspired bypercolation theory. This brief description has focused on the theory[3][5]of stochastic geometry, which allows a view of the structure of the subject. However, much of the life and interest of the subject, and indeed many of its original ideas, flow from a very wide range of applications, for example: astronomy,[6]spatially distributed telecommunications,[7]wireless network modeling and analysis,[8]modeling ofchannel fading,[9][10]forestry,[11]the statistical theory of shape,[12]material science,[13]multivariate analysis, problems inimage analysis[14]andstereology. There are links to statistical mechanics,[15]Markov chain Monte Carlo, and implementations of the theory in statistical computing (for example, spatstat[16]inR). Most recently determinantal and permanental point processes (connected to random matrix theory) are beginning to play a role.[17]
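Returning to the Boolean model described above, a minimal simulation sketch, assuming a homogeneous Poisson point process of germs on the unit square and independent, uniformly distributed disc radii (the intensity, radius bound, and grid resolution are illustrative choices, not canonical ones):

```python
import math
import random

def boolean_model(intensity=50.0, max_radius=0.08, grid=200, seed=1):
    """Boolean model: discs with random radii centred at the points of a
    homogeneous Poisson point process on the unit square; returns the number
    of germs and an estimate of the area fraction covered by their union."""
    rng = random.Random(seed)
    # Sample the Poisson-distributed number of germs by inversion (Knuth's method).
    target = math.exp(-intensity)
    n, p = 0, rng.random()
    while p > target:
        p *= rng.random()
        n += 1
    germs = [(rng.random(), rng.random(), rng.random() * max_radius) for _ in range(n)]
    # Estimate the covered area fraction on a regular grid of test points.
    covered = 0
    for i in range(grid):
        for j in range(grid):
            x, y = (i + 0.5) / grid, (j + 0.5) / grid
            if any((x - cx) ** 2 + (y - cy) ** 2 <= r * r for cx, cy, r in germs):
                covered += 1
    return n, covered / (grid * grid)

n_germs, area_fraction = boolean_model()
print(f"{n_germs} germs, estimated covered area fraction {area_fraction:.3f}")
```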
https://en.wikipedia.org/wiki/Stochastic_geometry
Aregular expression(shortened asregexorregexp),[1]sometimes referred to asrational expression,[2][3]is a sequence ofcharactersthat specifies amatch patternintext. Usually such patterns are used bystring-searching algorithmsfor "find" or "find and replace" operations onstrings, or forinput validation. Regular expression techniques are developed intheoretical computer scienceandformal languagetheory. The concept of regular expressions began in the 1950s, when the American mathematicianStephen Cole Kleeneformalized the concept of aregular language. They came into common use withUnixtext-processing utilities. Differentsyntaxesfor writing regular expressions have existed since the 1980s, one being thePOSIXstandard and another, widely used, being thePerlsyntax. Regular expressions are used insearch engines, in search and replace dialogs ofword processorsandtext editors, intext processingutilities such assedandAWK, and inlexical analysis. Regular expressions are supported in many programming languages. Library implementations are often called an "engine",[4][5]andmany of theseare available for reuse. Regular expressions originated in 1951, when mathematicianStephen Cole Kleenedescribedregular languagesusing his mathematical notation calledregular events.[6][7]These arose intheoretical computer science, in the subfields ofautomata theory(models of computation) and the description and classification offormal languages, motivated by Kleene's attempt to describe earlyartificial neural networks. (Kleene introduced it as an alternative toMcCulloch & Pitts's"prehensible", but admitted "We would welcome any suggestions as to a more descriptive term."[8]) Other early implementations ofpattern matchinginclude theSNOBOLlanguage, which did not use regular expressions, but instead its own pattern matching constructs. Regular expressions entered popular use from 1968 in two uses: pattern matching in a text editor[9]and lexical analysis in a compiler.[10]Among the first appearances of regular expressions in program form was whenKen Thompsonbuilt Kleene's notation into the editorQEDas a means to match patterns intext files.[9][11][12][13]For speed, Thompson implemented regular expression matching byjust-in-time compilation(JIT) toIBM 7094code on theCompatible Time-Sharing System, an important early example of JIT compilation.[14]He later added this capability to the Unix editored, which eventually led to the popular search toolgrep's use of regular expressions ("grep" is a word derived from the command for regular expression searching in the ed editor:g/re/pmeaning "Global search for Regular Expression and Print matching lines").[15]Around the same time when Thompson developed QED, a group of researchers includingDouglas T. Rossimplemented a tool based on regular expressions that is used for lexical analysis incompilerdesign.[10] Many variations of these original forms of regular expressions were used inUnix[13]programs atBell Labsin the 1970s, includinglex,sed,AWK, andexpr, and in other programs such asvi, andEmacs(which has its own, incompatible syntax and behavior). Regexes were subsequently adopted by a wide range of programs, with these early forms standardized in thePOSIX.2standard in 1992. In the 1980s, the more complicated regexes arose inPerl, which originally derived from a regex library written byHenry Spencer(1986), who later wrote an implementation forTclcalledAdvanced Regular Expressions.[16]The Tcl library is a hybridNFA/DFAimplementation with improved performance characteristics. 
Software projects that have adopted Spencer's Tcl regular expression implementation includePostgreSQL.[17]Perl later expanded on Spencer's original library to add many new features.[18]Part of the effort in the design ofRaku(formerly named Perl 6) is to improve Perl's regex integration, and to increase their scope and capabilities to allow the definition ofparsing expression grammars.[19]The result is amini-languagecalledRaku rules, which are used to define Raku grammar as well as provide a tool to programmers in the language. These rules maintain existing features of Perl 5.x regexes, but also allowBNF-style definition of arecursive descent parservia sub-rules. The use of regexes in structured information standards for document and database modeling started in the 1960s and expanded in the 1980s when industry standards likeISO SGML(precursored by ANSI "GCA 101-1983") consolidated. The kernel of thestructure specification languagestandards consists of regexes. Its use is evident in theDTDelement group syntax. Prior to the use of regular expressions, many search languages allowed simple wildcards, for example "*" to match any sequence of characters, and "?" to match a single character. Relics of this can be found today in theglobsyntax for filenames, and in theSQLLIKEoperator. Starting in 1997,Philip HazeldevelopedPCRE(Perl Compatible Regular Expressions), which attempts to closely mimic Perl's regex functionality and is used by many modern tools includingPHPandApache HTTP Server.[20] Today, regexes are widely supported in programming languages, text processing programs (particularlylexers), advanced text editors, and some other programs. Regex support is part of thestandard libraryof many programming languages, includingJavaandPython, and is built into the syntax of others, including Perl andECMAScript. In the late 2010s, several companies started to offer hardware,FPGA,[21]GPU[22]implementations ofPCREcompatible regex engines that are faster compared toCPUimplementations. The phraseregular expressions, orregexes, is often used to mean the specific, standard textual syntax for representing patterns for matching text, as distinct from the mathematical notation described below. Each character in a regular expression (that is, each character in the string describing its pattern) is either ametacharacter, having a special meaning, or a regular character that has a literal meaning. For example, in the regexb., 'b' is a literal character that matches just 'b', while '.' is a metacharacter that matches every character except a newline. Therefore, this regex matches, for example, 'b%', or 'bx', or 'b5'. Together, metacharacters and literal characters can be used to identify text of a given pattern or process a number of instances of it. Pattern matches may vary from a precise equality to a very general similarity, as controlled by the metacharacters. For example,.is a very general pattern,[a-z](match all lower case letters from 'a' to 'z') is less general andbis a precise pattern (matches just 'b'). The metacharacter syntax is designed specifically to represent prescribed targets in a concise and flexible way to direct the automation of text processing of a variety of input data, in a form easy to type using a standardASCIIkeyboard. 
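The distinction between literal characters and metacharacters can be seen directly in any regex-capable language; a minimal sketch in Python, using the patterns quoted above (the test strings are illustrative):

```python
import re

# 'b' is a literal character; '.' is a metacharacter matching any character
# except a newline, so "b." matches "b%", "bx", or "b5" but not "b\n".
pattern = re.compile(r"b.")
for text in ["b%", "bx", "b5", "b\n", "a7"]:
    print(repr(text), bool(pattern.fullmatch(text)))

# "[a-z]" is less general than "." and "b" alone is a precise pattern.
print(bool(re.fullmatch(r"[a-z]", "q")))  # True
print(bool(re.fullmatch(r"b", "b")))      # True
```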
A very simple case of a regular expression in this syntax is to locate a word spelled two different ways in atext editor, the regular expressionseriali[sz]ematches both "serialise" and "serialize".Wildcard charactersalso achieve this, but are more limited in what they can pattern, as they have fewer metacharacters and a simple language-base. The usual context of wildcard characters is inglobbingsimilar names in a list of files, whereas regexes are usually employed in applications that pattern-match text strings in general. For example, the regex^[ \t]+|[ \t]+$matches excess whitespace at the beginning or end of a line. An advanced regular expression that matches any numeral is[+-]?(\d+(\.\d*)?|\.\d+)([eE][+-]?\d+)?. Aregex processortranslates a regular expression in the above syntax into an internal representation that can be executed and matched against astringrepresenting the text being searched in. One possible approach is theThompson's construction algorithmto construct anondeterministic finite automaton(NFA), which is thenmade deterministicand the resultingdeterministic finite automaton(DFA) is run on the target text string to recognize substrings that match the regular expression. The picture shows the NFA schemeN(s*)obtained from the regular expressions*, wheresdenotes a simpler regular expression in turn, which has already beenrecursivelytranslated to the NFAN(s). A regular expression, often called apattern, specifies asetof strings required for a particular purpose. A simple way to specify a finite set of strings is to list itselementsor members. However, there are often more concise ways: for example, the set containing the three strings "Handel", "Händel", and "Haendel" can be specified by the patternH(ä|ae?)ndel; we say that this patternmatcheseach of the three strings. However, there can be many ways to write a regular expression for the same set of strings: for example,(Hän|Han|Haen)delalso specifies the same set of three strings in this example. Most formalisms provide the following operations to construct regular expressions. These constructions can be combined to form arbitrarily complex expressions, much like one can construct arithmetical expressions from numbers and the operations +, −, ×, and ÷. The precisesyntaxfor regular expressions varies among tools and with context; more detail is given in§ Syntax. Regular expressions describeregular languagesinformal language theory. They have the same expressive power asregular grammars. Regular expressions consist of constants, which denote sets of strings, and operator symbols, which denote operations over these sets. The following definition is standard, and found as such in most textbooks on formal language theory.[24][25]Given a finitealphabetΣ, the following constants are defined as regular expressions: Given regular expressions R and S, the following operations over them are defined to produce regular expressions: To avoid parentheses, it is assumed that the Kleene star has the highest priority followed by concatenation, then alternation. If there is no ambiguity, then parentheses may be omitted. For example,(ab)ccan be written asabc, anda|(b(c*))can be written asa|bc*. Many textbooks use the symbols ∪, +, or ∨ for alternation instead of the vertical bar. Examples: The formal definition of regular expressions is minimal on purpose, and avoids defining?and+—these can be expressed as follows:a+=aa*, anda?=(a|ε). 
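A small sketch exercising the patterns quoted above (Python's re module is used here purely for illustration):

```python
import re

# Both spellings are matched by one pattern.
assert re.fullmatch(r"seriali[sz]e", "serialise")
assert re.fullmatch(r"seriali[sz]e", "serialize")

# Strip excess whitespace at the beginning or end of a line.
print(re.sub(r"^[ \t]+|[ \t]+$", "", "  padded text\t"))  # -> "padded text"

# Match any numeral with optional sign, fraction, and exponent.
numeral = re.compile(r"[+-]?(\d+(\.\d*)?|\.\d+)([eE][+-]?\d+)?")
for s in ["42", "-3.14", ".5", "6.02e23"]:
    assert numeral.fullmatch(s)

# Two different expressions denoting the same set of three strings.
for name in ["Handel", "Händel", "Haendel"]:
    assert re.fullmatch(r"H(ä|ae?)ndel", name)
    assert re.fullmatch(r"(Hän|Han|Haen)del", name)
```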
Sometimes the complement operator is added, to give a generalized regular expression; here R^c matches all strings over Σ* that do not match R. In principle, the complement operator is redundant, because it does not grant any more expressive power. However, it can make a regular expression much more concise—eliminating a single complement operator can cause a double exponential blow-up of its length.[26][27][28] Regular expressions in this sense can express the regular languages, exactly the class of languages accepted by deterministic finite automata. There is, however, a significant difference in compactness. Some classes of regular languages can only be described by deterministic finite automata whose size grows exponentially in the size of the shortest equivalent regular expressions. The standard example here is the languages Lk consisting of all strings over the alphabet {a,b} whose kth-from-last letter equals a. On the one hand, a regular expression describing L4 is given by (a|b)*a(a|b)(a|b)(a|b). Generalizing this pattern to Lk gives the expression (a|b)*a(a|b)···(a|b) with k−1 trailing copies of (a|b). On the other hand, it is known that every deterministic finite automaton accepting the language Lk must have at least 2^k states. Luckily, there is a simple mapping from regular expressions to the more general nondeterministic finite automata (NFAs) that does not lead to such a blowup in size; for this reason NFAs are often used as alternative representations of regular languages. NFAs are a simple variation of the type-3 grammars of the Chomsky hierarchy.[24] In the opposite direction, there are many languages easily described by a DFA that are not easily described by a regular expression. For instance, determining the validity of a given ISBN requires computing the modulus of the integer base 11, and can be easily implemented with an 11-state DFA. However, converting it to a regular expression results in a 2.14-megabyte file.[29] Given a regular expression, Thompson's construction algorithm computes an equivalent nondeterministic finite automaton. A conversion in the opposite direction is achieved by Kleene's algorithm. Finally, many real-world "regular expression" engines implement features that cannot be described by the regular expressions in the sense of formal language theory; rather, they implement regexes. See below for more on this. As seen in many of the examples above, there is more than one way to construct a regular expression to achieve the same results. It is possible to write an algorithm that, for two given regular expressions, decides whether the described languages are equal; the algorithm reduces each expression to a minimal deterministic finite state machine, and determines whether they are isomorphic (equivalent). Algebraic laws for regular expressions can be obtained using a method by Gischer which is best explained along an example: in order to check whether (X+Y)* and (X*Y*)* denote the same regular language, for all regular expressions X, Y, it is necessary and sufficient to check whether the particular regular expressions (a+b)* and (a*b*)* denote the same language over the alphabet Σ = {a,b}. More generally, an equation E = F between regular-expression terms with variables holds if, and only if, its instantiation with different variables replaced by different symbol constants holds.[30][31] Every regular expression can be written solely in terms of the Kleene star and set unions over finite words. This is a surprisingly difficult problem.
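To make the Lk example concrete, a short sketch (the test strings and the choice of k are illustrative; the {n} quantifier is the abbreviation mentioned below):

```python
import re

def l_k_pattern(k: int) -> str:
    """Regex for L_k: strings over {a, b} whose k-th-from-last letter is 'a'.
    The expression grows only linearly in k, whereas any DFA accepting L_k
    needs at least 2**k states, since it must remember the last k letters."""
    return r"(?:a|b)*a(?:a|b){%d}" % (k - 1)

l4 = re.compile(l_k_pattern(4))
print(bool(l4.fullmatch("babbaabbb")))  # True: the 4th-from-last letter is 'a'
print(bool(l4.fullmatch("babbabbbb")))  # False: the 4th-from-last letter is 'b'
```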
As simple as the regular expressions are, there is no method to systematically rewrite them to some normal form. The lack of axiom in the past led to thestar height problem. In 1991,Dexter Kozenaxiomatized regular expressions as aKleene algebra, using equational andHorn clauseaxioms.[32]Already in 1964, Redko had proved that no finite set of purely equational axioms can characterize the algebra of regular languages.[33] A regexpatternmatches a targetstring. The pattern is composed of a sequence ofatoms. An atom is a single point within the regex pattern which it tries to match to the target string. The simplest atom is a literal, but grouping parts of the pattern to match an atom will require using( )as metacharacters. Metacharacters help form:atoms;quantifierstelling how many atoms (and whether it is agreedyquantifieror not); a logical OR character, which offers a set of alternatives, and a logical NOT character, which negates an atom's existence; and backreferences to refer to previous atoms of a completing pattern of atoms. A match is made, not when all the atoms of the string are matched, but rather when all the pattern atoms in the regex have matched. The idea is to make a small pattern of characters stand for a large number of possible strings, rather than compiling a large list of all the literal possibilities. Depending on the regex processor there are about fourteen metacharacters, characters that may or may not have theirliteralcharacter meaning, depending on context, or whether they are "escaped", i.e. preceded by anescape sequence, in this case, the backslash\. Modern and POSIX extended regexes use metacharacters more often than their literal meaning, so to avoid "backslash-osis" orleaning toothpick syndrome, they have a metacharacter escape to a literal mode; starting out, however, they instead have the four bracketing metacharacters( )and{ }be primarily literal, and "escape" this usual meaning to become metacharacters. Common standards implement both. The usual metacharacters are{}[]()^$.|*+?and\. The usual characters that become metacharacters when escaped aredswDSWandN. When entering a regex in a programming language, they may be represented as a usual string literal, hence usually quoted; this is common in C, Java, and Python for instance, where the regexreis entered as"re". However, they are often written with slashes asdelimiters, as in/re/for the regexre. This originates ined, where/is the editor command for searching, and an expression/re/can be used to specify a range of lines (matching the pattern), which can be combined with other commands on either side, most famouslyg/re/pas ingrep("global regex print"), which is included in mostUnix-based operating systems, such asLinuxdistributions. A similar convention is used insed, where search and replace is given bys/re/replacement/and patterns can be joined with a comma to specify a range of lines as in/re1/,/re2/. This notation is particularly well known due to its use inPerl, where it forms part of the syntax distinct from normal string literals. In some cases, such as sed and Perl, alternative delimiters can be used to avoid collision with contents, and to avoid having to escape occurrences of the delimiter character in the contents. For example, in sed the commands,/,X,will replace a/with anX, using commas as delimiters. TheIEEEPOSIXstandard has three sets of compliance:BRE(Basic Regular Expressions),[34]ERE(Extended Regular Expressions), andSRE(Simple Regular Expressions). 
SRE isdeprecated,[35]in favor of BRE, as both providebackward compatibility. The subsection below covering thecharacter classesapplies to both BRE and ERE. BRE and ERE work together. ERE adds?,+, and|, and it removes the need to escape the metacharacters( )and{ }, which arerequiredin BRE. Furthermore, as long as the POSIX standard syntax for regexes is adhered to, there can be, and often is, additional syntax to serve specific (yet POSIX compliant) applications. Although POSIX.2 leaves some implementation specifics undefined, BRE and ERE provide a "standard" which has since been adopted as the default syntax of many tools, where the choice of BRE or ERE modes is usually a supported option. For example,GNUgrephas the following options: "grep -E" for ERE, and "grep -G" for BRE (the default), and "grep -P" forPerlregexes. Perl regexes have become a de facto standard, having a rich and powerful set of atomic expressions. Perl has no "basic" or "extended" levels. As in POSIX EREs,( )and{ }are treated as metacharacters unless escaped; other metacharacters are known to be literal or symbolic based on context alone. Additional functionality includeslazy matching,backreferences, named capture groups, andrecursivepatterns. In thePOSIXstandard, Basic Regular Syntax (BRE) requires that themetacharacters( )and{ }be designated\(\)and\{\}, whereas Extended Regular Syntax (ERE) does not. The-character is treated as a literal character if it is the last or the first (after the^, if present) character within the brackets:[abc-],[-abc],[^-abc]. Backslash escapes are not allowed. The]character can be included in a bracket expression if it is the first (after the^, if present) character:[]abc],[^]abc]. Examples: According to Russ Cox, the POSIX specification requires ambiguous subexpressions to be handled in a way different from Perl's. The committee replaced Perl's rules with one that is simple to explain, but the new "simple" rules are actually more complex to implement: they were incompatible with pre-existing tooling and made it essentially impossible to define a "lazy match" (see below) extension. As a result, very few programs actually implement the POSIX subexpression rules (even when they implement other parts of the POSIX syntax).[37] The meaning of metacharactersescapedwith a backslash is reversed for some characters in the POSIX Extended Regular Expression (ERE) syntax. With this syntax, a backslash causes the metacharacter to be treated as a literal character. So, for example,\( \)is now( )and\{ \}is now{ }. Additionally, support is removed for\nbackreferences and the following metacharacters are added: Examples: POSIX Extended Regular Expressions can often be used with modern Unix utilities by including thecommand lineflag-E. The character class is the most basic regex concept after a literal match. It makes one small sequence of characters match a larger set of characters. For example,[A-Z]could stand for any uppercase letter in the English alphabet, and\dcould mean any digit. Character classes apply to both POSIX levels. When specifying a range of characters, such as[a-Z](i.e. lowercaseato uppercaseZ), the computer's locale settings determine the contents by the numeric ordering of the character encoding. They could store digits in that sequence, or the ordering could beabc...zABC...Z, oraAbBcC...zZ. So the POSIX standard defines a character class, which will be known by the regex processor installed. 
Those definitions are in the following table: POSIX character classes can only be used within bracket expressions. For example,[[:upper:]ab]matches the uppercase letters and lowercase "a" and "b". An additional non-POSIX class understood by some tools is[:word:], which is usually defined as[:alnum:]plus underscore. This reflects the fact that in many programming languages these are the characters that may be used in identifiers. The editorVimfurther distinguisheswordandword-headclasses (using the notation\wand\h) since in many programming languages the characters that can begin an identifier are not the same as those that can occur in other positions: numbers are generally excluded, so an identifier would look like\h\w*or[[:alpha:]_][[:alnum:]_]*in POSIX notation. Note that what the POSIX regex standards callcharacter classesare commonly referred to asPOSIX character classesin other regex flavors which support them. With most other regex flavors, the termcharacter classis used to describe what POSIX callsbracket expressions. Because of its expressive power and (relative) ease of reading, many other utilities and programming languages have adopted syntax similar toPerl's—for example,Java,JavaScript,Julia,Python,Ruby,Qt, Microsoft's.NET Framework, andXML Schema. Some languages and tools such asBoostandPHPsupport multiple regex flavors. Perl-derivative regex implementations are not identical and usually implement a subset of features found in Perl 5.0, released in 1994. Perl sometimes does incorporate features initially found in other languages. For example, Perl 5.10 implements syntactic extensions originally developed in PCRE and Python.[38] In Python and some other implementations (e.g. Java), the three common quantifiers (*,+and?) aregreedyby default because they match as many characters as possible.[39]The regex".+"(including the double-quotes) applied to the string "Ganymede," he continued, "is the largest moon in the Solar System." matches the entire line (because the entire line begins and ends with a double-quote) instead of matching only the first part,"Ganymede,". The aforementioned quantifiers may, however, be madelazyorminimalorreluctant, matching as few characters as possible, by appending a question mark:".+?"matches only"Ganymede,".[39] In Java and Python 3.11+,[40]quantifiers may be madepossessiveby appending a plus sign, which disables backing off (in a backtracking engine), even if doing so would allow the overall match to succeed:[41]While the regex".*"applied to the string "Ganymede," he continued, "is the largest moon in the Solar System." matches the entire line, the regex".*+"doesnot match at all, because.*+consumes the entire input, including the final". Thus, possessive quantifiers are most useful with negated character classes, e.g."[^"]*+", which matches"Ganymede,"when applied to the same string. Another common extension serving the same function is atomic grouping, which disables backtracking for a parenthesized group. The typical syntax is(?>group). For example, while^(wi|w)i$matches bothwiandwii,^(?>wi|w)i$only matcheswiibecause the engine is forbidden from backtracking and so cannot try setting the group to "w" after matching "wi".[42] Possessive quantifiers are easier to implement than greedy and lazy quantifiers, and are typically more efficient at runtime.[41] IETF RFC 9485 describes "I-Regexp: An Interoperable Regular Expression Format". It specifies a limited subset of regular-expression idioms designed to be interoperable, i.e. 
produce the same effect, in a large number of regular-expression libraries. I-Regexp is also limited to matching, i.e. providing a true or false match between a regular expression and a given piece of text. Thus, it lacks advanced features such as capture groups, lookahead, and backreferences.[43] Many features found in virtually all modern regular expression libraries provide an expressive power that exceeds the regular languages. For example, many implementations allow grouping subexpressions with parentheses and recalling the value they match in the same expression (backreferences). This means that, among other things, a pattern can match strings of repeated words like "papa" or "WikiWiki", called squares in formal language theory. The pattern for these strings is (.+)\1. The language of squares is not regular, nor is it context-free, due to the pumping lemma. However, pattern matching with an unbounded number of backreferences, as supported by numerous modern tools, is still context sensitive.[44] The general problem of matching any number of backreferences is NP-complete, and the execution time for known algorithms grows exponentially with the number of backreference groups used.[45] However, many tools, libraries, and engines that provide such constructions still use the term regular expression for their patterns. This has led to a nomenclature where the term regular expression has different meanings in formal language theory and pattern matching. For this reason, some people have taken to using the term regex, regexp, or simply pattern to describe the latter. Larry Wall, author of the Perl programming language, writes in an essay about the design of Raku: "Regular expressions" […] are only marginally related to real regular expressions. Nevertheless, the term has grown with the capabilities of our pattern matching engines, so I'm not going to try to fight linguistic necessity here. I will, however, generally call them "regexes" (or "regexen", when I'm in an Anglo-Saxon mood).[19] Other features not found in descriptions of regular languages include assertions. These include the ubiquitous ^ and $, used since at least 1970,[46] as well as some more sophisticated extensions like lookaround that appeared in 1994.[47] Lookarounds define the surrounding of a match and do not spill into the match itself, a feature only relevant for the use case of string searching.[citation needed] Some of them can be simulated in a regular language by treating the surroundings as a part of the language as well.[48] The look-ahead assertions (?=...) and (?!...) have been attested since at least 1994, starting with Perl 5.[47] The lookbehind assertions (?<=...) and (?<!...) are attested since 1997 in a commit by Ilya Zakharevich to Perl 5.005.[49] There are at least three different algorithms that decide whether and how a given regex matches a string. The oldest and fastest relies on a result in formal language theory that allows every nondeterministic finite automaton (NFA) to be transformed into a deterministic finite automaton (DFA). The DFA can be constructed explicitly and then run on the input string one symbol at a time. Constructing the DFA for a regular expression of size m has a time and memory cost of O(2^m), but it can be run on a string of size n in time O(n). Note that the size of the expression is the size after abbreviations, such as numeric quantifiers, have been expanded. An alternative approach is to simulate the NFA directly, essentially building each DFA state on demand and then discarding it at the next step.
This keeps the DFA implicit and avoids the exponential construction cost, but the running cost rises to O(mn). The explicit approach is called the DFA algorithm and the implicit approach the NFA algorithm. Adding caching to the NFA algorithm is often called the "lazy DFA" algorithm, or just the DFA algorithm without making a distinction. These algorithms are fast, but using them for recalling grouped subexpressions, lazy quantification, and similar features is tricky.[50][51] Modern implementations include the re1-re2-sregex family based on Cox's code. The third algorithm is to match the pattern against the input string by backtracking. This algorithm is commonly called NFA, but this terminology can be confusing. Its running time can be exponential, which simple implementations exhibit when matching against expressions like (a|aa)*b that contain both alternation and unbounded quantification and force the algorithm to consider an exponentially increasing number of sub-cases. This behavior can cause a security problem called Regular expression Denial of Service (ReDoS). Although backtracking implementations only give an exponential guarantee in the worst case, they provide much greater flexibility and expressive power. For example, any implementation which allows the use of backreferences, or implements the various extensions introduced by Perl, must include some kind of backtracking. Some implementations try to provide the best of both algorithms by first running a fast DFA algorithm, and reverting to a potentially slower backtracking algorithm only when a backreference is encountered during the match. GNU grep (and the underlying gnulib DFA) uses such a strategy.[52] Sublinear runtime algorithms have been achieved using Boyer–Moore (BM) based algorithms and related DFA optimization techniques such as the reverse scan.[53] GNU grep, which supports a wide variety of POSIX syntaxes and extensions, uses BM for first-pass prefiltering, and then uses an implicit DFA. Wu and Manber's agrep, which implements approximate matching, combines the prefiltering into the DFA in BDM (backward DAWG matching). NR-grep's BNDM extends the BDM technique with Shift-Or bit-level parallelism.[54] A few theoretical alternatives to backtracking for backreferences exist, and their "exponents" are tamer in that they are only related to the number of backreferences, a fixed property of some regexp languages such as POSIX. One naive method that duplicates a non-backtracking NFA for each backreference has a complexity of O(n^(2k+2)) time and O(n^(2k+1)) space for a haystack of length n and k backreferences in the regexp.[55] Very recent theoretical work based on memory automata gives a tighter bound based on the "active" variable nodes used, and a polynomial possibility for some backreferenced regexps.[56] In theoretical terms, any token set can be matched by regular expressions as long as it is pre-defined. In terms of historical implementations, regexes were originally written to use ASCII characters as their token set, though regex libraries have supported numerous other character sets. Many modern regex engines offer at least some support for Unicode. In most respects it makes no difference what the character set is, but some issues do arise when extending regexes to support Unicode. Most general-purpose programming languages support regex capabilities, either natively or via libraries.
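For example, the greedy, lazy, and possessive quantifiers and the atomic groups described above can be compared directly; a minimal sketch using Python 3.11+, whose re module supports possessive quantifiers and atomic groups (the sample string is the one quoted earlier):

```python
import re  # possessive quantifiers and atomic groups require Python 3.11+

line = '"Ganymede," he continued, "is the largest moon in the Solar System."'

print(re.search(r'".+"', line).group())      # greedy: matches the entire quoted line
print(re.search(r'".+?"', line).group())     # lazy: matches only '"Ganymede,"'
print(re.search(r'".*+"', line))             # possessive: None, no match at all
print(re.search(r'"[^"]*+"', line).group())  # possessive + negated class: '"Ganymede,"'

# Atomic grouping forbids backtracking into the group.
print(bool(re.fullmatch(r"(wi|w)i", "wi")))    # True: engine retries the group as "w"
print(bool(re.fullmatch(r"(?>wi|w)i", "wi")))  # False: the group cannot be retried
print(bool(re.fullmatch(r"(?>wi|w)i", "wii"))) # True
```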
Regexes are useful in a wide variety of text processing tasks, and more generally string processing, where the data need not be textual. Common applications include data validation, data scraping (especially web scraping), data wrangling, simple parsing, the production of syntax highlighting systems, and many other tasks. Some high-end desktop publishing software has the ability to use regexes to automatically apply text styling, saving the person doing the layout from laboriously doing this by hand for anything that can be matched by a regex. For example, by defining a character style that makes text into small caps and then using the regex [A-Z]{4,} to apply that style, any word of four or more consecutive capital letters will be automatically rendered as small caps instead. While regexes would be useful on Internet search engines, processing them across the entire database could consume excessive computer resources depending on the complexity and design of the regex. Although in many cases system administrators can run regex-based queries internally, most search engines do not offer regex support to the public. Notable exceptions include Google Code Search and Exalead. However, Google Code Search was shut down in January 2012.[59] The specific syntax rules vary depending on the specific implementation, programming language, or library in use. Additionally, the functionality of regex implementations can vary between versions. Because regexes can be difficult to both explain and understand without examples, interactive websites for testing regexes are a useful resource for learning regexes by experimentation. This section describes some of the properties of regexes by way of illustration. The examples use Perl-like syntax and, unless otherwise indicated, conform to the Perl programming language, release 5.8.8, January 31, 2006; standard POSIX regular expressions are different, and other implementations may lack support for some parts of the syntax shown (e.g. basic vs. extended regex, \( \) vs. ( ), or lack of \d instead of POSIX [:digit:]). The syntax and conventions coincide with those of other programming environments as well.[60][61] Among the patterns exercised in the examples are a word-boundary expression, (^\w|\w$|\W\w|\w\W), and Unicode-aware variants, in which the Alphabetic property contains more than Latin letters and the Decimal_Number property contains more than Arab digits.[58] Regular expressions can often be created ("induced" or "learned") based on a set of example strings. This is known as the induction of regular languages and is part of the general problem of grammar induction in computational learning theory. Formally, given examples of strings in a regular language, and perhaps also given examples of strings not in that regular language, it is possible to induce a grammar for the language, i.e., a regular expression that generates that language. Not all regular languages can be induced in this way (see language identification in the limit), but many can. For example, the set of examples {1, 10, 100}, and negative set (of counterexamples) {11, 1001, 101, 0} can be used to induce the regular expression 1⋅0* (1 followed by zero or more 0s).
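A short sketch verifying the induced expression 1⋅0* against the example sets above (written here as the Python pattern 10*):

```python
import re

pattern = re.compile(r"10*")  # "1" followed by zero or more "0"s

positives = ["1", "10", "100"]
negatives = ["11", "1001", "101", "0"]

assert all(pattern.fullmatch(s) for s in positives)
assert not any(pattern.fullmatch(s) for s in negatives)
print("1 0* is consistent with the positive and negative examples")
```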
https://en.wikipedia.org/wiki/Regular_expression#Implementations_and_running_times
Hexadecimal (also known as base-16 or simply hex) is a positional numeral system that represents numbers using a radix (base) of sixteen. Unlike the decimal system, which represents numbers using ten symbols, hexadecimal uses sixteen distinct symbols, most often the symbols "0"–"9" to represent values 0 to 9 and "A"–"F" to represent values from ten to fifteen. Software developers and system designers widely use hexadecimal numbers because they provide a convenient representation of binary-coded values. Each hexadecimal digit represents four bits (binary digits), also known as a nibble (or nybble).[1] For example, an 8-bit byte is two hexadecimal digits and its value can be written as 00 to FF in hexadecimal. In mathematics, a subscript is typically used to specify the base. For example, the decimal value 711 would be expressed in hexadecimal as 2C7₁₆. In programming, several notations denote hexadecimal numbers, usually involving a prefix. The prefix 0x is used in C, which would denote this value as 0x2C7. Hexadecimal is used in the transfer encoding Base 16, in which each byte of the plain text is broken into two 4-bit values and represented by two hexadecimal digits. In most current use cases, the letters A–F or a–f represent the values 10–15, while the numerals 0–9 are used to represent their decimal values. There is no universal convention to use lowercase or uppercase, so each is prevalent or preferred in particular environments by community standards or convention; even mixed case is used. Some seven-segment displays use mixed-case 'A b C d E F' to distinguish the digits A–F from one another and from 0–9. There is some standardization of using spaces (rather than commas or another punctuation mark) to separate hex values in a long list. For instance, in a hex dump, each 8-bit byte is a 2-digit hex number, with spaces between them, while the 32-bit offset at the start is an 8-digit hex number. In contexts where the base is not clear, hexadecimal numbers can be ambiguous and confused with numbers expressed in other bases. There are several conventions for expressing values unambiguously. A numerical subscript (itself written in decimal) can give the base explicitly: 159₁₀ is decimal 159; 159₁₆ is hexadecimal 159, which equals 345₁₀. Some authors prefer a text subscript, such as 159decimal and 159hex, or 159d and 159h. Donald Knuth introduced the use of a particular typeface to represent a particular radix in his book The TeXbook.[2] Hexadecimal representations are written there in a typewriter typeface: 5A3, C1F27ED. In linear text systems, such as those used in most computer programming environments, a variety of methods have arisen, most involving a prefix such as 0x; sometimes the numbers are simply known from context to be hexadecimal. The use of the letters A through F to represent the digits above 9 was not universal in the early history of computers. Since there were no traditional numerals to represent the quantities from ten to fifteen, alphabetic letters were re-employed as a substitute. Most European languages lack non-decimal-based words for some of the numerals eleven to fifteen. Some people read hexadecimal numbers digit by digit, like a phone number, or using the NATO phonetic alphabet, the Joint Army/Navy Phonetic Alphabet, or a similar ad hoc system.
In the wake of the adoption of hexadecimal among IBM System/360 programmers, Magnuson (1968)[23] suggested a pronunciation guide that gave short names to the letters of hexadecimal – for instance, "A" was pronounced "ann", B "bet", C "chris", etc.[23] Another naming system was published online by Rogers (2007)[24] that tries to make the verbal representation distinguishable in any case, even when the actual number does not contain the digits A–F. Yet another naming system was elaborated by Babb (2015), based on a joke in Silicon Valley.[25] The system proposed by Babb was further improved by Atkins-Bittner in 2015–2016.[26] Others have proposed using the verbal Morse code conventions to express four-bit hexadecimal digits, with "dit" and "dah" representing zero and one, respectively, so that "0000" is voiced as "dit-dit-dit-dit" (....), "dah-dit-dit-dah" (-..-) voices the digit with a value of nine, and "dah-dah-dah-dah" (----) voices the hexadecimal digit for decimal 15. Systems of counting on digits have been devised for both binary and hexadecimal. Arthur C. Clarke suggested using each finger as an on/off bit, allowing finger counting from zero to 1023₁₀ on ten fingers.[27] Other systems allow counting up to FF₁₆ (255₁₀) on the fingers. The hexadecimal system can express negative numbers the same way as decimal does: −2A to represent −42₁₀, −B01D9 to represent −721369₁₀, and so on. Hexadecimal can also be used to express the exact bit patterns used in the processor, so a sequence of hexadecimal digits may represent a signed or even a floating-point value. This way, the negative number −42₁₀ can be written as FFFF FFD6 in a 32-bit CPU register (in two's complement), as C228 0000 in a 32-bit FPU register, or as C045 0000 0000 0000 in a 64-bit FPU register (in the IEEE floating-point standard). Just as decimal numbers can be represented in exponential notation, so too can hexadecimal numbers. P notation uses the letter P (or p, for "power"), whereas E (or e) serves a similar purpose in decimal E notation. The number after the P is decimal and represents the binary exponent. Increasing the exponent by 1 multiplies by 2, not 16: 20p0 = 10p1 = 8p2 = 4p3 = 2p4 = 1p5. Usually, the number is normalized so that the hexadecimal digits start with 1. (zero is usually 0 with no P). Example: 1.3DEp42 represents 1.3DE₁₆ × 2⁴². P notation is required by the IEEE 754-2008 binary floating-point standard and can be used for floating-point literals in the C99 edition of the C programming language.[28] Using the %a or %A conversion specifiers, this notation can be produced by implementations of the printf family of functions following the C99 specification[29] and the Single Unix Specification (IEEE Std 1003.1) POSIX standard.[30] Most computers manipulate binary data, but it is difficult for humans to work with a large number of digits for even a relatively small binary number. Although most humans are familiar with the base-10 system, it is much easier to map binary to hexadecimal than to decimal, because each hexadecimal digit maps to a whole number of bits (four). For example, converting 1111₂ to base ten: since each position in a binary numeral can contain either a 1 or a 0, its value is easily determined by its position from the right, giving 1111₂ = 8 + 4 + 2 + 1 = 15₁₀. With little practice, mapping 1111₂ to F₁₆ in one step becomes easy. The advantage of using hexadecimal rather than decimal increases rapidly with the size of the number. When the number becomes large, conversion to decimal is very tedious.
However, when mapping to hexadecimal, it is trivial to regard the binary string as 4-digit groups and map each to a single hexadecimal digit.[31] Converting a binary number to decimal requires mapping each digit to its decimal place value and adding the results, whereas in the conversion to hexadecimal each group of four binary digits can be considered independently and converted directly. The conversion from hexadecimal to binary is equally direct.[31] Although quaternary (base 4) is little used, it can easily be converted to and from hexadecimal or binary. Each hexadecimal digit corresponds to a pair of quaternary digits, and each quaternary digit corresponds to a pair of binary digits. For example, 2 5 C₁₆ = 02 11 30₄. The octal (base 8) system can also be converted with relative ease, although not quite as trivially as with bases 2 and 4. Each octal digit corresponds to three binary digits, rather than four. Therefore, we can convert between octal and hexadecimal via an intermediate conversion to binary followed by regrouping the binary digits in groups of either three or four. As with all bases, there is a simple algorithm for converting a representation of a number to hexadecimal by doing integer division and remainder operations in the source base. In theory, this is possible from any base, but for most humans only decimal, and for most computers only binary (which can be converted by far more efficient methods), can be easily handled with this method. Let d be the number to represent in hexadecimal, and let the series hᵢhᵢ₋₁...h₂h₁ be the hexadecimal digits representing the number. At each step, d is divided by 16: the remainder gives the next digit h₁, h₂, ... (from least to most significant) and the quotient becomes the new d, until d reaches zero; "16" may be replaced with any other desired base. A short implementation of this algorithm, converting a number to its hexadecimal string representation, is given in the sketch after this passage; its purpose is purely to illustrate the algorithm, and to work with data seriously it is much more advisable to work with bitwise operators. It is also possible to make the conversion by assigning each place in the source base the hexadecimal representation of its place value — before carrying out multiplication and addition to get the final representation. For example, to convert the number B3AD to decimal, one can split the hexadecimal number into its digits: B (11₁₀), 3 (3₁₀), A (10₁₀) and D (13₁₀), and then get the final result by multiplying each decimal representation by 16^p (p being the corresponding hex digit position, counting from right to left, beginning with 0). In this case, we have that B3AD = (11 × 16³) + (3 × 16²) + (10 × 16¹) + (13 × 16⁰), which is 45997 in base 10. Many computer systems provide a calculator utility capable of performing conversions between the various radices, frequently including hexadecimal. In Microsoft Windows, the Calculator, in its Programmer mode, allows conversions between hexadecimal and other common programming bases. Elementary operations such as division can be carried out indirectly through conversion to an alternate numeral system, such as the commonly used decimal system or the binary system, where each hex digit corresponds to four binary digits. Alternatively, one can also perform elementary operations directly within the hex system itself — by relying on its addition/multiplication tables and its corresponding standard algorithms such as long division and the traditional subtraction algorithm.
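A minimal sketch of this repeated division-and-remainder conversion (the original presentation uses JavaScript; Python is used here, and the helper name is illustrative):

```python
DIGITS = "0123456789ABCDEF"

def to_hex(d: int) -> str:
    """Convert a non-negative integer to a hexadecimal string by repeated
    integer division by 16; remainders become digits, least significant first."""
    if d == 0:
        return "0"
    digits = []
    while d > 0:
        d, remainder = divmod(d, 16)
        digits.append(DIGITS[remainder])
    return "".join(reversed(digits))

print(to_hex(711))    # 2C7
print(to_hex(45997))  # B3AD
# Python's built-ins give the same results: hex(711) == '0x2c7', int('2C7', 16) == 711.
```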
As with other numeral systems, the hexadecimal system can be used to represent rational numbers, although repeating expansions are common since sixteen (10₁₆) has only a single prime factor: two. For any base, 0.1 (or "1/10") is always equivalent to one divided by the representation of that base value in its own number system. Thus, whether dividing one by two for binary or dividing one by sixteen for hexadecimal, both of these fractions are written as 0.1. Because the radix 16 is a perfect square (4²), fractions expressed in hexadecimal have an odd period much more often than decimal ones, and there are no cyclic numbers (other than trivial single digits). Recurring digits are exhibited when the denominator in lowest terms has a prime factor not found in the radix; thus, when using hexadecimal notation, all fractions with denominators that are not a power of two result in an infinite string of recurring digits (such as thirds and fifths). This makes hexadecimal (and binary) less convenient than decimal for representing rational numbers, since a larger proportion lies outside its range of finite representation. All rational numbers finitely representable in hexadecimal are also finitely representable in decimal, duodecimal and sexagesimal: that is, any hexadecimal number with a finite number of digits also has a finite number of digits when expressed in those other bases. Conversely, only a fraction of those finitely representable in the latter bases are finitely representable in hexadecimal. For example, decimal 0.1 corresponds to the infinite recurring representation 0.19999…₁₆ (with the 9 recurring) in hexadecimal. However, hexadecimal is more efficient than duodecimal and sexagesimal for representing fractions with powers of two in the denominator. For example, 0.0625₁₀ (one-sixteenth) is equivalent to 0.1₁₆, 0.09₁₂, and 0;3,45₆₀. Powers of two have very simple expansions in hexadecimal: the digits 1, 2, 4 and 8 in turn, shifted one hexadecimal place to the left after every fourth power. The traditional Chinese units of measurement were base-16. For example, one jīn (斤) in the old system equals sixteen taels. The suanpan (Chinese abacus) can be used to perform hexadecimal calculations such as additions and subtractions.[32] As with the duodecimal system, there have been occasional attempts to promote hexadecimal as the preferred numeral system. These attempts often propose specific pronunciation and symbols for the individual numerals.[33] Some proposals unify standard measures so that they are multiples of 16.[34][35] An early such proposal was put forward by John W. Nystrom in Project of a New System of Arithmetic, Weight, Measure and Coins: Proposed to be called the Tonal System, with Sixteen to the Base, published in 1862.[36] Nystrom among other things suggested hexadecimal time, which subdivides a day by 16, so that there are 16 "hours" (or "10 tims", pronounced tontim) in a day.[37] The word hexadecimal is first recorded in 1952.[38] It is macaronic in the sense that it combines Greek ἕξ (hex) "six" with Latinate -decimal. The all-Latin alternative sexadecimal (compare the word sexagesimal for base 60) is older, and saw at least occasional use from the late 19th century.[39] It was still in use in the 1950s in Bendix documentation. Schwartzman (1994) argues that use of sexadecimal may have been avoided because of its suggestive abbreviation to sex.[40] Many western languages since the 1960s have adopted terms equivalent in formation to hexadecimal (e.g.
Frenchhexadécimal, Italianesadecimale, Romanianhexazecimal, Serbianхексадецимални, etc.) but others have introduced terms which substitute native words for "sixteen" (e.g. Greek δεκαεξαδικός, Icelandicsextándakerfi, Russianшестнадцатеричнойetc.) Terminology and notation did not become settled until the end of the 1960s. In 1969,Donald Knuthargued that the etymologically correct term would besenidenary, or possiblysedenary, a Latinate term intended to convey "grouped by 16" modelled onbinary,ternary,quaternary, etc. According to Knuth's argument, the correct terms fordecimalandoctalarithmetic would bedenaryandoctonary, respectively.[41]Alfred B. Taylor usedsenidenaryin his mid-1800s work on alternative number bases, although he rejected base 16 because of its "incommodious number of digits".[42][43] The now-current notation using the letters A to F establishes itself as the de facto standard beginning in 1966, in the wake of the publication of theFortran IVmanual forIBM System/360, which (unlike earlier variants of Fortran) recognizes a standard for entering hexadecimal constants.[44]As noted above, alternative notations were used byNEC(1960) and The Pacific Data Systems 1020 (1964). The standard adopted by IBM seems to have become widely adopted by 1968, when Bruce Alan Martin in his letter to the editor of theCACMcomplains that With the ridiculous choice of letters A, B, C, D, E, F as hexadecimal number symbols adding to already troublesome problems of distinguishing octal (or hex) numbers from decimal numbers (or variable names), the time is overripe for reconsideration of our number symbols. This should have been done before poor choices gelled into a de facto standard! Martin's argument was that use of numerals 0 to 9 in nondecimal numbers "imply to us a base-ten place-value scheme": "Why not use entirely new symbols (and names) for the seven or fifteen nonzero digits needed in octal or hex. Even use of the letters A through P would be an improvement, but entirely new symbols could reflect the binary nature of the system".[19]He also argued that "re-using alphabetic letters for numerical digits represents a gigantic backward step from the invention of distinct, non-alphabetic glyphs for numerals sixteen centuries ago" (asBrahmi numerals, and later in aHindu–Arabic numeral system), and that the recentASCIIstandards (ASA X3.4-1963 and USAS X3.4-1968) "should have preserved six code table positions following the ten decimal digits -- rather than needlessly filling these with punctuation characters" (":;<=>?") that might have been placed elsewhere among the 128 available positions. Base16(as a proper name without a space) can also refer to abinary to text encodingbelonging to the same family asBase32,Base58, andBase64. In this case, data is broken into 4-bit sequences, and each value (between 0 and 15 inclusively) is encoded using one of 16 symbols from theASCIIcharacter set. Although any 16 symbols from the ASCII character set can be used, in practice, the ASCII digits "0"–"9" and the letters "A"–"F" (or the lowercase "a"–"f") are always chosen in order to align with standard written notation for hexadecimal numbers. There are several advantages of Base16 encoding: The main disadvantages of Base16 encoding are: Support for Base16 encoding is ubiquitous in modern computing. It is the basis for theW3Cstandard forURL percent encoding, where a character is replaced with a percent sign "%" and its Base16-encoded form. 
Most modern programming languages directly include support for formatting and parsing Base16-encoded numbers.
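A few such built-ins, sketched in Python with standard-library functions only (the byte string is illustrative):

```python
import urllib.parse

data = b"caf\xc3\xa9"                # UTF-8 bytes for "café"
encoded = data.hex()                 # Base16 encoding: '636166c3a9'
print(encoded)
print(bytes.fromhex(encoded))        # round-trips back to b'caf\xc3\xa9'

# URL percent encoding replaces a byte with '%' plus its Base16-encoded form.
print(urllib.parse.quote("café"))    # 'caf%C3%A9'

# Hexadecimal floating point (P notation): 0.1 has a recurring hex expansion.
print((0.1).hex())                   # '0x1.999999999999ap-4'
```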
https://en.wikipedia.org/wiki/Base16
ModelOps(modeloperationsor model operationalization), as defined byGartner, "is focused primarily on thegovernanceandlifecycle managementof a wide range of operationalizedartificial intelligence(AI) anddecision models, includingmachine learning,knowledge graphs, rules, optimization, linguistic andagent-based models" inMulti-Agent Systems.[1]"ModelOps lies at the heart of any enterprise AI strategy".[2]It orchestrates the model lifecycles of all models in production across the entire enterprise, from putting a model into production, then evaluating and updating the resulting application according to a set of governance rules, including both technical and business key performance indicators (KPI's). It grants business domain experts the capability to evaluate AI models in production, independent ofdata scientists.[3] AForbesarticle promoted ModelOps: "As enterprises scale up their AI initiatives to become a true Enterprise AI organization, having full operationalized analytics capability puts ModelOps in the center, connecting bothDataOpsandDevOps."[4] In a 2018 Gartner survey, 37% of respondents reported that they had deployed AI in some form; however, Gartner pointed out that enterprises were still far from implementing AI, citing deployment challenges.[5]Enterprises were accumulating undeployed, unused, and unrefreshed models, and manually deployed, often at a business unit level, increasing the risk exposure of the entire enterprise.[6]Independent analyst firm Forrester also covered this topic in a 2018 report on machine learning andpredictive analyticsvendors: “Data scientists regularly complain that their models are only sometimes or never deployed. A big part of the problem is organizational chaos in understanding how to apply and design models into applications. But another big part of the problem is technology. Models aren’t like software code because they need model management.”[7] In December 2018, Waldemar Hummer and Vinod Muthusamy of IBM Research AI, proposed ModelOps as "a programming model for reusable, platform-independent, and composable AI workflows" on IBM Programming Languages Day.[8]In their presentation, they noted the difference between the application development lifecycle, represented byDevOps, and the AI application lifecycle.[9] The goal for developing ModelOps was to address the gap between model deployment and model governance, ensuring that all models were running in production with strong governance, aligned with technical and business KPI's, while managing the risk. In their presentation, Hummer and Muthusamy described a programmatic solution for AI-aware staged deployment and reusable components that would enable model versions to match business apps, and which would include AI model concepts such as model monitoring, drift detection, and active learning. The solution would also address the tension between model performance and business KPI's, application and model logs, and model proxies and evolving policies. Various cloud platforms were part of the proposal. In June 2019, Hummer, Muthusamy, Thomas Rausch, Parijat Dube, and Kaoutar El Maghraoui presented a paper at the 2019 IEEE International Conference on Cloud Engineering (IC2E).[10]The paper expanded on their 2018 presentation, proposing ModelOps as a cloud-based framework and platform for end-to-end development and lifecycle management of artificial intelligence (AI) applications. 
In the abstract, they stated that the framework would show how it is possible to extend the principles of software lifecycle management to enable automation, trust, reliability, traceability, quality control, and reproducibility of AI model pipelines.[11]In March 2020, ModelOp, Inc. published the first comprehensive guide to ModelOps methodology. The objective of this publication was to provide an overview of the capabilities of ModelOps, as well as the technical and organizational requirements for implementing ModelOps practices.[12] One typical use case for ModelOps is in the financial services sector, where hundreds oftime-seriesmodels are used to focus on strict rules for bias and auditability. In these cases, model fairness and robustness are critical, meaning the models have to be fair and accurate, and they have to run reliably. ModelOps automates the model lifecycle of models in production. Such automation includes designing the model lifecycle, inclusive of technical, business and compliance KPI's and thresholds, to govern and monitor the model as it runs, monitoring the models for bias and other technical and business anomalies, and updating the model as needed without disrupting the applications. ModelOps is the dispatcher that keeps all of the trains running on time and on the right track, ensuring risk control, compliance and business performance. Another use case is the monitoring of a diabetic's blood sugar levels based on a patient's real-time data. The model that can predict hypoglycemia must be constantly refreshed with the current data, business KPI's and anomalies should be continuously monitored and must be available in a distributed environment, so the information is available on a mobile device as well as reporting to a larger system. The orchestration, governance, retraining, monitoring, and refreshing is done with ModelOps. The ModelOps process focuses on automating the governance, management and monitoring of models in production across the enterprise, enabling AI and application developers to easily plug in lifecycle capabilities (such as bias-detection, robustness and reliability, drift detection, technical, business and compliance KPI's, regulatory constraints and approval flows) for putting AI models into production as business applications. The process starts with a standard representation of candidate models for production that includes ametamodel(the model specification) with all of the component and dependent pieces that go into building the model, such as the data, the hardware and software environments, the classifiers, and code plug-ins, and most importantly, the business and compliance/risk KPI's. MLOps(machine learning operations) is a discipline that enables data scientists and IT professionals to collaborate and communicate while automating machine learning algorithms. It extends and expands on the principles ofDevOpsto support the automation of developing and deploying machine learning models and applications.[13]As a practice, MLOps involves routine machine learning (ML) models. However, the variety and uses of models have changed to include decision optimization models,optimizationmodels, andtransformational modelsthat are added to applications. 
ModelOps is an evolution of MLOps that expands its principles to include not just the routine deployment of machine learning models but also the continuous retraining, automated updating, and synchronized development and deployment of more complex machine learning models.[14]ModelOps refers to the operationalization of all AI models, including the machine learning models with which MLOps is concerned.[15]
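As a rough sketch of the kind of standard model representation described above (a metamodel bundling a model artifact with its data dependencies, runtime environment, and technical, business, and compliance KPIs with thresholds), the following Python dataclasses are illustrative assumptions only, not any particular vendor's schema:

```python
from dataclasses import dataclass, field

@dataclass
class Kpi:
    """A technical, business, or compliance KPI with a governance threshold."""
    name: str            # e.g. "auc", "bias_disparate_impact", "latency_ms" (illustrative)
    threshold: float     # value that triggers review or retraining
    direction: str       # "min" (must stay above) or "max" (must stay below)

@dataclass
class ModelSpec:
    """Hypothetical metamodel: what is needed to run and govern one model in production."""
    name: str
    version: str
    artifact_uri: str                    # where the trained model artifact is stored
    training_data: list[str]             # datasets the model depends on
    runtime: dict[str, str]              # e.g. {"python": "3.11", "sklearn": "1.4"}
    kpis: list[Kpi] = field(default_factory=list)

def needs_action(spec: ModelSpec, observed: dict[str, float]) -> list[str]:
    """Return the KPIs whose observed values violate their thresholds."""
    violations = []
    for kpi in spec.kpis:
        value = observed.get(kpi.name)
        if value is None:
            continue
        if (kpi.direction == "min" and value < kpi.threshold) or \
           (kpi.direction == "max" and value > kpi.threshold):
            violations.append(kpi.name)
    return violations
```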
https://en.wikipedia.org/wiki/ModelOps
Data analysis is the process of inspecting, cleansing, transforming, and modeling data with the goal of discovering useful information, informing conclusions, and supporting decision-making.[1] Data analysis has multiple facets and approaches, encompassing diverse techniques under a variety of names, and is used in different business, science, and social science domains.[2] In today's business world, data analysis plays a role in making decisions more scientific and helping businesses operate more effectively.[3]

Data mining is a particular data analysis technique that focuses on statistical modeling and knowledge discovery for predictive rather than purely descriptive purposes, while business intelligence covers data analysis that relies heavily on aggregation, focusing mainly on business information. In statistical applications, data analysis can be divided into descriptive statistics, exploratory data analysis (EDA), and confirmatory data analysis (CDA).[4] EDA focuses on discovering new features in the data while CDA focuses on confirming or falsifying existing hypotheses.[5] Predictive analytics focuses on the application of statistical models for predictive forecasting or classification, while text analytics applies statistical, linguistic, and structural techniques to extract and classify information from textual sources, a variety of unstructured data. All of the above are varieties of data analysis.[6]

Data analysis is a process for obtaining raw data and subsequently converting it into information useful for decision-making by users.[1] Statistician John Tukey defined data analysis in 1961 as: "Procedures for analyzing data, techniques for interpreting the results of such procedures, ways of planning the gathering of data to make its analysis easier, more precise or more accurate, and all the machinery and results of (mathematical) statistics which apply to analyzing data."[7] There are several phases, and they are iterative, in that feedback from later phases may result in additional work in earlier phases.[8]

The data is necessary as input to the analysis, which is specified based upon the requirements of those directing the analytics (or customers, who will use the finished product of the analysis).[9] The general type of entity upon which the data will be collected is referred to as an experimental unit (e.g., a person or population of people). Specific variables regarding a population (e.g., age and income) may be specified and obtained. Data may be numerical or categorical (i.e., a text label for numbers).[8] Data may be collected from a variety of sources.[10] A list of data sources is available for study and research. The requirements may be communicated by analysts to custodians of the data, such as Information Technology personnel within an organization.[11] Data collection or data gathering is the process of gathering and measuring information on targeted variables in an established system, which then enables one to answer relevant questions and evaluate outcomes.
The data may also be collected from sensors in the environment, including traffic cameras, satellites, recording devices, etc. It may also be obtained through interviews, downloads from online sources, or reading documentation.[8] Data integrationis a precursor to data analysis: Data, when initially obtained, must be processed or organized for analysis. For instance, this may involve placing data into rows and columns in a table format (known asstructured data) for further analysis, often through the use of spreadsheet(excel) or statistical software.[8] Once processed and organized, the data may be incomplete, contain duplicates, or contain errors.[12]The need fordata cleaningwill arise from problems in the way that the data is entered and stored.[12][13]Data cleaning is the process of preventing and correcting these errors. Common tasks include record matching, identifying inaccuracy of data, overall quality of existing data, deduplication, and column segmentation.[14][15] Such data problems can also be identified through a variety of analytical techniques. For example; with financial information, the totals for particular variables may be compared against separately published numbers that are believed to be reliable.[16]Unusual amounts, above or below predetermined thresholds, may also be reviewed. There are several types of data cleaning that are dependent upon the type of data in the set; this could be phone numbers, email addresses, employers, or other values.[17]Quantitative data methods for outlier detection can be used to get rid of data that appears to have a higher likelihood of being input incorrectly. Text data spell checkers can be used to lessen the amount of mistyped words. However, it is harder to tell if the words are contextually (i.e., semantically and idiomatically) correct. Once the datasets are cleaned, they can then begin to be analyzed usingexploratory data analysis. The process of data exploration may result in additional data cleaning or additional requests for data; thus, the initialization of theiterative phasesmentioned above.[18]Descriptive statistics, such as the average, median, and standard deviation, are often used to broadly characterize the data.[19][20]Data visualizationis also used, in which the analyst is able to examine the data in a graphical format in order to obtain additional insights about messages within the data.[8] Mathematical formulasormodels(also known asalgorithms), may be applied to the data in order to identify relationships among the variables; for example, checking forcorrelationand by determining whether or not there is the presence ofcausality. In general terms, models may be developed to evaluate a specific variable based on other variable(s) contained within the dataset, with someresidual errordepending on the implemented model's accuracy (e.g., Data = Model + Error).[21] Inferential statisticsutilizes techniques that measure the relationships between particular variables.[22]For example,regression analysismay be used to model whether a change in advertising (independent variable X), provides an explanation for the variation in sales (dependent variable Y), i.e. is Y a function of X? This can be described as (Y=aX+b+ error), where the model is designed such that (a) and (b) minimize the error when the model predictsYfor a given range of values ofX.[23] Adata productis a computer application that takesdata inputsand generatesoutputs, feeding them back into the environment.[24]It may be based on a model or algorithm. 
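A compact sketch of the cleaning (deduplication), descriptive-statistics, and regression steps described above, using pandas and SciPy; the advertising and sales figures are invented for illustration:

```python
import pandas as pd
from scipy import stats

# Illustrative records; real data would come from files, databases, or sensors.
df = pd.DataFrame({
    "advertising": [10, 10, 12, 15, 17, 20, 22, 25],
    "sales":       [110, 110, 118, 131, 140, 158, 160, 172],
})

# Data cleaning: remove exact duplicate rows.
df = df.drop_duplicates()

# Descriptive statistics: mean, standard deviation, quartiles, and the median.
print(df.describe())
print("median sales:", df["sales"].median())

# Inferential step: fit sales = a * advertising + b + error and inspect the fit.
fit = stats.linregress(df["advertising"], df["sales"])
print(f"a = {fit.slope:.2f}, b = {fit.intercept:.2f}, R^2 = {fit.rvalue ** 2:.3f}")
```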
An example of a data product is an application that analyzes data about customer purchase history and uses the results to recommend other purchases the customer might enjoy.[25][8]

Once data is analyzed, it may be reported in many formats to the users of the analysis to support their requirements.[27] The users may have feedback, which results in additional analysis. When determining how to communicate the results, the analyst may consider implementing a variety of data visualization techniques to help communicate the message more clearly and efficiently to the audience. Data visualization uses information displays (graphics such as tables and charts) to help communicate key messages contained in the data. Tables are a valuable tool because they enable a user to query and focus on specific numbers, while charts (e.g., bar charts or line charts) may help explain the quantitative messages contained in the data.[28] Stephen Few described eight types of quantitative messages that users may attempt to communicate from a set of data, including the associated graphs.[29][30]

Author Jonathan Koomey has recommended a series of best practices for understanding quantitative data. These include:[16] For the variables under examination, analysts typically obtain descriptive statistics, such as the mean (average), median, and standard deviation. They may also analyze the distribution of the key variables to see how the individual values cluster around the mean.[16]

McKinsey and Company named a technique for breaking down a quantitative problem into its component parts called the MECE principle. MECE means "Mutually Exclusive and Collectively Exhaustive".[36] Each layer can be broken down into its components; each of the sub-components must be mutually exclusive of each other and collectively add up to the layer above them. For example, profit by definition can be broken down into total revenue and total cost.[37]

Analysts may use robust statistical measurements to solve certain analytical problems. Hypothesis testing is used when a particular hypothesis about the true state of affairs is made by the analyst and data is gathered to determine whether that hypothesis is true or false.[38] For example, the hypothesis might be that "Unemployment has no effect on inflation", which relates to an economics concept called the Phillips Curve.[39] Hypothesis testing involves considering the likelihood of Type I and type II errors, which relate to whether the data supports accepting or rejecting the hypothesis.[40]

Regression analysis may be used when the analyst is trying to determine the extent to which independent variable X affects dependent variable Y (e.g., "To what extent do changes in the unemployment rate (X) affect the inflation rate (Y)?").[41]

Necessary condition analysis (NCA) may be used when the analyst is trying to determine the extent to which independent variable X allows variable Y (e.g., "To what extent is a certain unemployment rate (X) necessary for a certain inflation rate (Y)?").[41] Whereas (multiple) regression analysis uses additive logic where each X-variable can produce the outcome and the X's can compensate for each other (they are sufficient but not necessary),[42] necessary condition analysis (NCA) uses necessity logic, where one or more X-variables allow the outcome to exist, but may not produce it (they are necessary but not sufficient).
Each single necessary condition must be present and compensation is not possible.[43] Users may have particular data points of interest within a data set, as opposed to the general messaging outlined above. Such low-level user analytic activities are presented in the following table. The taxonomy can also be organized by three poles of activities: retrieving values, finding data points, and arranging data points.[44][45][46] - How long is the movie Gone with the Wind? - What comedies have won awards? - Which funds underperformed the SP-500? - What is the gross income of all stores combined? - How many manufacturers of cars are there? - What director/film has won the most awards? - What Marvel Studios film has the most recent release date? - Rank the cereals by calories. - What is the range of car horsepowers? - What actresses are in the data set? - What is the age distribution of shoppers? - Are there any outliers in protein? - Is there a cluster of typical film lengths? - Is there a correlation between country of origin and MPG? - Do different genders have a preferred payment method? - Is there a trend of increasing film length over the years? Barriers to effective analysis may exist among the analysts performing the data analysis or among the audience. Distinguishing fact from opinion, cognitive biases, and innumeracy are all challenges to sound data analysis.[47] You are entitled to your own opinion, but you are not entitled to your own facts. Effective analysis requires obtaining relevantfactsto answer questions, support a conclusion or formalopinion, or testhypotheses.[48]Facts by definition are irrefutable, meaning that any person involved in the analysis should be able to agree upon them. The auditor of a public company must arrive at a formal opinion on whether financial statements of publicly traded corporations are "fairly stated, in all material respects".[49]This requires extensive analysis of factual data and evidence to support their opinion. There are a variety ofcognitive biasesthat can adversely affect analysis. For example,confirmation biasis the tendency to search for or interpret information in a way that confirms one's preconceptions.[50]In addition, individuals may discredit information that does not support their views.[51] Analysts may be trained specifically to be aware of these biases and how to overcome them.[52]In his bookPsychology of Intelligence Analysis, retired CIA analystRichards Heuerwrote that analysts should clearly delineate their assumptions and chains of inference and specify the degree and source of the uncertainty involved in the conclusions.[53]He emphasized procedures to help surface and debate alternative points of view.[54] Effective analysts are generally adept with a variety of numerical techniques. However, audiences may not have such literacy with numbers ornumeracy; they are said to be innumerate.[55]Persons communicating the data may also be attempting to mislead or misinform, deliberately using bad numerical techniques.[56] For example, whether a number is rising or falling may not be the key factor. More important may be the number relative to another number, such as the size of government revenue or spending relative to the size of the economy (GDP) or the amount of cost relative to revenue in corporate financial statements.[57]This numerical technique is referred to as normalization[16]or common-sizing. There are many such techniques employed by analysts, whether adjusting for inflation (i.e., comparing real vs. 
nominal data) or considering population increases, demographics, etc.[58]

Analysts may also analyze data under different assumptions or scenarios. For example, when analysts perform financial statement analysis, they will often recast the financial statements under different assumptions to help arrive at an estimate of future cash flow, which they then discount to present value based on some interest rate, to determine the valuation of the company or its stock.[59] Similarly, the CBO analyzes the effects of various policy options on the government's revenue, outlays and deficits, creating alternative future scenarios for key measures.[60]

Analytics is the "extensive use of data, statistical and quantitative analysis, explanatory and predictive models, and fact-based management to drive decisions and actions." It is a subset of business intelligence, which is a set of technologies and processes that uses data to understand and analyze business performance to drive decision-making.[61]

In education, most educators have access to a data system for the purpose of analyzing student data.[62] These data systems present data to educators in an over-the-counter data format (embedding labels, supplemental documentation, and a help system and making key package/display and content decisions) to improve the accuracy of educators' data analyses.[63]

The most important distinction between the initial data analysis phase and the main analysis phase is that during initial data analysis one refrains from any analysis that is aimed at answering the original research question. The initial data analysis phase is guided by the following four questions:[65]

The quality of the data should be checked as early as possible. Data quality can be assessed in several ways, using different types of analysis: frequency counts, descriptive statistics (mean, standard deviation, median), and normality (skewness, kurtosis, frequency histograms); where values are missing, imputation is needed.[66]

The quality of the measurement instruments should only be checked during the initial data analysis phase when this is not the focus or research question of the study.[70] One should check whether the structure of the measurement instruments corresponds to the structure reported in the literature. There are two ways to assess measurement quality: After assessing the quality of the data and of the measurements, one might decide to impute missing data, or to perform initial transformations of one or more variables, although this can also be done during the main analysis phase.[73] Possible transformations of variables are:[74]

One should check the success of the randomization procedure, for instance by checking whether background and substantive variables are equally distributed within and across groups. If the study did not need or use a randomization procedure, one should check the success of the non-random sampling, for instance by checking whether all subgroups of the population of interest are represented in the sample.[75] Other possible data distortions that should be checked are: In any report or article, the structure of the sample must be accurately described.
It is especially important to exactly determine the size of the subgroup when subgroup analyses will be performed during the main analysis phase.[77] The characteristics of the data sample can be assessed by looking at: During the final stage, the findings of the initial data analysis are documented, and necessary, preferable, and possible corrective actions are taken. Also, the original plan for the main data analyses can and should be specified in more detail or rewritten. In order to do this, several decisions about the main data analyses can and should be made: Several analyses can be used during the initial data analysis phase:[80] It is important to take the measurement levels of the variables into account for the analyses, as special statistical techniques are available for each level:[81]

Nonlinear analysis is often necessary when the data is recorded from a nonlinear system. Nonlinear systems can exhibit complex dynamic effects including bifurcations, chaos, harmonics and subharmonics that cannot be analyzed using simple linear methods. Nonlinear data analysis is closely related to nonlinear system identification.[82]

In the main analysis phase, analyses aimed at answering the research question are performed as well as any other relevant analysis needed to write the first draft of the research report.[83] In the main analysis phase, either an exploratory or confirmatory approach can be adopted. Usually the approach is decided before data is collected.[84] In an exploratory analysis no clear hypothesis is stated before analysing the data, and the data is searched for models that describe the data well.[85] In a confirmatory analysis, clear hypotheses about the data are tested.[86]

Exploratory data analysis should be interpreted carefully. When testing multiple models at once there is a high chance of finding at least one of them to be significant, but this can be due to a type 1 error. It is important to always adjust the significance level when testing multiple models with, for example, a Bonferroni correction (see the sketch below).[87] Also, one should not follow up an exploratory analysis with a confirmatory analysis in the same dataset.[88] An exploratory analysis is used to find ideas for a theory, but not to test that theory as well.[88] When a model is found through exploratory analysis in a dataset, then following up that analysis with a confirmatory analysis in the same dataset could simply mean that the results of the confirmatory analysis are due to the same type 1 error that resulted in the exploratory model in the first place.[88] The confirmatory analysis therefore will not be more informative than the original exploratory analysis.[89]

It is important to obtain some indication about how generalizable the results are.[90] While this is often difficult to check, one can look at the stability of the results. Are the results reliable and reproducible? There are two main ways of doing that. Free software for data analysis includes:

The typical data analysis workflow involves collecting data, running analyses, creating visualizations, and writing reports. However, this workflow presents challenges, including a separation between analysis scripts and data, as well as a gap between analysis and documentation. Often, the correct order of running scripts is only described informally or resides in the data scientist's memory. The potential for losing this information creates issues for reproducibility. To address these challenges, it is essential to document analysis script content and workflow.
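A minimal sketch of the Bonferroni adjustment mentioned above: when m models or hypotheses are tested on the same data at overall level α, each individual test is carried out at level α/m. The p-values below are invented for illustration:

```python
# Bonferroni correction: test each of m hypotheses at level alpha / m
# so that the family-wise error rate stays at most alpha.
alpha = 0.05
p_values = [0.003, 0.020, 0.041, 0.250]   # invented p-values from m = 4 exploratory models

m = len(p_values)
per_test_level = alpha / m
for i, p in enumerate(p_values, start=1):
    verdict = "significant" if p < per_test_level else "not significant"
    print(f"model {i}: p = {p:.3f} -> {verdict} at corrected level {per_test_level:.4f}")
```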
Additionally, overall documentation is crucial, as well as providing reports that are understandable by both machines and humans, and ensuring accurate representation of the analysis workflow even as scripts evolve.[97] Different companies and organizations hold data analysis contests to encourage researchers to utilize their data or to solve a particular question using data analysis. A few examples of well-known international data analysis contests are:
https://en.wikipedia.org/wiki/Data_analysis
Push technology,also known asserver Push,refers to a communication method, where the communication is initiated by aserverrather than a client. This approach is different from the "pull" method where the communication is initiated by a client.[1] In push technology, clients can express their preferences for certain types of information or data, typically through a process known as thepublish–subscribemodel. In this model, a client "subscribes" to specific information channels hosted by a server. When new content becomes available on these channels, the server automatically sends, or "pushes," this information to the subscribed client. Under certain conditions, such as restrictive security policies that block incomingHTTPrequests, push technology is sometimes simulated using a technique calledpolling.In these cases, the client periodically checks with the server to see if new information is available, rather than receiving automatic updates. Synchronous conferencingandinstant messagingare examples of push services. Chat messages and sometimesfilesare pushed to the user as soon as they are received by the messaging service. Both decentralizedpeer-to-peerprograms (such asWASTE) and centralized programs (such asIRCorXMPP) allow pushing files, which means the sender initiates the data transfer rather than the recipient. Emailmay also be a push system:SMTPis a push protocol (seePush e-mail). However, the last step—from mail server to desktop computer—typically uses a pull protocol likePOP3orIMAP. Modern e-mail clients make this step seem instantaneous by repeatedlypollingthe mail server, frequently checking it for new mail. The IMAP protocol includes theIDLEcommand, which allows the server to tell the client when new messages arrive. The originalBlackBerrywas the first popular example of push-email in a wireless context.[citation needed] Another example is thePointCast Network, which was widely covered in the 1990s. It delivered news and stock market data as a screensaver. BothNetscapeandMicrosoftintegrated push technology through theChannel Definition Format(CDF) into their software at the height of thebrowser wars, but it was never very popular. CDF faded away and was removed from the browsers of the time, replaced in the 2000s withRSS(a pull system.) Other uses of push-enabledweb applicationsinclude software updates distribution ("push updates"), market data distribution (stock tickers), online chat/messaging systems (webchat), auctions, online betting and gaming, sport results, monitoring consoles, andsensor networkmonitoring. The Web push proposal of theInternet Engineering Task Forceis a simple protocol usingHTTP version 2to deliver real-time events, such as incoming calls or messages, which can be delivered (or "pushed") in a timely fashion. The protocol consolidates allreal-timeevents into a single session which ensures more efficient use of network and radio resources. A single service consolidates all events, distributing those events to applications as they arrive. This requires just one session, avoiding duplicated overhead costs.[2] Web Notifications are part of theW3Cstandard and define anAPIfor end-user notifications. 
A notification allows alerting the user of an event, such as the delivery of an email, outside the context of a web page.[3]As part of this standard, Push API is fully implemented inChrome,Firefox, andEdge, and partially implemented inSafarias of February 2023[update].[4][5] HTTP server push (also known as HTTP streaming) is a mechanism for sending unsolicited (asynchronous) data from aweb serverto aweb browser. HTTP server push can be achieved through any of several mechanisms. As a part ofHTML5theWeb SocketAPI allows a web server and client to communicate over afull-duplexTCP connection. Generally, the web server does not terminate a connection after response data has been served to a client. The web server leaves the connection open so that if an event occurs (for example, a change in internal data which needs to be reported to one or multiple clients), it can be sent out immediately; otherwise, the event would have to be queued until the client's next request is received. Most web servers offer this functionality viaCGI(e.g., Non-Parsed Headers scripts onApache HTTP Server). The underlying mechanism for this approach ischunked transfer encoding. Another mechanism is related to a specialMIMEtype calledmultipart/x-mixed-replace, which was introduced byNetscapein 1995. Web browsers interpret this as a document that changes whenever the server pushes a new version to the client.[6]It is still supported byFirefox,Opera, andSafaritoday, but it is ignored byInternet Explorer[7]and is only partially supported byChrome.[8]It can be applied toHTMLdocuments, and also for streaming images inwebcamapplications. TheWHATWGWeb Applications 1.0 proposal[9]includes a mechanism to push content to the client. On September 1, 2006, the Opera web browser implemented this new experimental system in a feature called "Server-Sent Events".[10][11]It is now part of theHTML5standard.[12] In this technique, the server takes advantage ofpersistent HTTP connections, leaving the response perpetually "open" (i.e., the server never terminates the response), effectively fooling the browser to remain in "loading" mode after the initial page load could be considered complete. The server then periodically sends snippets ofJavaScriptto update the content of the page, thereby achieving push capability. By using this technique, the client doesn't needJava appletsor other plug-ins in order to keep an open connection to the server; the client is automatically notified about new events, pushed by the server.[13][14]One serious drawback to this method, however, is the lack of control the server has over the browser timing out; a page refresh is always necessary if a timeout occurs on the browser end. Long polling is itself not a true push; long polling is a variation of the traditional polling technique, but it allows emulating a push mechanism under circumstances where a real push is not possible, such as sites with security policies that require rejection of incoming HTTP requests. With long polling, the client requests to get more information from the server exactly as in normal polling, but with the expectation that the server may not respond immediately. If the server has no new information for the client when the poll is received, then instead of sending an empty response, the server holds the request open and waits for response information to become available. Once it does have new information, the server immediately sends an HTTP response to the client, completing the open HTTP request. 
Upon receipt of the server response, the client often immediately issues another server request. In this way the usual response latency (the time between when the information first becomes available and the next client request) otherwise associated with polling clients is eliminated.[15] For example,BOSHis a popular, long-lived HTTP technique used as a long-polling alternative to a continuous TCP connection when such a connection is difficult or impossible to employ directly (e.g., in a web browser);[16]it is also an underlying technology in theXMPP, which Apple uses for its iCloud push support. This technique, used bychatapplications, makes use of theXML Socketobject in a single-pixelAdobe Flashmovie. Under the control ofJavaScript, the client establishes aTCP connectionto aunidirectionalrelay on the server. The relay server does not read anything from thissocket; instead, it immediately sends the client aunique identifier. Next, the client makes anHTTP requestto the web server, including this identifier with it. The web application can then push messages addressed to the client to a local interface of the relay server, which relays them over the Flash socket. The advantage of this approach is that it appreciates the natural read-write asymmetry that is typical of many web applications, including chat, and as a consequence it offers high efficiency. Since it does not accept data on outgoing sockets, the relay server does not need to poll outgoing TCP connectionsat all, making it possible to hold open tens of thousands of concurrent connections. In this model, the limit to scale is the TCP stack of the underlying server operating system. In services such ascloud computing, to increase reliability and availability of data, it is usually pushed (replicated) to several machines. For example, the Hadoop Distributed File System (HDFS) makes 2 extra copies of any object stored. RGDD focuses on efficiently casting an object from one location to many while saving bandwidth by sending minimal number of copies (only one in the best case) of the object over any link across the network. For example, Datacast[17]is a scheme for delivery to many nodes inside data centers that relies on regular and structured topologies and DCCast[18]is a similar approach for delivery across data centers. A push notification is a message that is "pushed" from a back-end server or application to a user interface, e.g. mobile applications[19]or desktop applications.Appleintroduced push notifications foriPhonein 2009,[20]and in 2010Googlereleased "Google Cloud to Device Messaging" (superseded byGoogle Cloud Messagingand then byFirebase Cloud Messaging).[21]In November 2015,Microsoftannounced that theWindows Notification Servicewould be expanded to make use of the Universal Windows Platform architecture, allowing for push data to be sent toWindows 10,Windows 10 Mobile,Xbox, and other supported platforms using universal API calls and POST requests.[22] Push notifications are mainly divided into two approaches, local notifications and remote notifications.[23]For local notifications, the application schedules the notification with the local device's OS. The application sets a timer in the application itself, provided it is able to continuously run in the background. When the event's scheduled time is reached, or the event's programmed condition is met, the message is displayed in the application's user interface. Remote notifications are handled by a remote server. 
Under this scenario, the client application needs to be registered on the server with a unique key (e.g., aUUID). The server then fires the message against the unique key to deliver it to the client via an agreed client/server protocol such asHTTPorXMPP, and the client displays the message received. When the push notification arrives, it can transmit short notifications and messages, set badges on application icons, blink or continuously light up thenotification LED, or play alert sounds to attract user's attention.[24]Push notifications are usually used by applications to bring information to users' attention. The content of the messages can be classified in the following example categories: Real-time push notifications may raise privacy issues since they can be used to bind virtual identities of social network pseudonyms to the real identities of the smartphone owners.[26]The use of unnecessary push notifications for promotional purposes has been criticized as an example ofattention theft.[27]
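The remote-notification flow described above (register a client with a unique key, then deliver messages against that key over an agreed protocol) can be sketched as follows; the registry, endpoint URL, and payload fields are illustrative assumptions rather than any specific provider's API:

```python
import uuid
import requests

# Server-side registry of client keys -> delivery endpoints (hypothetical).
registry: dict[str, str] = {}

def register_client(delivery_url: str) -> str:
    """The client application registers and receives a unique key."""
    client_key = str(uuid.uuid4())
    registry[client_key] = delivery_url
    return client_key

def push_notification(client_key: str, title: str, body: str, badge: int = 0) -> None:
    """Fire a message against the unique key over an agreed protocol (here plain HTTP POST)."""
    delivery_url = registry[client_key]
    payload = {"to": client_key, "title": title, "body": body, "badge": badge}
    requests.post(delivery_url, json=payload, timeout=10)

# key = register_client("https://push.example.com/deliver")    # hypothetical endpoint
# push_notification(key, "New message", "You have 1 unread message", badge=1)
```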
https://en.wikipedia.org/wiki/Push_technology
Domain hijackingordomain theftis the act of changing the registration of adomain namewithout the permission of its original registrant, or by abuse of privileges on domain hosting and registrar software systems.[1] This can be devastating to the original domain name holder, not only financially as they may have derived commercial income from a website hosted at the domain or conducted business through that domain's e-mail accounts,[2]but also in terms of readership and/or audience for non-profit or artistic web addresses. After a successful hijacking, the hijacker can use the domain name to facilitate other illegal activity such asphishing, where a website is replaced by an identical website that recordsprivate informationsuch as log-inpasswords,spam, or may distributemalwarefrom the perceived "trusted" domain.[3] Domain hijacking can be done in several ways, generally by unauthorized access to, or exploiting a vulnerability in the domain name registrar's system, throughsocial engineering, or getting into the domain owner's email account that is associated with the domain name registration.[4] A frequent tactic used by domain hijackers is to use acquired personal information about the actual domain owner to impersonate them and persuade the domainregistrarto modify the registration information and/or transfer the domain to another registrar, a form ofidentity theft. Once this has been done, the hijacker has full control of the domain and can use it or sell it to a third party. Other methods include email vulnerability, vulnerability at the domain-registration level, keyloggers, and phishing sites.[5] Responses to discovered hijackings vary; sometimes the registration information can be returned to its original state by the current registrar, but this may be more difficult if the domain name was transferred to another registrar, particularly if that registrar resides in another country. If the stolen domain name has been transferred to another registrar, the losing registrar may invoke ICANN's Registrar Transfer Dispute Resolution Policy to seek the return of the domain.[6] In some cases, the losing registrar for the domain name is not able to regain control over the domain, and the domain name owner may need to pursue legal action to obtain the court ordered return of the domain.[7]In some jurisdictions, police may arrest cybercriminals involved, or prosecutors may fileindictments.[8] Although the legal status of domain hijacking was formerly thought to be unclear,[9]certain U.S. federal courts in particular have begun to accept causes of action seeking the return of stolen domain names.[10]Domain hijacking is analogous with theft, in that the original owner is deprived of the benefits of the domain, butthefttraditionally relates to concrete goods such as jewelry and electronics, whereas domain name ownership is stored only in the digital state of the domain name registry, a network of computers. For this reason, court actions seeking the recovery of stolen domain names are most frequently filed in the location of the relevant domain registry.[11]In some cases, victims have pursued recovery of stolen domain names through ICANN'sUniform Domain Name Dispute Resolution Policy(UDRP), but a number of UDRP panels have ruled that the policy is not appropriate for cases involving domain theft. Additionally, police may arrest cybercriminals involved.[8][12][13][14][15] ICANN imposes a 60-day waiting period between a change in registration information and a transfer to another registrar. 
This is intended to make domain hijacking more difficult, since a transferred domain is much more difficult to reclaim, and it is more likely that the original registrant will discover the change in that period and alert the registrar.Extensible Provisioning Protocolis used for manyTLDregistries, and uses an authorization code issued exclusively to the domain registrant as a security measure to prevent unauthorized transfers.[25]
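The role of the authorization code can be illustrated with a small sketch; this is a simplification for illustration only, not the actual EPP wire protocol, and the function names are hypothetical:

```python
import secrets
import hmac

def issue_auth_code() -> str:
    """Registrar issues a random authorization code to the domain registrant only."""
    return secrets.token_urlsafe(16)

def approve_transfer(stored_code: str, presented_code: str) -> bool:
    """A transfer request is honored only if the requester presents the matching code."""
    # Constant-time comparison to avoid leaking information about the stored code.
    return hmac.compare_digest(stored_code, presented_code)

auth_code = issue_auth_code()                       # given to the registrant out of band
print(approve_transfer(auth_code, auth_code))       # True: legitimate transfer request
print(approve_transfer(auth_code, "guessed-code"))  # False: hijack attempt rejected
```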
https://en.wikipedia.org/wiki/Domain_hijacking
Inquantum mechanics, theRenninger negative-result experimentis athought experimentthat illustrates some of the difficulties of understanding the nature ofwave function collapseandmeasurementin quantum mechanics. The statement is that a particle need not be detected in order for a quantum measurement to occur, and that the lack of a particle detection can also constitute a measurement. The thought experiment was first posed in 1953 byMauritius Renninger. The non-detection of a particle in one arm of an interferometer implies that the particle must be in the other arm. It can be understood to be a refinement of the paradox presented in theMott problem. TheMott problemconcerns the paradox of reconciling the spherical wave function describing the emission of analpha rayby a radioactive nucleus, with the linear tracks seen in acloud chamber. Formulated in 1927 byAlbert EinsteinandMax Born[citation needed], it was resolved by a calculation done by SirNevill Francis Mottthat showed that the correct quantum mechanical system must include the wave functions for the atoms in the cloud chamber as well as that for the alpha ray. The calculation showed that the resulting probability is non-zero only on straight lines raying out from the decayed atom; that is, once the measurement is performed, the wave-function becomes non-vanishing only near the classical trajectory of a particle. In Renninger's 1960 formulation, the cloud chamber is replaced by a pair of hemisphericalparticle detectors, completely surrounding a radioactive atom at the center that is about to decay by emitting an alpha ray. For the purposes of the thought experiment, the detectors are assumed to be 100% efficient, so that the emitted alpha ray is always detected. By consideration of the normal process of quantum measurement, it is clear that if one detector registers the decay, then the other will not: a single particle cannot be detected by both detectors. The core observation is that the non-observation of a particle on one of the shells is just as good a measurement as detecting it on the other. The strength of the paradox can be heightened by considering the two hemispheres to be of different diameters; with the outer shell a good distance farther away. In this case, after the non-observation of the alpha ray on the inner shell, one is led to conclude that the (originally spherical) wave function has "collapsed" to a hemisphere shape, and (because the outer shell is distant) is still in the process of propagating to the outer shell, where it is guaranteed to eventually be detected. In the standard quantum-mechanical formulation, the statement is that the wave-function has partially collapsed, and has taken on a hemispherical shape. The full collapse of the wave function, down to a single point, does not occur until it interacts with the outer hemisphere. The conundrum of this thought experiment lies in the idea that the wave function interacted with the inner shell, causing a partial collapse of the wave function, without actually triggering any of the detectors on the inner shell. This illustrates that wave function collapse can occur even in the absence of particle detection. There are a number of common objections to the standard interpretation of the experiment. Some of these objections, and standard rebuttals, are listed below. It is sometimes noted that the time of the decay of the nucleus cannot be controlled, and that the finitehalf-lifeinvalidates the result. 
This objection can be dispelled by sizing the hemispheres appropriately with regard to the half-life of the nucleus. The radii are chosen so that the more distant hemisphere is much farther away than the distance the alpha ray can travel in one half-life of the decaying nucleus. To lend concreteness to the example, assume that the half-life of the decaying nucleus is 0.01 microsecond (most elementary particle decay half-lives are much shorter; most nuclear decay half-lives are much longer; some atomic electromagnetic excitations have a half-life about this long). If one were to wait 0.4 microseconds, then the probability that the particle will have decayed will be $1-2^{-40}\simeq 1-10^{-12}$; that is, the probability will be very, very close to one. The outer hemisphere is then placed at (speed of light) times (0.4 microseconds) away: that is, at about 120 meters away. The inner hemisphere is taken to be much closer, say at 1 meter. If, after (for example) 0.3 microseconds, one has not seen the decay product on the inner, closer hemisphere, one can conclude that the particle has decayed with almost absolute certainty, but is still in flight to the outer hemisphere. The paradox then concerns the correct description of the wave function in such a scenario.

Another common objection states that the decay particle was always travelling in a straight line, and that only the probability distribution is spherical. This, however, is a misinterpretation of the Mott problem, and is false. The wave function was truly spherical, and is not the incoherent superposition (mixed state) of a large number of plane waves. The distinction between mixed and pure states is illustrated more clearly in a different context, in the debate comparing the ideas behind local hidden variables and their refutation by means of the Bell inequalities.

A true quantum-mechanical wave would diffract from the inner hemisphere, leaving a diffraction pattern to be observed on the outer hemisphere. This is not really an objection, but rather an affirmation that a partial collapse of the wave function has occurred. If a diffraction pattern were not observed, one would be forced to conclude that the particle had collapsed down to a ray, and stayed that way, as it passed the inner hemisphere; this is clearly at odds with standard quantum mechanics. Diffraction from the inner hemisphere is expected.

In this objection, it is noted that in real life, a decay product is either spin-1/2 (a fermion) or a photon (spin-1). This is taken to mean that the decay is not truly spherically symmetric, but rather has some other distribution, such as a p-wave. However, on closer examination, one sees that this has no bearing on the spherical symmetry of the wave function. Even if the initial state is polarized, for example by placing it in a magnetic field, the non-spherical decay pattern is still properly described by quantum mechanics.

The above formulation is inherently phrased in non-relativistic language, and it is noted that elementary particles have relativistic decay products. This objection only serves to confuse the issue. The experiment can be reformulated so that the decay product is slow-moving. At any rate, special relativity is not in conflict with quantum mechanics.

This objection states that in real life, particle detectors are imperfect, and sometimes neither the detectors on the one hemisphere, nor the other, will go off.
This argument only serves to confuse the issue, and has no bearing on the fundamental nature of the wave-function.
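For concreteness, the figures used in the first objection above can be checked with a few lines of Python, assuming (as the text does) a 0.01 microsecond half-life and a decay product travelling at roughly the speed of light:

```python
half_life_us = 0.01      # assumed half-life of the nucleus, in microseconds
wait_us = 0.4            # waiting time considered in the text
c_m_per_us = 300.0       # speed of light in metres per microsecond (approximate)

n_half_lives = wait_us / half_life_us          # 40 half-lives
p_not_decayed = 2.0 ** (-n_half_lives)         # 2^-40, roughly 9e-13
print(f"probability of decay after {wait_us} us: 1 - {p_not_decayed:.1e}")

outer_radius_m = c_m_per_us * wait_us          # ~120 m, as quoted in the text
print(f"outer hemisphere radius: about {outer_radius_m:.0f} m")
```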
https://en.wikipedia.org/wiki/Renninger_negative-result_experiment
In probability theory and statistics, Campbell's theorem or the Campbell–Hardy theorem is either a particular equation or set of results relating the expectation of a function summed over a point process to an integral involving the mean measure of the point process, which allows for the calculation of the expected value and variance of the random sum. One version of the theorem,[1] also known as Campbell's formula,[2]: 28 entails an integral equation for the aforementioned sum over a general point process, and not necessarily a Poisson point process.[2] There also exist equations involving moment measures and factorial moment measures that are considered versions of Campbell's formula. All these results are employed in probability and statistics with a particular importance in the theory of point processes[3] and queueing theory[4] as well as the related fields stochastic geometry,[1] continuum percolation theory,[5] and spatial statistics.[2][6]

Another result by the name of Campbell's theorem[7] is specifically for the Poisson point process and gives a method for calculating moments as well as the Laplace functional of a Poisson point process.

The name of both theorems stems from the work[8][9] by Norman R. Campbell on thermionic noise, also known as shot noise, in vacuum tubes,[3][10] which was partly inspired by the work of Ernest Rutherford and Hans Geiger on alpha particle detection, where the Poisson point process arose as a solution to a family of differential equations by Harry Bateman.[10] In Campbell's work, he presents the moments and generating functions of the random sum of a Poisson process on the real line, but remarks that the main mathematical argument was due to G. H. Hardy, which has inspired the result to be sometimes called the Campbell–Hardy theorem.[10][11]

For a point process $N$ defined on $d$-dimensional Euclidean space $\mathbf{R}^d$,[a] Campbell's theorem offers a way to calculate expectations of a real-valued function $f$ defined also on $\mathbf{R}^d$ and summed over $N$, namely:

$$E\left[\sum_{x\in N} f(x)\right],$$

where $E$ denotes the expectation and set notation is used such that $N$ is considered as a random set (see Point process notation). For a point process $N$, Campbell's theorem relates the above expectation with the intensity measure $\Lambda$. In relation to a Borel set $B$, the intensity measure of $N$ is defined as:

$$\Lambda(B) = E[N(B)],$$

where the measure notation is used such that $N$ is considered a random counting measure. The quantity $\Lambda(B)$ can be interpreted as the average number of points of the point process $N$ located in the set $B$.

One version of Campbell's theorem for a general (not necessarily simple) point process $N$ with intensity measure

$$\Lambda(B) = E[N(B)]$$

is known as Campbell's formula[2] or Campbell's theorem,[1][12][13] which gives a method for calculating expectations of sums of measurable functions $f$ with ranges on the real line.
More specifically, for a point process $N$ and a measurable function $f\colon \mathbf{R}^d \to \mathbf{R}$, the sum of $f$ over the point process is given by the equation:

$$E\left[\sum_{x\in N} f(x)\right] = \int_{\mathbf{R}^d} f(x)\,\Lambda(dx),$$

where if one side of the equation is finite, then so is the other side.[14] This equation is essentially an application of Fubini's theorem,[1] and it holds for a wide class of point processes, simple or not.[2] Depending on the integral notation,[b] this integral may also be written as:[14]

$$E\left[\sum_{x\in N} f(x)\right] = \int_{\mathbf{R}^d} f \,d\Lambda .$$

If the intensity measure $\Lambda$ of a point process $N$ has a density $\lambda(x)$, then Campbell's formula becomes:

$$E\left[\sum_{x\in N} f(x)\right] = \int_{\mathbf{R}^d} f(x)\,\lambda(x)\,dx .$$

For a stationary point process $N$ with constant density $\lambda>0$, Campbell's theorem or formula reduces to a volume integral:

$$E\left[\sum_{x\in N} f(x)\right] = \lambda \int_{\mathbf{R}^d} f(x)\,dx .$$

This equation naturally holds for the homogeneous Poisson point processes, which are an example of a stationary stochastic process.[1]

Campbell's theorem for general point processes gives a method for calculating the expectation of a function of a point (of a point process) summed over all the points in the point process. These random sums over point processes have applications in many areas where they are used as mathematical models. Campbell originally studied a problem of random sums motivated by understanding thermionic noise in valves, which is also known as shot noise. Consequently, the study of random sums of functions over point processes is known as shot noise in probability and, particularly, point process theory.

In wireless network communication, when a transmitter is trying to send a signal to a receiver, all the other transmitters in the network can be considered as interference, which poses a similar problem as noise does in traditional wired telecommunication networks in terms of the ability to send data based on information theory. If the positions of the interfering transmitters are assumed to form some point process, then shot noise can be used to model the sum of their interfering signals, which has led to stochastic geometry models of wireless networks.[15]

The total input to a neuron is the sum of many synaptic inputs with similar time courses. When the inputs are modeled as independent Poisson point processes, the mean current and its variance are given by Campbell's theorem. A common extension is to consider a sum with random amplitudes,

$$S = \sum_{k} a_k f(t - t_k),$$

where the $t_k$ are the points of a homogeneous Poisson process on the real line with rate $\lambda$ and the amplitudes $a_k$ are independent and identically distributed, and independent of the points. In this case the cumulants $\kappa_i$ of $S$ equal

$$\kappa_i = \lambda\, {\overline {a^i}} \int f(t)^i \, dt,$$

where ${\overline {a^i}}$ are the raw moments of the distribution of $a$.[16]

For general point processes, other more general versions of Campbell's theorem exist depending on the nature of the random sum and in particular the function being summed over the point process. If the function is a function of more than one point of the point process, the moment measures or factorial moment measures of the point process are needed, which can be compared to moments and factorial moments of random variables. The type of measure needed depends on whether the points of the point process in the random sum need to be distinct or may repeat. Moment measures are used when points are allowed to repeat. Factorial moment measures are used when points are not allowed to repeat, hence points are distinct. For general point processes, Campbell's theorem is only for sums of functions of a single point of the point process.
To calculate the sum of a function of a single point as well as the entire point process, generalized Campbell's theorems are required using the Palm distribution of the point process, which is based on the branch of probability known as Palm theory or Palm calculus.

Another version of Campbell's theorem[7] says that for a Poisson point process $N$ with intensity measure $\Lambda$ and a measurable function $f\colon \mathbf{R}^d \to \mathbf{R}$, the random sum

$$S = \sum_{x\in N} f(x)$$

is absolutely convergent with probability one if and only if the integral

$$\int_{\mathbf{R}^d} \min(|f(x)|, 1)\, \Lambda(dx)$$

is finite. Provided that this integral is finite, then the theorem further asserts that for any complex value $\theta$ the equation

$$E\left[e^{\theta S}\right] = \exp\left(\int_{\mathbf{R}^d} \left(e^{\theta f(x)} - 1\right)\, \Lambda(dx)\right)$$

holds if the integral on the right-hand side converges, which is the case for purely imaginary $\theta$. Moreover,

$$E[S] = \int_{\mathbf{R}^d} f(x)\, \Lambda(dx),$$

and if this integral converges, then

$$\operatorname{Var}(S) = \int_{\mathbf{R}^d} f(x)^2\, \Lambda(dx),$$

where $\operatorname{Var}(S)$ denotes the variance of the random sum $S$.

From this theorem some expectation results for the Poisson point process follow, including its Laplace functional.[7][c] For a Poisson point process $N$ with intensity measure $\Lambda$, the Laplace functional is a consequence of the above version of Campbell's theorem[7] and is given by:[15]

$$L_N(f) = E\left[e^{-\sum_{x\in N} f(x)}\right] = \exp\left(-\int_{\mathbf{R}^d} \left(1 - e^{-f(x)}\right)\, \Lambda(dx)\right),$$

which for the homogeneous case is:

$$L_N(f) = \exp\left(-\lambda \int_{\mathbf{R}^d} \left(1 - e^{-f(x)}\right)\, dx\right).$$
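As a sanity check of the stationary special case, the following NumPy sketch simulates a homogeneous Poisson point process on a square window and compares the empirical mean and variance of the random sum $S=\sum_{x\in N}f(x)$ with the values $\lambda\int f\,dx$ and $\lambda\int f^2\,dx$ given by Campbell's theorem; the intensity, window, and choice of $f$ are arbitrary illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, side, trials = 5.0, 4.0, 20000        # intensity, window side length, Monte Carlo runs

def f(points: np.ndarray) -> np.ndarray:
    """Test function summed over the point process (a Gaussian bump)."""
    return np.exp(-np.sum(points ** 2, axis=1))

sums = np.empty(trials)
for i in range(trials):
    n = rng.poisson(lam * side * side)                       # number of points in the window
    pts = rng.uniform(-side / 2, side / 2, size=(n, 2))      # uniform locations given n
    sums[i] = f(pts).sum() if n else 0.0

# Campbell's theorem for the Poisson case: E[S] = lam * int f, Var(S) = lam * int f^2.
grid = np.linspace(-side / 2, side / 2, 801)
dx = grid[1] - grid[0]
xx, yy = np.meshgrid(grid, grid)
fvals = np.exp(-(xx ** 2 + yy ** 2))
print("E[S]   empirical %.3f vs theory %.3f" % (sums.mean(), lam * fvals.sum() * dx * dx))
print("Var(S) empirical %.3f vs theory %.3f" % (sums.var(), lam * (fvals ** 2).sum() * dx * dx))
```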
https://en.wikipedia.org/wiki/Campbell%27s_theorem_(probability)
In commutative algebra and algebraic geometry, localization is a formal way to introduce the "denominators" to a given ring or module. That is, it introduces a new ring/module out of an existing ring/module $R$, so that it consists of fractions $\frac{m}{s}$, such that the denominator $s$ belongs to a given subset $S$ of $R$. If $S$ is the set of the non-zero elements of an integral domain, then the localization is the field of fractions: this case generalizes the construction of the field $\mathbb{Q}$ of rational numbers from the ring $\mathbb{Z}$ of integers.

The technique has become fundamental, particularly in algebraic geometry, as it provides a natural link to sheaf theory. In fact, the term localization originated in algebraic geometry: if $R$ is a ring of functions defined on some geometric object (algebraic variety) $V$, and one wants to study this variety "locally" near a point $p$, then one considers the set $S$ of all functions that are not zero at $p$ and localizes $R$ with respect to $S$. The resulting ring $S^{-1}R$ contains information about the behavior of $V$ near $p$, and excludes information that is not "local", such as the zeros of functions that are outside $V$ (cf. the example given at local ring).

The localization of a commutative ring $R$ by a multiplicatively closed set $S$ is a new ring $S^{-1}R$ whose elements are fractions with numerators in $R$ and denominators in $S$. If the ring is an integral domain the construction generalizes and follows closely that of the field of fractions, and, in particular, that of the rational numbers as the field of fractions of the integers. For rings that have zero divisors, the construction is similar but requires more care.

Localization is commonly done with respect to a multiplicatively closed set $S$ (also called a multiplicative set or a multiplicative system) of elements of a ring $R$, that is, a subset of $R$ that is closed under multiplication, and contains $1$. The requirement that $S$ must be a multiplicative set is natural, since it implies that all denominators introduced by the localization belong to $S$. The localization by a set $U$ that is not multiplicatively closed can also be defined, by taking as possible denominators all products of elements of $U$. However, the same localization is obtained by using the multiplicatively closed set $S$ of all products of elements of $U$. As this often makes reasoning and notation simpler, it is standard practice to consider only localizations by multiplicative sets. For example, the localization by a single element $s$ introduces fractions of the form $\frac{a}{s}$, but also products of such fractions, such as $\frac{ab}{s^2}$. So, the denominators will belong to the multiplicative set $\{1, s, s^2, s^3, \ldots\}$ of the powers of $s$. Therefore, one generally talks of "the localization by the powers of an element" rather than of "the localization by an element".

The localization of a ring $R$ by a multiplicative set $S$ is generally denoted $S^{-1}R$, but other notations are commonly used in some special cases: if $S = \{1, t, t^2, \ldots\}$ consists of the powers of a single element, $S^{-1}R$ is often denoted $R_t$; if $S = R \setminus \mathfrak{p}$ is the complement of a prime ideal $\mathfrak{p}$, then $S^{-1}R$ is denoted $R_{\mathfrak{p}}$.

In the remainder of this article, only localizations by a multiplicative set are considered.
When the ringRis anintegral domainandSdoes not contain0, the ringS−1R{\displaystyle S^{-1}R}is a subring of thefield of fractionsofR. As such, the localization of a domain is a domain. More precisely, it is thesubringof the field of fractions ofR, that consists of the fractionsas{\displaystyle {\tfrac {a}{s}}}such thats∈S.{\displaystyle s\in S.}This is a subring since the sumas+bt=at+bsst,{\displaystyle {\tfrac {a}{s}}+{\tfrac {b}{t}}={\tfrac {at+bs}{st}},}and the productasbt=abst{\displaystyle {\tfrac {a}{s}}\,{\tfrac {b}{t}}={\tfrac {ab}{st}}}of two elements ofS−1R{\displaystyle S^{-1}R}are inS−1R.{\displaystyle S^{-1}R.}This results from the defining property of a multiplicative set, which implies also that1=11∈S−1R.{\displaystyle 1={\tfrac {1}{1}}\in S^{-1}R.}In this case,Ris a subring ofS−1R.{\displaystyle S^{-1}R.}It is shown below that this is no longer true in general, typically whenScontainszero divisors. For example, thedecimal fractionsare the localization of the ring of integers by the multiplicative set of the powers of ten. In this case,S−1R{\displaystyle S^{-1}R}consists of the rational numbers that can be written asn10k,{\displaystyle {\tfrac {n}{10^{k}}},}wherenis an integer, andkis a nonnegative integer. In the general case, a problem arises withzero divisors. LetSbe a multiplicative set in a commutative ringR. Suppose thats∈S,{\displaystyle s\in S,}and0≠a∈R{\displaystyle 0\neq a\in R}is a zero divisor withas=0.{\displaystyle as=0.}Thena1{\displaystyle {\tfrac {a}{1}}}is the image inS−1R{\displaystyle S^{-1}R}ofa∈R,{\displaystyle a\in R,}and one hasa1=ass=0s=01.{\displaystyle {\tfrac {a}{1}}={\tfrac {as}{s}}={\tfrac {0}{s}}={\tfrac {0}{1}}.}Thus some nonzero elements ofRmust be zero inS−1R.{\displaystyle S^{-1}R.}The construction that follows is designed for taking this into account. GivenRandSas above, one considers theequivalence relationonR×S{\displaystyle R\times S}that is defined by(r1,s1)∼(r2,s2){\displaystyle (r_{1},s_{1})\sim (r_{2},s_{2})}if there exists at∈S{\displaystyle t\in S}such thatt(s1r2−s2r1)=0.{\displaystyle t(s_{1}r_{2}-s_{2}r_{1})=0.} The localizationS−1R{\displaystyle S^{-1}R}is defined as the set of theequivalence classesfor this relation. The class of(r,s)is denoted asrs,{\displaystyle {\frac {r}{s}},}r/s,{\displaystyle r/s,}ors−1r.{\displaystyle s^{-1}r.}So, one hasr1s1=r2s2{\displaystyle {\tfrac {r_{1}}{s_{1}}}={\tfrac {r_{2}}{s_{2}}}}if and only if there is at∈S{\displaystyle t\in S}such thatt(s1r2−s2r1)=0.{\displaystyle t(s_{1}r_{2}-s_{2}r_{1})=0.}The reason for thet{\displaystyle t}is to handle cases such as the abovea1=01,{\displaystyle {\tfrac {a}{1}}={\tfrac {0}{1}},}wheres1r2−s2r1{\displaystyle s_{1}r_{2}-s_{2}r_{1}}is nonzero even though the fractions should be regarded as equal. The localizationS−1R{\displaystyle S^{-1}R}is a commutative ring with addition multiplication additive identity01,{\displaystyle {\tfrac {0}{1}},}andmultiplicative identity11.{\displaystyle {\tfrac {1}{1}}.} Thefunction defines aring homomorphismfromR{\displaystyle R}intoS−1R,{\displaystyle S^{-1}R,}which isinjectiveif and only ifSdoes not contain any zero divisors. If0∈S,{\displaystyle 0\in S,}thenS−1R{\displaystyle S^{-1}R}is thezero ringthat has only one unique element0. IfSis the set of allregular elementsofR(that is the elements that are not zero divisors),S−1R{\displaystyle S^{-1}R}is called thetotal ring of fractionsofR. The (above defined) ring homomorphismj:R→S−1R{\displaystyle j\colon R\to S^{-1}R}satisfies auniversal propertythat is described below. 
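The ring operations on S−1R, which the rendering above omits, are the usual fraction formulas (and they are well defined on the equivalence classes just introduced):
\[
\frac{r_1}{s_1}+\frac{r_2}{s_2}=\frac{s_2 r_1+s_1 r_2}{s_1 s_2},\qquad
\frac{r_1}{s_1}\cdot\frac{r_2}{s_2}=\frac{r_1 r_2}{s_1 s_2},
\]
with additive identity 0/1 and multiplicative identity 1/1; the canonical homomorphism mentioned next is j(r) = r/1.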
This characterizesS−1R{\displaystyle S^{-1}R}up to anisomorphism. So all properties of localizations can be deduced from the universal property, independently from the way they have been constructed. Moreover, many important properties of localization are easily deduced from the general properties of universal properties, while their direct proof may be more technical. The universal property satisfied byj:R→S−1R{\displaystyle j\colon R\to S^{-1}R}is the following: Usingcategory theory, this can be expressed by saying that localization is afunctorthat isleft adjointto aforgetful functor. More precisely, letC{\displaystyle {\mathcal {C}}}andD{\displaystyle {\mathcal {D}}}be the categories whose objects arepairsof a commutative ring and asubmonoidof, respectively, the multiplicativemonoidor thegroup of unitsof the ring. Themorphismsof these categories are the ring homomorphisms that map the submonoid of the first object into the submonoid of the second one. Finally, letF:D→C{\displaystyle {\mathcal {F}}\colon {\mathcal {D}}\to {\mathcal {C}}}be the forgetful functor that forgets that the elements of the second element of the pair are invertible. Then the factorizationf=g∘j{\displaystyle f=g\circ j}of the universal property defines a bijection This may seem a rather tricky way of expressing the universal property, but it is useful for showing easily many properties, by using the fact that the composition of two left adjoint functors is a left adjoint functor. Localization is a rich construction that has many useful properties. In this section, only the properties relative to rings and to a single localization are considered. Properties concerningideals,modules, or several multiplicative sets are considered in other sections. LetS⊆R{\displaystyle S\subseteq R}be a multiplicative set. ThesaturationS^{\displaystyle {\hat {S}}}ofS{\displaystyle S}is the set The multiplicative setSissaturatedif it equals its saturation, that is, ifS^=S{\displaystyle {\hat {S}}=S}, or equivalently, ifrs∈S{\displaystyle rs\in S}implies thatrandsare inS. IfSis not saturated, andrs∈S,{\displaystyle rs\in S,}thensrs{\displaystyle {\frac {s}{rs}}}is amultiplicative inverseof the image ofrinS−1R.{\displaystyle S^{-1}R.}So, the images of the elements ofS^{\displaystyle {\hat {S}}}are all invertible inS−1R,{\displaystyle S^{-1}R,}and the universal property implies thatS−1R{\displaystyle S^{-1}R}andS^−1R{\displaystyle {\hat {S}}{}^{-1}R}arecanonically isomorphic, that is, there is a unique isomorphism between them that fixes the images of the elements ofR. IfSandTare two multiplicative sets, thenS−1R{\displaystyle S^{-1}R}andT−1R{\displaystyle T^{-1}R}are isomorphic if and only if they have the same saturation, or, equivalently, ifsbelongs to one of the multiplicative sets, then there existst∈R{\displaystyle t\in R}such thatstbelongs to the other. Saturated multiplicative sets are not widely used explicitly, since, for verifying that a set is saturated, one must knowallunitsof the ring. The termlocalizationoriginates in the general trend of modern mathematics to studygeometricalandtopologicalobjectslocally, that is in terms of their behavior near each point. Examples of this trend are the fundamental concepts ofmanifolds,germsandsheafs. Inalgebraic geometry, anaffine algebraic setcan be identified with aquotient ringof apolynomial ringin such a way that the points of the algebraic set correspond to themaximal idealsof the ring (this isHilbert's Nullstellensatz). 
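In its standard form, the universal property referred to above reads: for every ring homomorphism f : R → T such that f(s) is a unit of T for every s ∈ S, there exists a unique ring homomorphism g : S−1R → T with f = g ∘ j; explicitly,
\[
g\!\left(\frac{r}{s}\right)=f(r)\,f(s)^{-1}.
\]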
This correspondence has been generalized for making the set of theprime idealsof acommutative ringatopological spaceequipped with theZariski topology; this topological space is called thespectrum of the ring. In this context, alocalizationby a multiplicative set may be viewed as the restriction of the spectrum of a ring to the subspace of the prime ideals (viewed aspoints) that do not intersect the multiplicative set. Two classes of localizations are more commonly considered: Innumber theoryandalgebraic topology, when working over the ringZ{\displaystyle \mathbb {Z} }ofintegers, one refers to a property relative to an integernas a property trueatnorawayfromn, depending on the localization that is considered. "Away fromn" means that the property is considered after localization by the powers ofn, and, ifpis aprime number, "atp" means that the property is considered after localization at the prime idealpZ{\displaystyle p\mathbb {Z} }. This terminology can be explained by the fact that, ifpis prime, the nonzero prime ideals of the localization ofZ{\displaystyle \mathbb {Z} }are either thesingleton set{p}or its complement in the set of prime numbers. LetSbe a multiplicative set in a commutative ringR, andj:R→S−1R{\displaystyle j\colon R\to S^{-1}R}be the canonical ring homomorphism. Given anidealIinR, letS−1I{\displaystyle S^{-1}I}the set of the fractions inS−1R{\displaystyle S^{-1}R}whose numerator is inI. This is an ideal ofS−1R,{\displaystyle S^{-1}R,}which is generated byj(I), and called thelocalizationofIbyS. ThesaturationofIbySisj−1(S−1I);{\displaystyle j^{-1}(S^{-1}I);}it is an ideal ofR, which can also defined as the set of the elementsr∈R{\displaystyle r\in R}such that there existss∈S{\displaystyle s\in S}withsr∈I.{\displaystyle sr\in I.} Many properties of ideals are either preserved by saturation and localization, or can be characterized by simpler properties of localization and saturation. In what follows,Sis a multiplicative set in a ringR, andIandJare ideals ofR; the saturation of an idealIby a multiplicative setSis denotedsatS⁡(I),{\displaystyle \operatorname {sat} _{S}(I),}or, when the multiplicative setSis clear from the context,sat⁡(I).{\displaystyle \operatorname {sat} (I).} LetRbe acommutative ring,Sbe amultiplicative setinR, andMbe anR-module. Thelocalization of the moduleMbyS, denotedS−1M, is anS−1R-module that is constructed exactly as the localization ofR, except that the numerators of the fractions belong toM. That is, as a set, it consists ofequivalence classes, denotedms{\displaystyle {\frac {m}{s}}}, of pairs(m,s), wherem∈M{\displaystyle m\in M}ands∈S,{\displaystyle s\in S,}and two pairs(m,s)and(n,t)are equivalent if there is an elementuinSsuch that Addition and scalar multiplication are defined as for usual fractions (in the following formula,r∈R,{\displaystyle r\in R,}s,t∈S,{\displaystyle s,t\in S,}andm,n∈M{\displaystyle m,n\in M}): Moreover,S−1Mis also anR-module with scalar multiplication It is straightforward to check that these operations are well-defined, that is, they give the same result for different choices of representatives of fractions. The localization of a module can be equivalently defined by usingtensor products: The proof of equivalence (up to acanonical isomorphism) can be done by showing that the two definitions satisfy the same universal property. 
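The equivalence condition and the module operations left implicit above are the standard ones:
\[
(m,s)\sim(n,t)\iff \exists\,u\in S:\ u(tm-sn)=0,\qquad
\frac{m}{s}+\frac{n}{t}=\frac{tm+sn}{st},\qquad
\frac{r}{s}\cdot\frac{m}{t}=\frac{rm}{st},\qquad
r\cdot\frac{m}{s}=\frac{rm}{s},
\]
and the tensor-product description is the canonical isomorphism S−1M ≅ S−1R ⊗_R M.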
IfMis asubmoduleof anR-moduleN, andSis a multiplicative set inR, one hasS−1M⊆S−1N.{\displaystyle S^{-1}M\subseteq S^{-1}N.}This implies that, iff:M→N{\displaystyle f\colon M\to N}is aninjectivemodule homomorphism, then is also an injective homomorphism. Since the tensor product is aright exact functor, this implies that localization bySmapsexact sequencesofR-modules to exact sequences ofS−1R{\displaystyle S^{-1}R}-modules. In other words, localization is anexact functor, andS−1R{\displaystyle S^{-1}R}is aflatR-module. This flatness and the fact that localization solves auniversal propertymake that localization preserves many properties of modules and rings, and is compatible with solutions of other universal properties. For example, thenatural map is an isomorphism. IfM{\displaystyle M}is afinitely presented module, the natural map is also an isomorphism.[4] If a moduleMis afinitely generatedoverR, one has whereAnn{\displaystyle \operatorname {Ann} }denotesannihilator, that is the ideal of the elements of the ring that map to zero all elements of the module.[5]In particular, that is, iftM=0{\displaystyle tM=0}for somet∈S.{\displaystyle t\in S.}[6] The definition of aprime idealimplies immediately that thecomplementS=R∖p{\displaystyle S=R\setminus {\mathfrak {p}}}of a prime idealp{\displaystyle {\mathfrak {p}}}in a commutative ringRis a multiplicative set. In this case, the localizationS−1R{\displaystyle S^{-1}R}is commonly denotedRp.{\displaystyle R_{\mathfrak {p}}.}The ringRp{\displaystyle R_{\mathfrak {p}}}is alocal ring, that is calledthe local ring ofRatp.{\displaystyle {\mathfrak {p}}.}This means thatpRp=p⊗RRp{\displaystyle {\mathfrak {p}}\,R_{\mathfrak {p}}={\mathfrak {p}}\otimes _{R}R_{\mathfrak {p}}}is the uniquemaximal idealof the ringRp.{\displaystyle R_{\mathfrak {p}}.}Analogously one can define the localization of a moduleMat a prime idealp{\displaystyle {\mathfrak {p}}}ofR. Again, the localizationS−1M{\displaystyle S^{-1}M}is commonly denotedMp{\displaystyle M_{\mathfrak {p}}}. Such localizations are fundamental for commutative algebra and algebraic geometry for several reasons. One is that local rings are often easier to study than general commutative rings, in particular because ofNakayama lemma. However, the main reason is that many properties are true for a ring if and only if they are true for all its local rings. For example, a ring isregularif and only if all its local rings areregular local rings. Properties of a ring that can be characterized on its local rings are calledlocal properties, and are often the algebraic counterpart of geometriclocal propertiesofalgebraic varieties, which are properties that can be studied by restriction to a small neighborhood of each point of the variety. (There is another concept of local property that refers to localization to Zariski open sets; see§ Localization to Zariski open sets, below.) Many local properties are a consequence of the fact that the module is afaithfully flat modulewhen the direct sum is taken over all prime ideals (or over allmaximal idealsofR). See alsoFaithfully flat descent. A propertyPof anR-moduleMis alocal propertyif the following conditions are equivalent: The following are local properties: On the other hand, some properties are not local properties. For example, an infinitedirect productoffieldsis not anintegral domainnor aNoetherian ring, while all its local rings are fields, and therefore Noetherian integral domains. Localizingnon-commutative ringsis more difficult. 
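The display formulas dropped from this passage are presumably the standard compatibility and annihilator statements:
\[
S^{-1}(M\otimes_R N)\;\cong\;S^{-1}M\otimes_{S^{-1}R}S^{-1}N,\qquad
S^{-1}\!\operatorname{Hom}_R(M,N)\;\cong\;\operatorname{Hom}_{S^{-1}R}(S^{-1}M,\,S^{-1}N)\quad(M\ \text{finitely presented}),
\]
\[
\operatorname{Ann}_{S^{-1}R}(S^{-1}M)=S^{-1}(\operatorname{Ann}_R M)\quad(M\ \text{finitely generated}),\qquad
S^{-1}M=0\iff S\cap\operatorname{Ann}_R(M)\neq\varnothing .
\]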
While the localization exists for every set S of prospective units, it may take a different form from the one described above. One condition which ensures that the localization is well behaved is the Ore condition. One case for non-commutative rings where localization has a clear interest is for rings of differential operators. There, localization can be interpreted, for example, as adjoining a formal inverse D−1 for a differentiation operator D. This is done in many contexts in methods for differential equations. There is now a large mathematical theory about it, named microlocalization, connecting with numerous other branches. The "micro-" tag relates, in particular, to connections with Fourier theory.
https://en.wikipedia.org/wiki/Localization_of_a_ring_and_a_module
Inmathematics, acovering groupof atopological groupHis acovering spaceGofHsuch thatGis a topological group and the covering mapp:G→His acontinuousgroup homomorphism. The mappis called thecovering homomorphism. A frequently occurring case is adouble covering group, atopological double coverin whichHhasindex2 inG; examples include thespin groups,pin groups, andmetaplectic groups. Roughly explained, saying that for example the metaplectic group Mp2nis adouble coverof thesymplectic groupSp2nmeans that there are always two elements in the metaplectic group representing one element in the symplectic group. LetGbe a covering group ofH. ThekernelKof the covering homomorphism is just the fiber over the identity inHand is adiscretenormal subgroupofG. The kernelKisclosedinGif and only ifGisHausdorff(and if and only ifHis Hausdorff). Going in the other direction, ifGis any topological group andKis a discrete normal subgroup ofGthen the quotient mapp:G→G/Kis a covering homomorphism. IfGisconnectedthenK, being a discrete normal subgroup, necessarily lies in thecenterofGand is thereforeabelian. In this case, the center ofH=G/Kis given by As with all covering spaces, thefundamental groupofGinjects into the fundamental group ofH. Since the fundamental group of a topological group is always abelian, every covering group is a normal covering space. In particular, ifGispath-connectedthen thequotient groupπ1(H) /π1(G)is isomorphic toK. The groupKactssimply transitively on the fibers (which are just leftcosets) by right multiplication. The groupGis then aprincipalK-bundleoverH. IfGis a covering group ofHthen the groupsGandHarelocally isomorphic. Moreover, given any two connected locally isomorphic groupsH1andH2, there exists a topological groupGwith discrete normal subgroupsK1andK2such thatH1is isomorphic toG/K1andH2is isomorphic toG/K2. LetHbe a topological group and letGbe a covering space ofH. IfGandHare bothpath-connectedandlocally path-connected, then for any choice of elemente* in the fiber overe∈H, there exists a unique topological group structure onG, withe* as the identity, for which the covering mapp:G→His a homomorphism. The construction is as follows. Letaandbbe elements ofGand letfandgbepathsinGstarting ate* and terminating ataandbrespectively. Define a pathh:I→Hbyh(t) =p(f(t))p(g(t)). By the path-lifting property of covering spaces there is a unique lift ofhtoGwith initial pointe*. The productabis defined as the endpoint of this path. By construction we havep(ab) =p(a)p(b). One must show that this definition is independent of the choice of pathsfandg, and also that the group operations are continuous. Alternatively, the group law onGcan be constructed by lifting the group lawH×H→HtoG, using the lifting property of the covering mapG×G→H×H. The non-connected case is interesting and is studied in the papers by Taylor and by Brown-Mucuk cited below. Essentially there is an obstruction to the existence of a universal cover that is also a topological group such that the covering map is a morphism: this obstruction lies in the third cohomology group of the group of components ofGwith coefficients in the fundamental group ofGat the identity. IfHis a path-connected, locally path-connected, andsemilocally simply connectedgroup then it has auniversal cover. By the previous construction the universal cover can be made into a topological group with the covering map a continuous homomorphism. This group is called theuniversal covering groupofH. There is also a more direct construction, which we give below. 
LetPHbe thepath groupofH. That is,PHis the space ofpathsinHbased at the identity together with thecompact-open topology. The product of paths is given by pointwise multiplication, i.e. (fg)(t) =f(t)g(t). This givesPHthe structure of a topological group. There is a natural group homomorphismPH→Hthat sends each path to its endpoint. The universal cover ofHis given as the quotient ofPHby the normal subgroup ofnull-homotopicloops. The projectionPH→Hdescends to the quotient giving the covering map. One can show that the universal cover issimply connectedand the kernel is just thefundamental groupofH. That is, we have ashort exact sequence where~His the universal cover ofH. Concretely, the universal covering group ofHis the space of homotopy classes of paths inHwith pointwise multiplication of paths. The covering map sends each path class to its endpoint. As the above suggest, if a group has a universal covering group (if it is path-connected, locally path-connected, and semilocally simply connected), with discrete center, then the set of all topological groups that are covered by the universal covering group form a lattice, corresponding to the lattice of subgroups of the center of the universal covering group: inclusion of subgroups corresponds to covering of quotient groups. The maximal element is the universal covering group~H, while the minimal element is the universal covering group mod its center,~H/ Z(~H). This corresponds algebraically to theuniversal perfect central extension(called "covering group", by analogy) as the maximal element, and a group mod its center as minimal element. This is particularly important for Lie groups, as these groups are all the (connected) realizations of a particular Lie algebra. For many Lie groups the center is the group of scalar matrices, and thus the group mod its center is the projectivization of the Lie group. These covers are important in studyingprojective representationsof Lie groups, andspin representationslead to the discovery ofspin groups: a projective representation of a Lie group need not come from a linear representation of the group, but does come from a linear representation of some covering group, in particular the universal covering group. The finite analog led to the covering group or Schur cover, as discussed above. A key example arises fromSL2(R), which has center {±1} and fundamental group Z. It is a double cover of the centerlessprojective special linear groupPSL2(R), which is obtained by taking the quotient by the center. ByIwasawa decomposition, both groups are circle bundles over the complex upper half-plane, and their universal coverSL2(~R){\displaystyle {\mathrm {S} {\widetilde {\mathrm {L} _{2}(}}\mathbf {R} )}}is a real line bundle over the half-plane that forms one ofThurston's eight geometries. Since the half-plane is contractible, all bundle structures are trivial. The preimage of SL2(Z) in the universal cover is isomorphic to thebraid groupon three strands. The above definitions and constructions all apply to the special case ofLie groups. In particular, every covering of amanifoldis a manifold, and the covering homomorphism becomes asmooth map. Likewise, given any discrete normal subgroup of a Lie group the quotient group is a Lie group and the quotient map is a covering homomorphism. Two Lie groups are locally isomorphic if and only if theirLie algebrasare isomorphic. This implies that a homomorphismφ:G→Hof Lie groups is a covering homomorphism if and only if the induced map on Lie algebras is an isomorphism. 
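The short exact sequence referred to above is the standard one relating the universal cover to the fundamental group:
\[
1 \longrightarrow \pi_1(H) \longrightarrow {\tilde H} \longrightarrow H \longrightarrow 1 .
\]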
Since for every Lie algebra {\displaystyle {\mathfrak {g}}} there is a unique simply connected Lie group G with Lie algebra {\displaystyle {\mathfrak {g}}}, it follows that the universal covering group of a connected Lie group H is the (unique) simply connected Lie group G having the same Lie algebra as H.
https://en.wikipedia.org/wiki/Covering_group
Password-based cryptography is the study of password-based key encryption, decryption, and authorization. It generally refers to two distinct classes of methods: deriving cryptographic keys from passwords, and password-authenticated key agreement. Some systems attempt to derive a cryptographic key directly from a password. However, such practice is generally ill-advised when there is a threat of brute-force attack. Techniques to mitigate such attacks include passphrases and iterated (deliberately slow) password-based key derivation functions such as PBKDF2 (RFC 2898). Password-authenticated key agreement systems allow two or more parties that agree on a password (or password-related data) to derive shared keys without exposing the password or keys to network attack.[1] Earlier generations of challenge–response authentication systems have also been used with passwords, but these have generally been subject to eavesdropping and/or brute-force attacks on the password.
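As an illustration of the iterated key-derivation approach mentioned above, the following minimal sketch derives a key with PBKDF2-HMAC-SHA256 from Python's standard library; the passphrase, salt size, and iteration count are arbitrary illustrative choices, not recommendations.

```python
import hashlib
import os

def derive_key(password: str, salt=None, iterations: int = 600_000):
    """Derive a 32-byte key from a password with PBKDF2-HMAC-SHA256 (RFC 2898)."""
    if salt is None:
        salt = os.urandom(16)  # fresh random salt, stored alongside the derived key
    key = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations, dklen=32)
    return salt, key

salt, key = derive_key("correct horse battery staple")  # example passphrase
print(salt.hex(), key.hex())
```

The deliberately high iteration count is what makes brute-force guessing of weak passwords slower.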
https://en.wikipedia.org/wiki/Password-based_cryptography
Instatistics, theBonferroni correctionis a method to counteract themultiple comparisons problem. The method is named for its use of theBonferroni inequalities.[1]Application of the method toconfidence intervalswas described byOlive Jean Dunn.[2] Statistical hypothesis testingis based on rejecting thenull hypothesiswhen the likelihood of the observed data would be low if the null hypothesis were true. If multiple hypotheses are tested, the probability of observing a rare event increases, and therefore, the likelihood of incorrectly rejecting a null hypothesis (i.e., making aType I error) increases.[3] The Bonferroni correction compensates for that increase by testing each individual hypothesis at a significance level ofα/m{\displaystyle \alpha /m}, whereα{\displaystyle \alpha }is the desired overall alpha level andm{\displaystyle m}is the number of hypotheses.[4]For example, if a trial is testingm=20{\displaystyle m=20}hypotheses with a desired overallα=0.05{\displaystyle \alpha =0.05}, then the Bonferroni correction would test each individual hypothesis atα=0.05/20=0.0025{\displaystyle \alpha =0.05/20=0.0025}. The Bonferroni correction can also be applied as a p-value adjustment: Using that approach, instead of adjusting the alpha level, each p-value is multiplied by the number of tests (with adjusted p-values that exceed 1 then being reduced to 1), and the alpha level is left unchanged. The significance decisions using this approach will be the same as when using the alpha-level adjustment approach. LetH1,…,Hm{\displaystyle H_{1},\ldots ,H_{m}}be a family of null hypotheses and letp1,…,pm{\displaystyle p_{1},\ldots ,p_{m}}be their correspondingp-values. Letm{\displaystyle m}be the total number of null hypotheses, and letm0{\displaystyle m_{0}}be the number of true null hypotheses (which is presumably unknown to the researcher). Thefamily-wise error rate(FWER) is the probability of rejecting at least one trueHi{\displaystyle H_{i}}, that is, of making at least onetype I error. The Bonferroni correction rejects the null hypothesis for eachpi≤αm{\displaystyle p_{i}\leq {\frac {\alpha }{m}}}, thereby controlling theFWERat≤α{\displaystyle \leq \alpha }. Proof of this control follows fromBoole's inequality, as follows: This control does not require any assumptions about dependence among the p-values or about how many of the null hypotheses are true.[5] Rather than testing each hypothesis at theα/m{\displaystyle \alpha /m}level, the hypotheses may be tested at any other combination of levels that add up toα{\displaystyle \alpha }, provided that the level of each test is decided before looking at the data.[6]For example, for two hypothesis tests, an overallα{\displaystyle \alpha }of 0.05 could be maintained by conducting one test at 0.04 and the other at 0.01. The procedure proposed by Dunn[2]can be used to adjustconfidence intervals. If one establishesm{\displaystyle m}confidence intervals, and wishes to have an overall confidence level of1−α{\displaystyle 1-\alpha }, each individual confidence interval can be adjusted to the level of1−αm{\displaystyle 1-{\frac {\alpha }{m}}}.[2] When searching for a signal in a continuous parameter space there can also be a problem of multiple comparisons, or look-elsewhere effect. For example, a physicist might be looking to discover a particle of unknown mass by considering a large range of masses; this was the case during the Nobel Prize winning detection of theHiggs boson. 
In such cases, one can apply a continuous generalization of the Bonferroni correction by employingBayesianlogic to relate the effective number of trials,m{\displaystyle m}, to the prior-to-posterior volume ratio.[7] There are alternative ways to control thefamily-wise error rate. For example, theHolm–Bonferroni methodand theŠidák correctionare universally more powerful procedures than the Bonferroni correction, meaning that they are always at least as powerful. But unlike the Bonferroni procedure, these methods do not control theexpected numberof Type I errors per family (the per-family Type I error rate).[8] With respect toFamily-wise error rate (FWER)control, the Bonferroni correction can be conservative if there are a large number of tests and/or the test statistics are positively correlated.[9] Multiple-testing corrections, including the Bonferroni procedure, increase the probability ofType II errorswhen null hypotheses are false, i.e., they reducestatistical power.[10][9]
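A minimal sketch of the two equivalent formulations described above (testing at the per-comparison level α/m versus multiplying each p-value by m), using hypothetical p-values:

```python
# Bonferroni correction for m hypotheses at overall level alpha.
alpha = 0.05
p_values = [0.001, 0.012, 0.030, 0.25]  # hypothetical raw p-values
m = len(p_values)

# Formulation 1: compare each p-value with the adjusted level alpha/m.
reject_by_level = [p <= alpha / m for p in p_values]

# Formulation 2: multiply each p-value by m (capped at 1) and compare with alpha.
adjusted = [min(p * m, 1.0) for p in p_values]
reject_by_adjusted = [p_adj <= alpha for p_adj in adjusted]

assert reject_by_level == reject_by_adjusted  # the two formulations agree
print(adjusted)  # [0.004, 0.048, 0.12, 1.0]
```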
https://en.wikipedia.org/wiki/Bonferroni_correction
Bernstein v. United States was a series of court cases filed by Daniel J. Bernstein, then a mathematics Ph.D. student at the University of California, Berkeley, challenging U.S. government restrictions on the export of cryptographic software. In the early 1990s, the U.S. government classified encryption software as a "munition," imposing strict export controls. As a result, Bernstein was required to register as an arms dealer and obtain an export license before he could publish his encryption software online. With the support of the Electronic Frontier Foundation (EFF), Bernstein filed a lawsuit against the U.S. government, arguing that the export controls violated his First Amendment rights. The case ultimately led to a relaxation of export restrictions on cryptography, which facilitated the development of secure international e-commerce. The decision has been recognized by First Amendment and technology advocacy groups for affirming a "right to code" and applying First Amendment protections to code as a form of expression.[1][2] The case was first brought in 1995, when Bernstein was a student at the University of California, Berkeley, and wanted to publish a paper and associated source code on his Snuffle encryption system. Bernstein was represented by the Electronic Frontier Foundation, which hired outside lawyer Cindy Cohn and also obtained pro bono publico assistance from Lee Tien of Berkeley; M. Edward Ross of the San Francisco law firm of Steefel, Levitt & Weiss; James Wheaton and Elizabeth Pritzker of the First Amendment Project in Oakland; and Robert Corn-Revere, Julia Kogan, and Jeremy Miller of the Washington, DC, law firm of Hogan & Hartson. After four years and one regulatory change, the Ninth Circuit Court of Appeals ruled that software source code was speech protected by the First Amendment and that the government's regulations preventing its publication were unconstitutional.[3] Regarding those regulations, the EFF states: Years before, the government had placed encryption, a method for scrambling messages so they can only be understood by their intended recipients, on the United States Munitions List, alongside bombs and flamethrowers, as a weapon to be regulated for national security purposes. Companies and individuals exporting items on the munitions list, including software with encryption capabilities, had to obtain prior State Department approval. The government requested en banc review.[5] In Bernstein v. U.S. Dept. of Justice, 192 F.3d 1308 (9th Cir. 1999), the Ninth Circuit ordered that this case be reheard by the en banc court, and withdrew the three-judge panel opinion, Bernstein v. U.S. Dept. of Justice, 176 F.3d 1132 (9th Cir. 1999).[6] The government modified the regulations again[when?], substantially loosening them, and Bernstein, then a professor at the University of Illinois at Chicago, challenged them again. This time, he chose to represent himself, although he had no formal legal training. On October 15, 2003, almost nine years after Bernstein first brought the case, the judge dismissed it and asked Bernstein to come back when the government made a "concrete threat".[7] Apple cited Bernstein v. US in its refusal to hack the San Bernardino shooter's iPhone, saying that they could not be compelled to "speak" (write code).[8]
https://en.wikipedia.org/wiki/Bernstein_v._United_States
Aprediction(Latinpræ-, "before," anddictum, "something said"[1]) orforecastis a statement about afutureeventor about futuredata. Predictions are often, but not always, based upon experience or knowledge of forecasters. There is no universal agreement about the exact difference between "prediction" and "estimation"; different authors and disciplines ascribe differentconnotations. Future events are necessarilyuncertain, so guaranteed accurate information about the future is impossible. Prediction can be useful to assist in makingplansabout possible developments. In a non-statistical sense, the term "prediction" is often used to refer to aninformed guess or opinion. A prediction of this kind might be informed by a predicting person'sabductive reasoning,inductive reasoning,deductive reasoning, andexperience; and may be useful—if the predicting person is aknowledgeable personin the field.[2] TheDelphi methodis a technique for eliciting such expert-judgement-based predictions in a controlled way. This type of prediction might be perceived as consistent with statistical techniques in the sense that, at minimum, the "data" being used is the predicting expert'scognitive experiencesforming anintuitive"probability curve." Instatistics, prediction is a part ofstatistical inference. One particular approach to such inference is known aspredictive inference, but the prediction can be undertaken within any of the several approaches to statistical inference. Indeed, one possible description of statistics is that it provides a means of transferring knowledge about a sample of a population to the whole population, and to other related populations, which is not necessarily the same as prediction over time. When information is transferred across time, often to specific points in time, the process is known asforecasting.[3][failed verification]Forecasting usually requirestime seriesmethods, while prediction is often performed oncross-sectional data. Statistical techniques used for prediction includeregressionand its various sub-categories such aslinear regression,generalized linear models(logistic regression,Poisson regression,Probit regression), etc. In case of forecasting,autoregressive moving average modelsandvector autoregressionmodels can be utilized. When these and/or related, generalized set of regression ormachine learningmethods are deployed in commercial usage, the field is known aspredictive analytics.[4] In many applications, such as time series analysis, it is possible to estimate the models that generate the observations. If models can be expressed astransfer functionsor in terms of state-space parameters then smoothed, filtered and predicted data estimates can be calculated.[citation needed]If the underlying generating models are linear then a minimum-varianceKalman filterand a minimum-variance smoother may be used to recover data of interest from noisy measurements. These techniques rely on one-step-ahead predictors (which minimise the variance of theprediction error). When the generating models are nonlinear then stepwise linearizations may be applied withinExtended Kalman Filterand smoother recursions. However, in nonlinear cases, optimum minimum-variance performance guarantees no longer apply.[5] To use regression analysis for prediction, data are collected on the variable that is to be predicted, called thedependent variableor response variable, and on one or more variables whose values arehypothesizedto influence it, calledindependent variablesor explanatory variables. 
A functional form, often linear, is hypothesized for the postulated causal relationship, and the parameters of the function are estimated from the data—that is, are chosen so as to optimize in some way the fit of the function, thus parameterized, to the data. That is the estimation step. For the prediction step, explanatory variable values that are deemed relevant to future (or current but not yet observed) values of the dependent variable are input to the parameterized function to generate predictions for the dependent variable.[6] (A minimal code sketch of these two steps is given at the end of this passage.) An unbiased performance estimate of a model can be obtained on hold-out test sets. The predictions can be compared visually to the ground truth in a parity plot. In science, a prediction is a rigorous, often quantitative, statement forecasting what would be observed under specific conditions; for example, according to theories of gravity, if an apple fell from a tree it would be seen to move towards the center of the Earth with a specified and constant acceleration. The scientific method is built on testing statements that are logical consequences of scientific theories. This is done through repeatable experiments or observational studies. A scientific theory whose predictions are contradicted by observations and evidence will be rejected. New theories that generate many new predictions can more easily be supported or falsified (see predictive power). Notions that make no testable predictions are usually considered not to be part of science (protoscience or nescience) until testable predictions can be made. Mathematical equations and models, and computer models, are frequently used to describe the past and future behaviour of a process within the boundaries of that model. In some cases the probability of an outcome, rather than a specific outcome, can be predicted, for example in much of quantum physics. In microprocessors, branch prediction permits avoidance of pipeline emptying at branch instructions. In engineering, possible failure modes are predicted and avoided by correcting the failure mechanism causing the failure. Accurate prediction and forecasting are very difficult in some areas, such as natural disasters, pandemics, demography, population dynamics and meteorology.[7] For example, it is possible to predict the occurrence of solar cycles, but their exact timing and magnitude are much more difficult to predict. In materials engineering it is also possible to predict the lifetime of a material with a mathematical model.[8] In medical science, predictive and prognostic biomarkers can be used to predict patient outcomes in response to various treatments or the probability of a clinical event.[9] Established science makes useful predictions which are often extremely reliable and accurate; for example, eclipses are routinely predicted. New theories make predictions which allow them to be disproved by reality. For example, predicting the structure of crystals at the atomic level is a current research challenge.[10] In the early 20th century the scientific consensus was that there existed an absolute frame of reference, which was given the name luminiferous ether. The existence of this absolute frame was deemed necessary for consistency with the established idea that the speed of light is constant. The famous Michelson–Morley experiment demonstrated that predictions deduced from this concept were not borne out in reality, thus disproving the theory of an absolute frame of reference.
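A minimal sketch of the regression-based estimation and prediction steps described earlier in this passage, assuming a linear functional form and using synthetic data with ordinary least squares via NumPy:

```python
import numpy as np

# Synthetic data: response (dependent variable) y and one explanatory variable x.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Estimation step: choose b0, b1 minimizing the squared misfit of y ~ b0 + b1*x.
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Prediction step: feed new explanatory values into the fitted function.
x_new = np.array([6.0, 7.0])
y_pred = beta[0] + beta[1] * x_new
print(beta, y_pred)
```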
Thespecial theory of relativitywas proposed by Einstein as an explanation for the seeming inconsistency between the constancy of the speed of light and the non-existence of a special, preferred or absolute frame of reference. Albert Einstein's theory ofgeneral relativitycould not easily be tested as it did not produce any effects observable on a terrestrial scale. However, as one of the firsttests of general relativity, the theory predicted that large masses such asstarswould bend light, in contradiction to accepted theory; this was observed in a 1919 eclipse. Predictive medicineis a field ofmedicinethat entails predicting theprobabilityofdiseaseand instituting preventive measures in order to either prevent the disease altogether or significantly decrease its impact upon the patient (such as by preventingmortalityor limitingmorbidity).[11] While different prediction methodologies exist, such asgenomics,proteomics, andcytomics, the most fundamental way to predict future disease is based on genetics. Although proteomics and cytomics allow for the early detection of disease, much of the time those detectbiological markersthat exist because a disease process hasalreadystarted. However, comprehensivegenetic testing(such as through the use ofDNA arraysorfull genome sequencing) allows for the estimation of disease risk years to decades before any disease even exists, or even whether a healthyfetusis at higher risk for developing a disease in adolescence or adulthood. Individuals who are more susceptible to disease in the future can be offered lifestyle advice or medication with the aim of preventing the predicted illness. Prognosis(Greek: πρόγνωσις "fore-knowing, foreseeing";pl.: prognoses) is a medical term for predicting the likelihood or expected development of adisease, including whether thesignsand symptoms will improve or worsen (and how quickly) or remain stable over time; expectations ofquality of life, such as the ability to carry out daily activities; the potential for complications and associated health issues; and the likelihood of survival (including life expectancy).[13][14]A prognosis is made on the basis of the normal course of the diagnosed disease, the individual's physical and mental condition, the available treatments, and additional factors.[14]A complete prognosis includes the expected duration, function, and description of the course of the disease, such as progressive decline, intermittent crisis, or sudden, unpredictable crisis.[15] Aclinical prediction ruleor clinical probability assessment specifieshow to use medical signs,symptoms, and other findings to estimate the probability of a specific disease or clinical outcome.[17] Mathematical models ofstock marketbehaviour (and economic behaviour in general) are also unreliable in predicting future behaviour. Among other reasons, this is because economic events may span several years, and the world is changing over a similar time frame, thus invalidating the relevance of past observations to the present. Thus there are an extremely small number (of the order of 1) of relevant past data points from which to project the future. In addition, it is generally believed that stock market prices already take into account all the information available to predict the future, and subsequent movements must therefore be the result of unforeseen events. Consequently, it is extremely difficult for astock investortoanticipateor predict astock market boom, or astock market crash. 
In contrast to predicting the actual stock return, forecasting of broadeconomic trendstends to have better accuracy. Such analysis is provided by both non-profit groups as well as by for-profit private institutions.[citation needed] Some correlation has been seen between actual stock market movements and prediction data from large groups in surveys and prediction games. Anactuaryusesactuarial scienceto assess and predict future businessrisk, such that the risk(s) can bemitigated. For example, ininsurancean actuary would use alife table(which incorporates the historical experience of mortality rates and sometimes an estimate of future trends) to projectlife expectancy. Predicting the outcome of sporting events is a business which has grown in popularity in recent years. Handicappers predict the outcome of games using a variety of mathematical formulas, simulation models orqualitative analysis. Early, well known sports bettors, such asJimmy the Greek, were believed to have access to information that gave them an edge. Information ranged from personal issues, such as gambling or drinking to undisclosed injuries; anything that may affect the performance of a player on the field. Recent times have changed the way sports are predicted. Predictions now typically consist of two distinct approaches: Situational plays and statistical based models. Situational plays are much more difficult to measure because they usually involve the motivation of a team. Dan Gordon, noted handicapper, wrote "Without an emotional edge in a game in addition to value in a line, I won't put my money on it".[19]These types of plays consist of: Betting on the home underdog, betting against Monday Night winners if they are a favorite next week, betting the underdog in "look ahead" games etc. As situational plays become more widely known they become less useful because they will impact the way the line is set. The widespread use of technology has brought with it more modernsports betting systems. These systems are typically algorithms and simulation models based onregression analysis.Jeff Sagarin, a sports statistician, has brought attention to sports by having the results of his models published in USA Today. He is currently paid as a consultant by theDallas Mavericksfor his advice on lineups and the use of his Winval system, which evaluates free agents.Brian Burke, a formerNavyfighter pilot turned sports statistician, has published his results of using regression analysis to predict the outcome of NFL games.[20]Ken Pomeroyis widely accepted as a leading authority on college basketball statistics. His website includes his College Basketball Ratings, a tempo based statistics system. Some statisticians have become very famous for having successful prediction systems. Dare wrote "the effective odds for sports betting and horse racing are a direct result of human decisions and can therefore potentially exhibit consistent error".[21]Unlike other games offered in a casino, prediction in sporting events can be both logical and consistent. Other more advance models include those based on Bayesian networks, which are causal probabilistic models commonly used for risk analysis and decision support. 
Based on this kind of mathematical modelling, Constantinou et al.,[22][23]have developed models for predicting the outcome of association football matches.[24]What makes these models interesting is that, apart from taking into consideration relevant historical data, they also incorporate all these vague subjective factors, like availability of key players, team fatigue, team motivation and so on. They provide the user with the ability to include their best guesses about things that there are no hard facts available. This additional information is then combined with historical facts to provide a revised prediction for future match outcomes. The initial results based on these modelling practices are encouraging since they have demonstrated consistent profitability against published market odds. Nowadays sport betting is a huge business; there are many websites (systems) alongside betting sites, which give tips or predictions for future games.[25]Some of these prediction websites (tipsters) are based on human predictions, but others on computer software sometimes called prediction robots or bots. Prediction bots can use different amount of data and algorithms and because of that their accuracy may vary. These days, with the development of artificial intelligence, it has become possible to create more consistent predictions using statistics. Especially in the field of sports competitions, the impact of artificial intelligence has created a noticeable consistency rate. On the science ofAI soccer predictions, an initiative called soccerseer.com, one of the most successful systems in this sense, manages to predict the results of football competitions with up to 75% accuracy with artificial intelligence. Prediction in the non-economic social sciences differs from the natural sciences and includes multiple alternative methods such as trend projection, forecasting, scenario-building and Delphi surveys. The oil company Shell is particularly well known for its scenario-building activities.[citation needed] One reason for the peculiarity of societal prediction is that in the social sciences, "predictors are part of the social context about which they are trying to make a prediction and may influence that context in the process".[26]As a consequence, societal predictions can become self-destructing. For example, a forecast that a large percentage of a population will become HIV infected based on existing trends may cause more people to avoid risky behavior and thus reduce the HIV infection rate, invalidating the forecast (which might have remained correct if it had not been publicly known). Or, a prediction that cybersecurity will become a major issue may cause organizations to implement more security cybersecurity measures, thus limiting the issue.[26] Inpoliticsit is common to attempt to predict the outcome ofelectionsviapolitical forecastingtechniques (or assess the popularity ofpoliticians) through the use ofopinion polls.Prediction gameshave been used by many corporations and governments to learn about the most likely outcome of future events. Predictions have often been made, from antiquity until the present, by usingparanormalorsupernaturalmeans such asprophecyor by observingomens. Methods includingwater divining,astrology,numerology,fortune telling,interpretation of dreams, and many other forms ofdivination, have been used for millennia to attempt to predict the future. These means of prediction have not been proven by scientific experiments. 
In literature, vision and prophecy are literary devices used to present a possible timeline of future events. They can be distinguished by vision referring to what an individual sees happen. Thebook of Revelation, in theNew Testament, thus uses vision as a literary device in this regard. It is also prophecy or prophetic literature when it is related by an individual in asermonor other public forum. Divinationis the attempt to gain insight into a question or situation by way of an occultic standardized process or ritual.[27]It is an integral part of witchcraft and has been used in various forms for thousands of years. Diviners ascertain their interpretations of how a querent should proceed by reading signs, events, oromens, or through alleged contact with asupernaturalagency, most often described as an angel or a god though viewed by Christians and Jews as a fallen angel or demon.[28] Fiction (especially fantasy,forecastingand science fiction) often features instances of prediction achieved by unconventional means. Science fiction of the pastpredicted various modern technologies. In fantasy literature, predictions are often obtained throughmagicorprophecy, sometimes referring back to old traditions. For example, inJ. R. R. Tolkien'sThe Lord of the Rings, many of the characters possess an awareness of events extending into the future, sometimes as prophecies, sometimes as more-or-less vague 'feelings'. The characterGaladriel, in addition, employs a water "mirror" to show images, sometimes of possible future events. In some ofPhilip K. Dick's stories, mutant humans calledprecogscan foresee the future (ranging from days to years). In the story calledThe Golden Man, an exceptional mutant can predict the future to an indefinite range (presumably up to his death), and thus becomes completely non-human, an animal that follows the predicted paths automatically. Precogs also play an essential role in another of Dick's stories,The Minority Report, which was turned into afilmbySteven Spielbergin 2002. In theFoundationseries byIsaac Asimov, a mathematician finds out that historical events (up to some detail) can be theoretically modelled using equations, and then spends years trying to put the theory in practice. The new science ofpsychohistoryfounded upon his success can simulate history and extrapolate the present into the future. InFrank Herbert's sequels to 1965'sDune, his characters are dealing with the repercussions of being able to see the possible futures and select amongst them. Herbert sees this as a trap of stagnation, and his characters follow a so-called "Golden Path" out of the trap. InUrsula K. Le Guin'sThe Left Hand of Darkness, the humanoid inhabitants of planet Gethen have mastered the art of prophecy and routinely produce data on past, present or future events on request. In this story, this was a minor plot device. For the ancients, prediction, prophesy, and poetry were often intertwined.[29]Prophecies were given in verse, and a word for poet in Latin is “vates” or prophet.[29]Both poets and prophets claimed to be inspired by forces outside themselves. In contemporary cultures, theological revelation and poetry are typically seen as distinct and often even as opposed to each other. Yet the two still are often understood together as symbiotic in their origins, aims, and purposes.[30]
https://en.wikipedia.org/wiki/Prediction
The following is a general comparison of OTP applications that are used to generate one-time passwords for two-factor authentication (2FA) systems using the time-based one-time password (TOTP) or the HMAC-based one-time password (HOTP) algorithms.
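For reference, a minimal sketch of the HOTP (RFC 4226) and TOTP (RFC 6238) algorithms that these applications implement, using only the Python standard library; the base32 secret below is an arbitrary example value:

```python
import base64
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226): HMAC-SHA1 over a counter, then dynamic truncation."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def totp(secret: bytes, period: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238): HOTP evaluated on the current time step."""
    return hotp(secret, int(time.time()) // period, digits)

secret = base64.b32decode("JBSWY3DPEHPK3PXP")  # example base32-encoded shared secret
print(totp(secret))
```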
https://en.wikipedia.org/wiki/Comparison_of_OTP_applications
Inmathematics,Nevanlinna's criterionincomplex analysis, proved in 1920 by the Finnish mathematicianRolf Nevanlinna, characterizesholomorphicunivalent functionson theunit diskwhich arestarlike. Nevanlinna used this criterion to prove theBieberbach conjecturefor starlike univalent functions. A univalent functionhon the unit disk satisfyingh(0) = 0 andh'(0) = 1 is starlike, i.e. has image invariant under multiplication by real numbers in [0,1], if and only ifzh′(z)/h(z){\displaystyle zh^{\prime }(z)/h(z)}has positive real part for |z| < 1 and takes the value 1 at 0. Note that, by applying the result toa•h(rz), the criterion applies on any disc |z| < r with only the requirement thatf(0) = 0 andf'(0) ≠ 0. Leth(z) be a starlike univalent function on |z| < 1 withh(0) = 0 andh'(0) = 1. Fort< 0, define[1] a semigroup of holomorphic mappings ofDinto itself fixing 0. Moreoverhis theKoenigs functionfor the semigroupft. By theSchwarz lemma, |ft(z)| decreases astincreases. Hence But, settingw=ft(z), where Hence and so, dividing by |w|2, Taking reciprocals and lettingtgo to 0 gives for all |z| < 1. Since the left hand side is aharmonic function, themaximum principleimplies the inequality is strict. Conversely if has positive real part andg(0) = 1, thenhcan vanish only at 0, where it must have a simple zero. Now Thus asztraces the circlez=reiθ{\displaystyle z=re^{i\theta }}, the argument of the imageh(reiθ){\displaystyle h(re^{i\theta })}increases strictly. By theargument principle, sinceh{\displaystyle h}has a simple zero at 0, it circles the origin just once. The interior of the region bounded by the curve it traces is therefore starlike. Ifais a point in the interior then the number of solutionsN(a) ofh(z)=awith |z| <ris given by Since this is an integer, depends continuously onaandN(0) = 1, it is identically 1. Sohis univalent and starlike in each disk |z| <rand hence everywhere. Constantin Carathéodoryproved in 1907 that if is a holomorphic function on the unit diskDwith positive real part, then[2][3] In fact it suffices to show the result withgreplaced bygr(z) =g(rz) for anyr< 1 and then pass to the limitr= 1. In that casegextends to a continuous function on the closed disc with positive real part and bySchwarz formula Using the identity it follows that so defines a probability measure, and Hence Let be a univalent starlike function in |z| < 1.Nevanlinna (1921)proved that In fact by Nevanlinna's criterion has positive real part for |z|<1. So by Carathéodory's lemma On the other hand gives the recurrence relation wherea1= 1. Thus so it follows by induction that
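Several display formulas were lost from this excerpt; the two coefficient bounds the argument turns on are, in their standard form, Carathéodory's inequality and Nevanlinna's conclusion for starlike functions:
\[
g(z)=1+c_1z+c_2z^2+\cdots \ \text{holomorphic on }|z|<1,\ \operatorname{Re}g>0
\;\Longrightarrow\; |c_n|\le 2\quad(n\ge 1),
\]
\[
h(z)=z+a_2z^2+a_3z^3+\cdots \ \text{univalent and starlike on }|z|<1
\;\Longrightarrow\; |a_n|\le n\quad(n\ge 1),
\]
the latter being the Bieberbach bound for the starlike case.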
https://en.wikipedia.org/wiki/Nevanlinna%27s_criterion
Geopositioningis the process of determining or estimating thegeographic positionof an object or a person.[1]Geopositioning yields a set ofgeographic coordinates(such aslatitudeandlongitude) in a givenmap datum. Geographic positions may also be expressed indirectly, as a distance inlinear referencingor as a bearing and range from a known landmark. In turn, positions can determine a meaningful location, such as astreet address. Geoposition is sometimes referred to asgeolocation, and the process of geopositioning may also be described asgeo-localization. Specific instances include: Geofencinginvolves creating a virtual geographic boundary (ageofence), enabling software to trigger a response when a device enters or leaves a particular area.[3]Geopositioning is a pre-requisite for geofencing. Geopositioning uses various visual andelectronicmethods includingposition linesandposition circles,celestial navigation,radio navigation,radio and WiFi positioning systems, and the use ofsatellite navigation systems. The calculation requires measurements or observations of distances or angles to reference points whose positions are known. In 2D surveys, observations of three reference points are enough to compute a position in atwo-dimensionalplane. In practice, observations are subject to errors resulting from various physical and atmospheric factors that influence the measurement of distances and angles.[4] A practical example of obtaining a position fix would be for a ship to takebearingmeasurements on threelighthousespositioned along the coast. These measurements could be made visually using ahand bearing compass, or in case of poor visibility, electronically usingradarorradio direction finding. Since all physical observations are subject to errors, the resulting position fix is also subject to inaccuracy. Although in theory two lines of position (LOP) are enough to define a point, in practice 'crossing' more LOPs provides greater accuracy and confidence, especially if the lines cross at a good angle to each other. Three LOPs are considered the minimum for a practical navigational fix.[5]The three LOPs when drawn on the chart will in general form a triangle, known as a 'cocked hat'. The navigator will have more confidence in a position fix that is formed by a small cocked hat with angles close to those of anequilateral triangle.[6]The area of doubt surrounding a position fix is called anerror ellipse. To minimize the error,electronic navigationsystems generally use more than three reference points to compute a position fix to increase thedata redundancy. As more redundant reference points are added, the position fix becomes more accurate and the area of the resulting error ellipse decreases.[7] The process of using 3 reference points to calculate the location is calledTrilateration, and when using more than 3 points,multilateration. Combining multiple observations to compute a position fix is equivalent to solving a system oflinear equations. Navigation systems useregression algorithmssuch asleast squaresin order to compute a position fix in 3D space. 
This is most commonly done by combining distance measurements to 4 or moreGPSsatellites, which orbit the Earth along known paths.[8] The result of position fixing is called aposition fix(PF), or simply afix, a position derived from measuring in relation to external reference points.[9]In nauticalnavigation, the term is generally used with manual or visual techniques, such as the use of intersecting visual or radioposition lines, rather than the use of more automated and accurate electronic methods likeGPS; in aviation, use of electronic navigation aids is more common. A visual fix can be made by using any sighting device with abearingindicator. Two or more objects of known position are sighted, and the bearings recorded. Bearing lines are then plotted on a chart through the locations of the sighted items. The intersection of these lines is the current position of the vessel. Usually, a fix is where two or more position lines intersect at any given time. If three position lines can be obtained, the resulting "cocked hat", where the three lines do not intersect at the same point, but create a triangle, gives the navigator an indication of the accuracy. The most accurate fixes occur when the position lines are perpendicular to each other. Fixes are a necessary aspect of navigation bydead reckoning, which relies on estimates ofspeedandcourse. The fix confirms the actual position during a journey. A fix can introduce inaccuracies if the reference point is not correctly identified or is inaccurately measured. Geopositioning can be referred to both global positioning and outdoor positioning, using for exampleGPS, and to indoor positioning, for all the situations where satellite GPS is not a viable option and the localization process has to happen indoors. For indoor positioning, tracking and localization there are many technologies that can be used, depending on the specific needs and on the environmental characteristics.[10]
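To make the least-squares position fix described above concrete, the following sketch estimates a 2D position from noisy range measurements to known reference points using Gauss-Newton iteration. The anchor coordinates, range values, and function names are illustrative assumptions for the example, not part of any particular navigation system.

import numpy as np

def multilaterate(anchors, measured_ranges, guess, iterations=10):
    """Estimate a 2D position from ranges to known anchors using
    Gauss-Newton least squares (illustrative sketch only)."""
    x = np.asarray(guess, dtype=float)
    for _ in range(iterations):
        diffs = x - anchors                      # vectors from each anchor to the estimate
        dists = np.linalg.norm(diffs, axis=1)    # predicted ranges at the current estimate
        residuals = measured_ranges - dists      # observed minus predicted
        J = diffs / dists[:, None]               # Jacobian of the predicted ranges
        dx, *_ = np.linalg.lstsq(J, residuals, rcond=None)  # linearized least-squares step
        x += dx
    return x

# Three known reference points (e.g. lighthouses, coordinates in km)
# and noisy range observations; the true position is roughly (3, 4).
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 8.0]])
ranges = np.array([5.1, 8.0, 5.0])
print(multilaterate(anchors, ranges, guess=[1.0, 1.0]))

Using more than the minimum number of reference points makes the system overdetermined, which is exactly the redundancy that shrinks the error ellipse discussed above.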
https://en.wikipedia.org/wiki/Geopositioning
Afield-programmable gate array(FPGA) is a type of configurableintegrated circuitthat can be repeatedly programmed after manufacturing. FPGAs are a subset of logic devices referred to asprogrammable logic devices(PLDs). They consist of an array ofprogrammablelogic blockswith a connecting grid, that can be configured "in the field" to interconnect with other logic blocks to perform various digital functions. FPGAs are often used in limited (low) quantity production of custom-made products, and in research and development, where the higher cost of individual FPGAs is not as important, and where creating and manufacturing a custom circuit would not be feasible. Other applications for FPGAs include the telecommunications, automotive, aerospace, and industrial sectors, which benefit from their flexibility, high signal processing speed, and parallel processing abilities. A FPGA configuration is generally written using ahardware description language(HDL) e.g.VHDL, similar to the ones used forapplication-specific integrated circuits(ASICs).Circuit diagramswere formerly used to write the configuration. The logic blocks of an FPGA can be configured to perform complexcombinational functions, or act as simplelogic gateslikeANDandXOR. In most FPGAs, logic blocks also includememory elements, which may be simpleflip-flopsor more sophisticated blocks of memory.[1]Many FPGAs can be reprogrammed to implement differentlogic functions, allowing flexiblereconfigurable computingas performed incomputer software. FPGAs also have a role inembedded systemdevelopment due to their capability to start system software development simultaneously with hardware, enable system performance simulations at a very early phase of the development, and allow various system trials and design iterations before finalizing the system architecture.[2] FPGAs are also commonly used during the development of ASICs to speed up the simulation process. The FPGA industry sprouted fromprogrammable read-only memory(PROM) andprogrammable logic devices(PLDs). PROMs and PLDs both had the option of being programmed in batches in a factory or in the field (field-programmable).[3] Alterawas founded in 1983 and delivered the industry's first reprogrammable logic device in 1984 – the EP300 – which featured a quartz window in the package that allowed users to shine an ultra-violet lamp on thedieto erase theEPROMcells that held the device configuration.[4] Xilinxproduced the first commercially viable field-programmablegate arrayin 1985[3]– the XC2064.[5]The XC2064 had programmable gates and programmable interconnects between gates, the beginnings of a new technology and market.[6]The XC2064 had 64 configurable logic blocks (CLBs), with two three-inputlookup tables(LUTs).[7] In 1987, theNaval Surface Warfare Centerfunded an experiment proposed by Steve Casselman to develop a computer that would implement 600,000 reprogrammable gates. Casselman was successful and a patent related to the system was issued in 1992.[3] Altera and Xilinx continued unchallenged and quickly grew from 1985 to the mid-1990s when competitors sprouted up, eroding a significant portion of their market share. By 1993, Actel (laterMicrosemi, nowMicrochip) was serving about 18 percent of the market.[6] The 1990s were a period of rapid growth for FPGAs, both in circuit sophistication and the volume of production. In the early 1990s, FPGAs were primarily used intelecommunicationsandnetworking. 
By the end of the decade, FPGAs found their way into consumer, automotive, and industrial applications.[8] By 2013, Altera (31 percent), Xilinx (36 percent) and Actel (10 percent) together represented approximately 77 percent of the FPGA market.[9] Companies like Microsoft have started to use FPGAs to accelerate high-performance, computationally intensive systems (like thedata centersthat operate theirBing search engine), due to theperformance per wattadvantage FPGAs deliver.[10]Microsoft began using FPGAs toaccelerateBing in 2014, and in 2018 began deploying FPGAs across other data center workloads for theirAzurecloud computingplatform.[11] The following timelines indicate progress in different aspects of FPGA design. Adesign startis a new custom design for implementation on an FPGA. Contemporary FPGAs have amplelogic gatesand RAM blocks to implement complex digital computations. FPGAs can be used to implement any logical function that anASICcan perform. The ability to update the functionality after shipping,partial re-configurationof a portion of the design[18]and the low non-recurring engineering costs relative to an ASIC design (notwithstanding the generally higher unit cost), offer advantages for many applications.[1] As FPGA designs employ very fast I/O rates and bidirectional databuses, it becomes a challenge to verify correct timing of valid data within setup time and hold time.[19]Floor planninghelps resource allocation within FPGAs to meet these timing constraints. Some FPGAs have analog features in addition to digital functions. The most common analog feature is a programmableslew rateon each output pin. This allows the user to set low rates on lightly loaded pins that would otherwiseringorcoupleunacceptably, and to set higher rates on heavily loaded high-speed channels that would otherwise run too slowly.[20][21]Also common are quartz-crystal oscillatordriver circuitry, on-chipRC oscillators, andphase-locked loopswith embeddedvoltage-controlled oscillatorsused for clock generation and management as well as for high-speed serializer-deserializer (SERDES) transmit clocks and receiver clock recovery. Fairly common are differentialcomparatorson input pins designed to be connected todifferential signalingchannels. A fewmixed signalFPGAs have integrated peripheralanalog-to-digital converters(ADCs) anddigital-to-analog converters(DACs) with analog signal conditioning blocks, allowing them to operate as asystem on a chip(SoC).[22]Such devices blur the line between an FPGA, which carries digital ones and zeros on its internal programmable interconnect fabric, andfield-programmable analog array(FPAA), which carries analog values on its internal programmable interconnect fabric. The most common FPGA architecture consists of an array oflogic blockscalled configurable logic blocks (CLBs) or logic array blocks (LABs) (depending on vendor),I/O pads, and routing channels.[1]Generally, all the routing channels have the same width (number of signals). Multiple I/O pads may fit into the height of one row or the width of one column in the array. "An application circuit must be mapped into an FPGA with adequate resources. While the number of logic blocks and I/Os required is easily determined from the design, the number of routing channels needed may vary considerably even among designs with the same amount of logic. For example, acrossbar switchrequires much more routing than asystolic arraywith the same gate count. 
Since unused routing channels increase the cost (and decrease the performance) of the FPGA without providing any benefit, FPGA manufacturers try to provide just enough channels so that most designs that will fit in terms oflookup tables(LUTs) and I/Os can berouted. This is determined by estimates such as those derived fromRent's ruleor by experiments with existing designs."[23] In general, a logic block consists of a few logical cells. A typical cell consists of a 4-input LUT, afull adder(FA) and aD-type flip-flop. The LUT might be split into two 3-input LUTs. Innormal modethose are combined into a 4-input LUT through the firstmultiplexer(mux). Inarithmeticmode, their outputs are fed to the adder. The selection of mode is programmed into the second mux. The output can be eithersynchronousorasynchronous, depending on the programming of the third mux. In practice, the entire adder or parts of it arestored as functionsinto the LUTs in order to savespace.[24][25][26] Modern FPGA families expand upon the above capabilities to include higher-level functionality fixed in silicon. Having these common functions embedded in the circuit reduces the area required and gives those functions increased performance compared to building them from logical primitives. Examples of these includemultipliers, genericDSP blocks,embedded processors, high-speed I/O logic and embeddedmemories. Higher-end FPGAs can contain high-speedmulti-gigabit transceiversandhard IP coressuch asprocessor cores,Ethernetmedium access control units,PCIorPCI Expresscontrollers, and externalmemory controllers. These cores exist alongside the programmable fabric, but they are built out oftransistorsinstead of LUTs so they have ASIC-level performance and power consumption without consuming a significant amount of fabric resources, leaving more of the fabric free for the application-specific logic. The multi-gigabit transceivers also contain high-performancesignal conditioningcircuitry along with high-speed serializers and deserializers, components that cannot be built out of LUTs. Higher-level physical layer (PHY) functionality such asline codingmay or may not be implemented alongside the serializers and deserializers in hard logic, depending on the FPGA. An alternate approach to using hard macro processors is to make use ofsoft processorIP coresthat are implemented within the FPGA logic.Nios II,MicroBlazeandMico32are examples of popular softcore processors. Many modern FPGAs are programmed atrun time, which has led to the idea ofreconfigurable computingor reconfigurable systems –CPUsthat reconfigure themselves to suit the task at hand. Additionally, new non-FPGA architectures are beginning to emerge. Software-configurable microprocessors such as the Stretch S5000 adopt a hybrid approach by providing an array of processor cores and FPGA-like programmable cores on the same chip. In 2012 the coarse-grained architectural approach was taken a step further by combining thelogic blocksand interconnects of traditional FPGAs with embeddedmicroprocessorsand related peripherals to form a completesystem on a programmable chip. Examples of such hybrid technologies can be found in theXilinxZynq-7000 allProgrammable SoC,[27]which includes a 1.0GHzdual-coreARM Cortex-A9MPCore processorembeddedwithin the FPGA's logic fabric,[28]or in theAlteraArria V FPGA, which includes an 800 MHzdual-coreARM Cortex-A9MPCore. TheAtmelFPSLIC is another such device, which uses anAVRprocessor in combination with Atmel's programmable logic architecture. 
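Returning to the logic-cell structure described earlier (a 4-input LUT feeding a D-type flip-flop), the following Python sketch models such a cell as a 16-entry truth table with an optional registered output. The class and signal names are invented for illustration; real logic cells also contain carry chains and mode multiplexers that are not modeled here.

class LogicCell:
    """Toy model of an FPGA logic cell: a 4-input LUT feeding a D flip-flop."""

    def __init__(self, truth_table):
        assert len(truth_table) == 16          # one output bit per input combination
        self.lut = truth_table                 # the cell's configuration bits
        self.q = 0                             # flip-flop state

    def combinational(self, a, b, c, d):
        index = (a << 3) | (b << 2) | (c << 1) | d
        return self.lut[index]                 # LUT lookup realizes any 4-input function

    def clock(self, a, b, c, d):
        self.q = self.combinational(a, b, c, d)  # registered (synchronous) output
        return self.q

# Configure the LUT as a 4-input XOR by enumerating its truth table.
xor4 = LogicCell([bin(i).count("1") & 1 for i in range(16)])
print(xor4.combinational(1, 0, 1, 1))  # -> 1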
TheMicrosemiSmartFusiondevices incorporate an ARM Cortex-M3 hard processor core (with up to 512 kB offlashand 64 kB of RAM) and analogperipheralssuch as a multi-channelanalog-to-digital convertersanddigital-to-analog convertersin theirflash memory-based FPGA fabric.[citation needed] Most of the logic inside of an FPGA issynchronous circuitrythat requires aclock signal. FPGAs contain dedicated global and regional routing networks for clock and reset, typically implemented as anH tree, so they can be delivered with minimalskew. FPGAs may contain analogphase-locked loopordelay-locked loopcomponents to synthesize newclock frequenciesand managejitter. Complex designs can use multiple clocks with different frequency and phase relationships, each forming separateclock domains. These clock signals can be generated locally by an oscillator or they can be recovered from adata stream. Care must be taken when buildingclock domain crossingcircuitry to avoidmetastability. Some FPGAs containdual port RAMblocks that are capable of working with different clocks, aiding in the construction of buildingFIFOsand dual port buffers that bridge clock domains. To shrink the size and power consumption of FPGAs, vendors such asTabulaandXilinxhave introduced3D or stacked architectures.[29][30]Following the introduction of its28 nm7-series FPGAs, Xilinx said that several of the highest-density parts in those FPGA product lines will be constructed using multiple dies in one package, employing technology developed for 3D construction and stacked-die assemblies. Xilinx's approach stacks several (three or four) active FPGA dies side by side on a siliconinterposer– a single piece of silicon that carries passive interconnect.[30][31]The multi-die construction also allows different parts of the FPGA to be created with different process technologies, as the process requirements are different between the FPGA fabric itself and the very high speed 28 Gbit/s serial transceivers. An FPGA built in this way is called aheterogeneousFPGA.[32] Altera's heterogeneous approach involves using a single monolithic FPGA die and connecting other dies and technologies to the FPGA using Intel's embedded multi_die interconnect bridge (EMIB) technology.[33] To define the behavior of the FPGA, the user provides a design in ahardware description language(HDL) or as aschematicdesign. The HDL form is more suited to work with large structures because it's possible to specify high-level functional behavior rather than drawing every piece by hand. However, schematic entry can allow for easier visualization of a design and itscomponent modules. Using anelectronic design automationtool, a technology-mappednetlistis generated. The netlist can then be fit to the actual FPGA architecture using a process calledplace and route, usually performed by the FPGA company's proprietary place-and-route software. The user will validate the results usingtiming analysis,simulation, and otherverification and validationtechniques. Once the design and validation process is complete, the binary file generated, typically using the FPGA vendor's proprietary software, is used to (re-)configure the FPGA. This file is transferred to the FPGA via aserial interface(JTAG) or to an external memory device such as anEEPROM. The most common HDLs areVHDLandVerilog.National Instruments'LabVIEWgraphical programming language (sometimes referred to asG) has an FPGA add-in module available to target and program FPGA hardware. 
Verilog was created to simplify the process making HDL more robust and flexible. Verilog has a C-like syntax, unlike VHDL.[34][self-published source?] To simplify the design of complex systems in FPGAs, there exist libraries of predefined complex functions and circuits that have been tested and optimized to speed up the design process. These predefined circuits are commonly calledintellectual property (IP) cores, and are available from FPGA vendors and third-party IP suppliers. They are rarely free, and typically released under proprietary licenses. Other predefined circuits are available from developer communities such asOpenCores(typically released underfree and open sourcelicenses such as theGPL,BSDor similar license). Such designs are known asopen-source hardware. In a typicaldesign flow, an FPGA application developer will simulate the design at multiple stages throughout the design process. Initially theRTLdescription inVHDLorVerilogis simulated by creatingtest benchesto simulate the system and observe results. Then, after thesynthesisengine has mapped the design to a netlist, the netlist is translated to agate-leveldescription where simulation is repeated to confirm the synthesis proceeded without errors. Finally, the design is laid out in the FPGA at which pointpropagation delayvalues can beback-annotatedonto the netlist, and the simulation can be run again with these values. More recently,OpenCL(Open Computing Language) is being used by programmers to take advantage of the performance and power efficiencies that FPGAs provide. OpenCL allows programmers to develop code in theC programming language.[35]For further information, seehigh-level synthesisandC to HDL. Most FPGAs rely on anSRAM-based approach to be programmed. These FPGAs are in-system programmable and re-programmable, but require external boot devices. For example,flash memoryorEEPROMdevices may load contents into internal SRAM that controls routing and logic. The SRAM approach is based onCMOS. Rarer alternatives to the SRAM approach include: In 2016, long-time industry rivalsXilinx(now part ofAMD) andAltera(now part ofİntel) were the FPGA market leaders.[37]At that time, they controlled nearly 90 percent of the market. Both Xilinx (now AMD) and Altera (now Intel) provideproprietaryelectronic design automationsoftware forWindowsandLinux(ISE/VivadoandQuartus) which enables engineers todesign, analyze,simulate, andsynthesize(compile) their designs.[38][39] In March 2010,Tabulaannounced their FPGA technology that usestime-multiplexedlogic and interconnect that claims potential cost savings for high-density applications.[40]On March 24, 2015, Tabula officially shut down.[41] On June 1, 2015, Intel announced it would acquire Altera for approximatelyUS$16.7 billion and completed the acquisition on December 30, 2015.[42] On October 27, 2020, AMD announced it would acquire Xilinx[43]and completed the acquisition valued at about US$50 billion in February 2022.[44] In February 2024 Altera became independent of Intel again.[45] Other manufacturers include: An FPGA can be used to solve any problem which iscomputable. FPGAs can be used to implement asoft microprocessor, such as the XilinxMicroBlazeor AlteraNios II. But their advantage lies in that they are significantly faster for some applications because of theirparallel natureandoptimalityin terms of the number of gates used for certain processes.[51] FPGAs were originally introduced as competitors toCPLDsto implementglue logicforprinted circuit boards. 
As their size, capabilities, and speed increased, FPGAs took over additional functions to the point where some are now marketed as fullsystems on chips(SoCs). Particularly with the introduction of dedicatedmultipliersinto FPGA architectures in the late 1990s, applications that had traditionally been the sole reserve ofdigital signal processors(DSPs) began to use FPGAs instead.[52][53] The evolution of FPGAs has motivated an increase in the use of these devices, whose architecture allows the development of hardware solutions optimized for complex tasks, such as 3D MRI image segmentation, 3D discrete wavelet transform, tomographic image reconstruction, or PET/MRI systems.[54][55]The developed solutions can perform intensive computation tasks with parallel processing, are dynamically reprogrammable, and have a low cost, all while meeting the hard real-time requirements associated with medical imaging. Another trend in the use of FPGAs ishardware acceleration, where one can use the FPGA to accelerate certain parts of an algorithm and share part of the computation between the FPGA and a general-purpose processor. The search engineBingis noted for adopting FPGA acceleration for its search algorithm in 2014.[56]As of 2018[update], FPGAs are seeing increased use asAI acceleratorsincluding Microsoft's Project Catapult[11]and for acceleratingartificial neural networksformachine learningapplications. Originally,[when?]FPGAs were reserved for specificvertical applicationswhere the volume of production is small. For these low-volume applications, the premium that companies pay in hardware cost per unit for a programmable chip is more affordable than the development resources spent on creating an ASIC. Often a custom-made chip would be cheaper if made in larger quantities, but FPGAs may be chosen to quickly bring a product to market. By 2017, new cost and performance dynamics broadened the range of viable applications.[citation needed] Other uses for FPGAs include: FPGAs play a crucial role in modern military communications, especially in systems like theJoint Tactical Radio System(JTRS) and in devices from companies such asThalesandHarris Corporation. Their flexibility and programmability make them ideal for military communications, offering customizable and secure signal processing. In the JTRS, used by the US military, FPGAs provide adaptability and real-time processing, crucial for meeting various communication standards and encryption methods.[63] FPGAs have both advantages and disadvantages as compared to ASICs or secure microprocessors, concerninghardware security. FPGAs' flexibility makes malicious modifications duringfabricationa lower risk.[64]Previously, for many FPGAs, the designbitstreamwas exposed while the FPGA loads it from external memory (typically on every power-on). All major FPGA vendors now offer a spectrum of security solutions to designers such as bitstreamencryptionandauthentication. For example,AlteraandXilinxofferAESencryption (up to 256-bit) for bitstreams stored in an external flash memory.Physical unclonable functions(PUFs) are integrated circuits that have their own unique signatures, due to processing, and can also be used to secure FPGAs while taking up very little hardware space.[65] FPGAs that store their configuration internally in nonvolatile flash memory, such asMicrosemi's ProAsic 3 orLattice's XP2 programmable devices, do not expose the bitstream and do not needencryption. 
In addition, flash memory for alookup tableprovidessingle event upsetprotection for space applications.[clarification needed]Customers wanting a higher guarantee of tamper resistance can use write-once, antifuse FPGAs from vendors such asMicrosemi. With its Stratix 10 FPGAs and SoCs,Alteraintroduced a Secure Device Manager andphysical unclonable functionsto provide high levels of protection against physical attacks.[66] In 2012 researchers Sergei Skorobogatov and Christopher Woods demonstrated that some FPGAs can be vulnerable to hostile intent. They discovered a criticalbackdoorvulnerabilityhad been manufactured in silicon as part of the Actel/Microsemi ProAsic 3 making it vulnerable on many levels such as reprogramming crypto andaccess keys, accessing unencrypted bitstream, modifyinglow-levelsilicon features, and extractingconfigurationdata.[67] In 2020 a critical vulnerability (named "Starbleed") was discovered in all Xilinx 7series FPGAs that rendered bitstream encryption useless. There is no workaround. Xilinx did not produce a hardware revision. Ultrascale and later devices, already on the market at the time, were not affected. Historically, FPGAs have been slower, less energy efficient and generally achieved less functionality than their fixed ASIC counterparts. A study from 2006 showed that designs implemented on FPGAs need on average 40 times as much area, draw 12 times as much dynamic power, and run at one third the speed of corresponding ASIC implementations.[68] Advantages of FPGAs include the ability to re-program when already deployed (i.e. "in the field") to fixbugs, and often include shortertime to marketand lowernon-recurring engineeringcosts. Vendors can also take a middle road viaFPGA prototyping: developing their prototype hardware on FPGAs, but manufacture their final version as an ASIC so that it can no longer be modified after the design has been committed. This is often also the case with new processor designs.[69]Some FPGAs have the capability ofpartial re-configurationthat lets one portion of the device be re-programmed while other portions continue running.[70][71] The primary differences betweencomplex programmable logic devices(CPLDs) and FPGAs arearchitectural. A CPLD has a comparatively restrictive structure consisting of one or more programmablesum-of-productslogic arrays feeding a relatively small number of clockedregisters. As a result, CPLDs are less flexible but have the advantage of more predictabletiming delaysanda higher logic-to-interconnect ratio.[citation needed]FPGA architectures, on the other hand, are dominated byinterconnect. This makes them far more flexible (in terms of the range of designs that are practical for implementation on them) but also far more complex to design for, or at least requiring more complexelectronic design automation(EDA) software. In practice, the distinction between FPGAs and CPLDs is often one of size as FPGAs are usually much larger in terms of resources than CPLDs. Typically only FPGAs contain more complexembedded functionssuch asadders,multipliers,memory, andserializer/deserializers. Another common distinction is that CPLDs contain embeddedflash memoryto store their configuration while FPGAs usually require externalnon-volatile memory(but not always). When a design requires simple instant-on(logic is already configured at power-up)CPLDs are generally preferred. For most other applications FPGAs are generally preferred. Sometimes both CPLDs and FPGAs are used in a single system design. 
In those designs, CPLDs generally perform glue logic functions and are responsible for "booting" the FPGA as well as controllingresetand boot sequence of the complete circuit board. Therefore, depending on the application it may be judicious to use both FPGAs and CPLDs in a single design.[72]
https://en.wikipedia.org/wiki/Field-programmable_gate_array
The Secure Hash Algorithms are a family of cryptographic hash functions published by the National Institute of Standards and Technology (NIST) as a U.S. Federal Information Processing Standard (FIPS). The corresponding standards are FIPS PUB 180 (original SHA), FIPS PUB 180-1 (SHA-1), and FIPS PUB 180-2 (SHA-1, SHA-256, SHA-384, and SHA-512). NIST has also published Draft FIPS Publication 202, the SHA-3 Standard, separate from the Secure Hash Standard (SHS). In comparisons of these algorithms, internal state means the "internal hash sum" after each compression of a data block. All SHA-family algorithms, as FIPS-approved security functions, are subject to official validation by the CMVP (Cryptographic Module Validation Program), a joint program run by the American National Institute of Standards and Technology (NIST) and the Canadian Communications Security Establishment (CSE).
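As a usage illustration, Python's standard hashlib module exposes the FIPS 180 family; the snippet below computes several digests of the same arbitrary message to show the differing output lengths.

import hashlib

message = b"The quick brown fox jumps over the lazy dog"
for name in ("sha1", "sha256", "sha384", "sha512"):
    digest = hashlib.new(name, message).hexdigest()
    # each hex character encodes 4 bits of the digest
    print(f"{name:>6}: {len(digest) * 4:3d} bits  {digest[:16]}...")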
https://en.wikipedia.org/wiki/Secure_Hash_Algorithms
This is a comparison of standards of wireless networking technologies for devices such as mobile phones. A new generation of cellular standards has appeared approximately every ten years since 1G systems were introduced in 1979 and the early to mid-1980s. Global System for Mobile Communications (GSM, around 80–85% market share) and IS-95 (around 10–15% market share) were the two most prevalent 2G mobile communication technologies in 2007.[1] In 3G, the most prevalent technology was UMTS, with CDMA-2000 in close contention. All radio access technologies have to solve the same problem: dividing the finite RF spectrum among multiple users as efficiently as possible. GSM uses TDMA and FDMA for user and cell separation. UMTS, IS-95 and CDMA-2000 use CDMA. WiMAX and LTE use OFDM. In theory, CDMA, TDMA and FDMA have exactly the same spectral efficiency, but in practice each has its own challenges – power control in the case of CDMA, timing in the case of TDMA, and frequency generation/filtering in the case of FDMA. For a classic example for understanding the fundamental difference between TDMA and CDMA, imagine a cocktail party where couples are talking to each other in a single room; the room represents the available bandwidth.[4] A market-share comparison graphic shows that, in a fast-growing market, GSM/3GSM grows faster than the market and is gaining market share, the CDMA family grows at about the same rate as the market, while other technologies are being phased out. As a reference, a comparison of mobile and non-mobile wireless Internet standards follows. Antenna and RF front end enhancements and minor protocol timer tweaks have helped deploy long-range P2P networks that compromise on radial coverage, throughput and/or spectral efficiency (310 km and 382 km). Notes: All speeds are theoretical maximums and will vary by a number of factors, including the use of external antennas, distance from the tower and the ground speed (e.g. communications on a train may be poorer than when standing still). Usually the bandwidth is shared between several terminals. The performance of each technology is determined by a number of constraints, including the spectral efficiency of the technology, the cell sizes used, and the amount of spectrum available. For more comparison tables, see bit rate progress trends, comparison of mobile phone standards, spectral efficiency comparison table and OFDM system comparison table.
https://en.wikipedia.org/wiki/Comparison_of_mobile_phone_standards
Thomas J. Fararo(February 11, 1933 - August 20, 2020) was Distinguished ServiceProfessor Emeritusat theUniversity of Pittsburgh. After earning aPh.D.insociologyatSyracuse Universityin 1963, he received a three-year postdoctoralfellowshipfor studies inpureandapplied mathematicsatStanford University(1964–1967). In 1967, he joined thefacultyof University of Pittsburgh; during 1972–1973, he was visiting professor at the University of York in England.[1] Fararo is listed inAmerican Men and Women of Science,Who's Who in America, andWho's Who in Frontier Science and Technology. In 1998, he received the Distinguished Career Award from the Mathematical Sociology section of theAmerican Sociological Association. In addition to over a dozen books, Fararo has published over two dozen book chapters, over one dozen articles in reference works, and over 50 journal articles. Some of his books are edited works that relate to his career-long interest in making mathematical ideas relevant to the development of sociological theory. Fararo has served on the editorial boards of theAmerican Journal of Sociology, theAmerican Sociological Review, theJournal of Mathematical Sociology,Social Networks,Sociological Forum, andSociological Theory. Fararo has been both an originator and an explicator of ideas and methods relating to the use of formal methods in sociological theory. In his original work, he has employed theories and methods relating to social networks in combination with a focus on social processes. This combination is illustrated by the theoretical method he has called E-state Structuralism (where E stands for Expectations) with work on this done with former student John Skvoretz. He often employed the axiomatic method in such work, as in the 2003 monograph with his student Kenji Kosaka that sets out a formal theory of how images of stratification are generated. In his expository work, he has attempted to move the field of sociology closer to a conception of theorizing that is more formal, as in his 1973 bookMathematical Sociologyand in various papers and edited books, including the 1984 volumeMathematical Ideas and Sociological Theory. One of his objectives has been to articulate a coherent vision of the core of sociological theory: its philosophy, its key theoretical problems, and its methods, especially those employing formal representation. This objective is represented in his 1989 book,The Meaning of General Theoretical Sociology: Tradition and Formalization. The general vision that informs Fararo's theoretical work is "the spirit of unification," a theme that is set out inSocial Action Systems: Foundation and Synthesis in Sociological Theory, a 2001 book that analyzes key theories from the standpoint of the aspiration of synthesis, moving toward more comprehensive theories of social life.
https://en.wikipedia.org/wiki/Thomas_Fararo
In the field of statistics, bias is a systematic tendency in which the methods used to gather data and estimate a sample statistic present an inaccurate, skewed or distorted (biased) depiction of reality. Statistical bias exists in numerous stages of the data collection and analysis process, including: the source of the data, the methods used to collect the data, the estimator chosen, and the methods used to analyze the data. Data analysts can take various measures at each stage of the process to reduce the impact of statistical bias in their work. Understanding the source of statistical bias can help to assess whether the observed results are close to actuality. Issues of statistical bias have been argued to be closely linked to issues of statistical validity.[1] Statistical bias can have significant real world implications as data is used to inform decision making across a wide variety of processes in society. Data is used to inform lawmaking, industry regulation, corporate marketing and distribution tactics, and institutional policies in organizations and workplaces. Therefore, there can be significant implications if statistical bias is not accounted for and controlled. For example, if a pharmaceutical company wishes to explore the effect of a medication on the common cold but the data sample only includes men, any conclusions made from that data will be biased towards how the medication affects men rather than people in general. That means the information would be incomplete and not useful for deciding if the medication is ready for release to the general public. In this scenario, the bias can be addressed by broadening the sample. This sampling error is only one of the ways in which data can be biased. Bias can be differentiated from other statistical mistakes such as inaccuracy (instrument failure or inadequacy), lack of data, or mistakes in transcription (typos). Bias implies that the data selection may have been skewed by the collection criteria. Other forms of human-based bias emerge in data collection as well, such as response bias, in which participants give inaccurate responses to a question. Bias does not preclude the existence of any other mistakes: one may have a poorly designed sample, an inaccurate measurement device, and typos in recording data simultaneously. Ideally, all factors are controlled and accounted for. It is also useful to recognize that the term "error" refers specifically to outcomes – errors of rejection or acceptance of the hypothesis being tested, or random errors – rather than to the process.[2] The terms flaw or mistake are recommended to differentiate procedural errors from these specifically defined outcome-based terms. Statistical bias is a feature of a statistical technique or of its results whereby the expected value of the results differs from the true underlying quantitative parameter being estimated. The bias of an estimator of a parameter should not be confused with its degree of precision, as the degree of precision is a measure of the sampling error. The bias is defined as follows: let T be a statistic used to estimate a parameter θ, and let E(T) denote the expected value of T. Then bias(T, θ) = E(T) − θ is called the bias of the statistic T (with respect to θ).
If bias(T, θ) = 0, then T is said to be an unbiased estimator of θ; otherwise, it is said to be a biased estimator of θ. The bias of a statistic T is always relative to the parameter θ it is used to estimate, but the parameter θ is often omitted when it is clear from the context what is being estimated. Statistical bias comes from all stages of data analysis; the following sources of bias are listed by stage. Selection bias involves individuals being more likely to be selected for study than others, biasing the sample. This can also be termed selection effect, sampling bias and Berksonian bias.[3] Type I and type II errors in statistical hypothesis testing lead to wrong results.[12] A Type I error happens when the null hypothesis is correct but is rejected. For instance, suppose that the null hypothesis is that an average driving speed between 75 and 85 km/h is not considered speeding, while an average speed outside that range is. If someone whose average driving speed lies within that range nevertheless receives a ticket, the decision maker has committed a Type I error: the average driving speed meets the null hypothesis, but the hypothesis is rejected. Conversely, a Type II error happens when the null hypothesis is not correct but is accepted. Bias in hypothesis testing occurs when the power (the complement of the Type II error rate) at some alternative is lower than the supremum of the Type I error rate (which is usually the significance level, α). Equivalently, a test is said to be unbiased if its rejection rate at every alternative is no lower than its rejection rate at every point in the null hypothesis set.[13] The bias of an estimator is the difference between an estimator's expected value and the true value of the parameter being estimated. Although an unbiased estimator is theoretically preferable to a biased estimator, in practice biased estimators with small biases are frequently used. A biased estimator may be more useful for several reasons. First, an unbiased estimator may not exist without further assumptions. Second, sometimes an unbiased estimator is hard to compute. Third, a biased estimator may have a lower mean squared error. Reporting bias involves a skew in the availability of data, such that observations of a certain kind are more likely to be reported. Depending on the type of bias present, researchers and analysts can take different steps to reduce bias in a data set. All types of bias mentioned above have corresponding measures which can be taken to reduce or eliminate their impacts. Bias should be accounted for at every step of the data collection process, beginning with clearly defined research parameters and consideration of the team who will be conducting the research.[2] Observer bias may be reduced by implementing a blind or double-blind technique. Avoidance of p-hacking is essential to the process of accurate data collection. One way to check for bias in results afterwards is to rerun analyses with different independent variables and observe whether a given phenomenon still occurs in the dependent variables.[17] Careful use of language in reporting can reduce misleading phrases, such as describing a result as "approaching" statistical significance when it has not actually been achieved.[2]
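A standard illustration of estimator bias is the sample variance: dividing by n gives a biased estimator, while dividing by n − 1 (Bessel's correction) is unbiased. The simulation below is a sketch with arbitrary parameters; it shows the biased estimator's average value falling short of the true variance.

import numpy as np

rng = np.random.default_rng(0)
true_var = 4.0                       # variance of the sampled population
n, trials = 5, 100_000

samples = rng.normal(0.0, np.sqrt(true_var), size=(trials, n))
biased = samples.var(axis=1, ddof=0)     # divide by n
unbiased = samples.var(axis=1, ddof=1)   # divide by n - 1 (Bessel's correction)

print("true variance:        ", true_var)
print("mean biased estimate:  ", biased.mean())    # about true_var * (n-1)/n = 3.2
print("mean unbiased estimate:", unbiased.mean())  # about 4.0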
https://en.wikipedia.org/wiki/Bias_(statistics)
Language model benchmarks are standardized tests designed to evaluate the performance of language models on various natural language processing tasks. These tests are intended for comparing different models' capabilities in areas such as language understanding, generation, and reasoning. Benchmarks generally consist of a dataset and corresponding evaluation metrics. The dataset provides text samples and annotations, while the metrics measure a model's performance on tasks like question answering, text classification, and machine translation. These benchmarks are developed and maintained by academic institutions, research organizations, and industry players to track progress in the field. Benchmarks may be described by various adjectives that are not mutually exclusive. The boundary between a benchmark and a dataset is not sharp. Generally, a dataset contains three "splits": training, test, and validation. Both the test and validation splits are essentially benchmarks. In general, a benchmark is distinguished from a test/validation dataset in that a benchmark is typically intended to measure the performance of many different models that are not trained specifically for doing well on the benchmark, while a test/validation set is intended to measure the performance of models trained specifically on the corresponding training set. In other words, a benchmark may be thought of as a test/validation set without a corresponding training set. Conversely, certain benchmarks may be used as a training set, such as the English Gigaword[4] or the One Billion Word Benchmark, which in modern language is just the negative log likelihood loss on a pretraining set with 1 billion words.[5] Indeed, the distinction between benchmark and dataset in language models became sharper after the rise of the pretraining paradigm. Generally, the life cycle of a benchmark consists of several stages.[6] Like datasets, benchmarks are typically constructed by several methods, individually or in combination. Generally, benchmarks are fully automated. This limits the questions that can be asked. For example, with mathematical questions, "prove a claim" would be difficult to check automatically, while "calculate an answer with a unique integer value" would be automatically checkable. With programming tasks, the answer can generally be checked by running unit tests, with an upper limit on runtime. Benchmark scores come in several kinds. The pass@n score can be estimated more accurately by making N > n attempts and using the unbiased estimator 1 − C(N − c, n)/C(N, n), where c is the number of correct attempts and C denotes the binomial coefficient.[8] For less well-formed tasks, where the output can be any sentence, commonly used scores include BLEU, ROUGE, METEOR, NIST, word error rate, LEPOR, CIDEr,[9] SPICE,[10] etc. Essentially any dataset can be used as a benchmark for statistical language modeling, with the perplexity (or near-equivalently, negative log-likelihood and bits per character, as in Shannon's original test of the entropy of the English language[19]) being used as the benchmark score. For example, the original GPT-2 announcement included the model's scores on WikiText-2, enwik8, text8, and WikiText-103 (all being standard language datasets made from the English Wikipedia).[3][20] However, some datasets have been more commonly used, or specifically designed, for use as a benchmark. See[22] for a review of over 100 such benchmarks.
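As a concrete illustration of perplexity as a benchmark score, the snippet below converts per-token probabilities into average negative log-likelihood, perplexity, and bits per token. The token probabilities are made-up numbers for the example, not taken from any real model.

import math

# Hypothetical probabilities a model assigned to each token of a held-out text.
token_probs = [0.25, 0.10, 0.50, 0.05, 0.20]

nll = -sum(math.log(p) for p in token_probs) / len(token_probs)  # nats per token
perplexity = math.exp(nll)
bits_per_token = nll / math.log(2)

print(f"avg NLL: {nll:.3f} nats/token, perplexity: {perplexity:.2f}, {bits_per_token:.2f} bits/token")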
Some benchmarks are "omnibus", meaning they are made by combining several previous benchmarks. Some benchmarks were designed specifically to test the processing of very long continuous text.
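The unbiased pass@n estimator quoted earlier can be implemented directly; the sketch below follows the formula 1 − C(N − c, n)/C(N, n), computed as a product to avoid large factorials, with example numbers that are purely illustrative.

import math

def pass_at_n(N, c, n):
    """Unbiased estimate of pass@n from N sampled attempts of which c are correct."""
    if N - c < n:
        return 1.0  # too few failures to fill n draws: at least one success is guaranteed
    # 1 - C(N-c, n)/C(N, n), expanded as a telescoping product
    prob_all_fail = 1.0
    for i in range(n):
        prob_all_fail *= (N - c - i) / (N - i)
    return 1.0 - prob_all_fail

print(pass_at_n(N=200, c=30, n=10))   # estimated pass@10 from 200 attempts, 30 correct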
https://en.wikipedia.org/wiki/Language_model_benchmark
SHA-3(Secure Hash Algorithm 3) is the latest[4]member of theSecure Hash Algorithmfamily of standards, released byNISTon August 5, 2015.[5][6][7]Although part of the same series of standards, SHA-3 is internally different from theMD5-likestructureofSHA-1andSHA-2. SHA-3 is a subset of the broader cryptographic primitive familyKeccak(/ˈkɛtʃæk/or/ˈkɛtʃɑːk/),[8][9]designed byGuido Bertoni,Joan Daemen,Michaël Peeters, andGilles Van Assche, building uponRadioGatún. Keccak's authors have proposed additional uses for the function, not (yet) standardized by NIST, including astream cipher, anauthenticated encryptionsystem, a "tree" hashing scheme for faster hashing on certain architectures,[10][11]andAEADciphers Keyak and Ketje.[12][13] Keccak is based on a novel approach calledsponge construction.[14]Sponge construction is based on a wide random function or randompermutation, and allows inputting ("absorbing" in sponge terminology) any amount of data, and outputting ("squeezing") any amount of data, while acting as a pseudorandom function with regard to all previous inputs. This leads to great flexibility. As of 2022, NIST does not plan to withdraw SHA-2 or remove it from the revised Secure Hash Standard.[15]The purpose of SHA-3 is that it can be directly substituted for SHA-2 in current applications if necessary, and to significantly improve the robustness of NIST's overall hash algorithm toolkit.[16] For small message sizes, the creators of the Keccak algorithms and the SHA-3 functions suggest using the faster functionKangarooTwelvewith adjusted parameters and a new tree hashing mode without extra overhead. The Keccak algorithm is the work of Guido Bertoni,Joan Daemen(who also co-designed theRijndaelcipher withVincent Rijmen), Michaël Peeters, andGilles Van Assche. It is based on earlier hash function designsPANAMAandRadioGatún. PANAMA was designed by Daemen and Craig Clapp in 1998. RadioGatún, a successor of PANAMA, was designed by Daemen, Peeters, and Van Assche, and was presented at the NIST Hash Workshop in 2006.[17]Thereference implementationsource codewas dedicated topublic domainviaCC0waiver.[18] In 2006,NISTstarted to organize theNIST hash function competitionto create a new hash standard, SHA-3. SHA-3 is not meant to replaceSHA-2, as no significant attack on SHA-2 has been publicly demonstrated[needs update]. Because of the successful attacks onMD5,SHA-0andSHA-1,[19][20]NIST perceived a need for an alternative, dissimilar cryptographic hash, which became SHA-3. After a setup period, admissions were to be submitted by the end of 2008. Keccak was accepted as one of the 51 candidates. In July 2009, 14 algorithms were selected for the second round. Keccak advanced to the last round in December 2010.[21] During the competition, entrants were permitted to "tweak" their algorithms to address issues that were discovered. Changes that have been made to Keccak are:[22][23] On October 2, 2012, Keccak was selected as the winner of the competition.[8] In 2014, the NIST published a draftFIPS202 "SHA-3 Standard: Permutation-Based Hash and Extendable-Output Functions".[24]FIPS 202 was approved on August 5, 2015.[25] On August 5, 2015, NIST announced that SHA-3 had become a hashing standard.[26] In early 2013 NIST announced they would select different values for the "capacity", the overall strength vs. speed parameter, for the SHA-3 standard, compared to the submission.[27][28]The changes caused some turmoil. 
The hash function competition called for hash functions at least as secure as the SHA-2 instances. It means that ad-bit output should haved/2-bit resistance tocollision attacksandd-bit resistance topreimage attacks, the maximum achievable fordbits of output. Keccak's security proof allows an adjustable level of security based on a "capacity"c, providingc/2-bit resistance to both collision and preimage attacks. To meet the original competition rules, Keccak's authors proposedc= 2d. The announced change was to accept the samed/2-bit security for all forms of attack and standardizec=d. This would have sped up Keccak by allowing an additionaldbits of input to be hashed each iteration. However, the hash functions would not have been drop-in replacements with the same preimage resistance as SHA-2 any more; it would have been cut in half, making it vulnerable to advances in quantum computing, which effectively would cut it in half once more.[29] In September 2013,Daniel J. Bernsteinsuggested on theNISThash-forum mailing list[30]to strengthen the security to the 576-bit capacity that was originally proposed as the default Keccak, in addition to and not included in the SHA-3 specifications.[31]This would have provided at least a SHA3-224 and SHA3-256 with the same preimage resistance as their SHA-2 predecessors, but SHA3-384 and SHA3-512 would have had significantly less preimage resistance than their SHA-2 predecessors. In late September, the Keccak team responded by stating that they had proposed 128-bit security by settingc= 256as an option already in their SHA-3 proposal.[32]Although the reduced capacity was justifiable in their opinion, in the light of the negative response, they proposed raising the capacity toc= 512bits for all instances. This would be as much as any previous standard up to the 256-bit security level, while providing reasonable efficiency,[33]but not the 384-/512-bit preimage resistance offered by SHA2-384 and SHA2-512. The authors stated that "claiming or relying onsecurity strengthlevels above 256 bits is meaningless". In early October 2013,Bruce Schneiercriticized NIST's decision on the basis of its possible detrimental effects on the acceptance of the algorithm, saying: There is too much mistrust in the air. NIST risks publishing an algorithm that no one will trust and no one (except those forced) will use.[34] He later retracted his earlier statement, saying: I misspoke when I wrote that NIST made "internal changes" to the algorithm. That was sloppy of me. The Keccak permutation remains unchanged. What NIST proposed was reducing the hash function's capacity in the name of performance. One of Keccak's nice features is that it's highly tunable.[34] Paul Crowley, a cryptographer and senior developer at an independent software development company, expressed his support of the decision, saying that Keccak is supposed to be tunable and there is no reason for different security levels within one primitive. He also added: Yes, it's a bit of a shame for the competition that they demanded a certain security level for entrants, then went to publish a standard with a different one. But there's nothing that can be done to fix that now, except re-opening the competition. 
Demanding that they stick to their mistake doesn't improve things for anyone.[35] There was some confusion that internal changes may have been made to Keccak, which were cleared up by the original team, stating that NIST's proposal for SHA-3 is a subset of the Keccak family, for which one can generate test vectors using their reference code submitted to the contest, and that this proposal was the result of a series of discussions between them and the NIST hash team.[36] In response to the controversy, in November 2013 John Kelsey of NIST proposed to go back to the originalc= 2dproposal for all SHA-2 drop-in replacement instances.[37]The reversion was confirmed in subsequent drafts[38]and in the final release.[5] SHA-3 uses thesponge construction,[14]in which data is "absorbed" into the sponge, then the result is "squeezed" out. In the absorbing phase, message blocks areXORedinto a subset of the state, which is then transformed as a whole using apermutation function(ortransformation)f{\displaystyle f}. In the "squeeze" phase, output blocks are read from the same subset of the state, alternated with the state transformation functionf{\displaystyle f}. The size of the part of the state that is written and read is called the "rate" (denotedr{\displaystyle r}), and the size of the part that is untouched by input/output is called the "capacity" (denotedc{\displaystyle c}). The capacity determines the security of the scheme. The maximumsecurity levelis half the capacity. Given an input bit stringN{\displaystyle N}, a padding functionpad{\displaystyle pad}, a permutation functionf{\displaystyle f}that operates on bit blocks of widthb{\displaystyle b}, a rater{\displaystyle r}and an output lengthd{\displaystyle d}, we have capacityc=b−r{\displaystyle c=b-r}and the sponge constructionZ=sponge[f,pad,r](N,d){\displaystyle Z={\text{sponge}}[f,pad,r](N,d)}, yielding a bit stringZ{\displaystyle Z}of lengthd{\displaystyle d}, works as follows:[6]: 18 The fact that the internal stateScontainscadditional bits of information in addition to what is output toZprevents thelength extension attacksthat SHA-2, SHA-1, MD5 and other hashes based on theMerkle–Damgård constructionare susceptible to. In SHA-3, the stateSconsists of a5 × 5array ofw-bit words (withw= 64),b= 5 × 5 ×w= 5 × 5 × 64 = 1600 bits total. Keccak is also defined for smaller power-of-2 word sizeswdown to 1 bit (total state of 25 bits). Small state sizes can be used to test cryptanalytic attacks, and intermediate state sizes (fromw= 8, 200 bits, tow= 32, 800 bits) can be used in practical, lightweight applications.[12][13] For SHA3-224, SHA3-256, SHA3-384, and SHA3-512 instances,ris greater thand, so there is no need for additional block permutations in the squeezing phase; the leadingdbits of the state are the desired hash. However, SHAKE128 and SHAKE256 allow an arbitrary output length, which is useful in applications such asoptimal asymmetric encryption padding. To ensure the message can be evenly divided intor-bit blocks, padding is required. SHA-3 uses the pattern 10...01 in its padding function: a 1 bit, followed by zero or more 0 bits (maximumr− 1) and a final 1 bit. The maximum ofr− 1zero bits occurs when the last message block isr− 1bits long. Then another block is added after the initial 1 bit, containingr− 1zero bits before the final 1 bit. 
The two 1 bits will be added even if the length of the message is already divisible byr.[6]: 5.1In this case, another block is added to the message, containing a 1 bit, followed by a block ofr− 2zero bits and another 1 bit. This is necessary so that a message with length divisible byrending in something that looks like padding does not produce the same hash as the message with those bits removed. The initial 1 bit is required so messages differing only in a few additional 0 bits at the end do not produce the same hash. The position of the final 1 bit indicates which raterwas used (multi-rate padding), which is required for the security proof to work for different hash variants. Without it, different hash variants of the same short message would be the same up to truncation. The block transformationf, which is Keccak-f[1600] for SHA-3, is a permutation that usesXOR,ANDandNOToperations, and is designed for easy implementation in both software and hardware. It is defined for any power-of-twowordsize,w= 2ℓbits. The main SHA-3 submission uses 64-bit words,ℓ= 6. The state can be considered to be a5 × 5 ×warray of bits. Leta[i][j][k]be bit(5i+j) ×w+kof the input, using alittle-endianbit numbering convention androw-majorindexing. I.e.iselects the row,jthe column, andkthe bit. Index arithmetic is performed modulo 5 for the first two dimensions and modulowfor the third. The basic block permutation function consists of12 + 2ℓrounds of five steps: The speed of SHA-3 hashing of long messages is dominated by the computation off= Keccak-f[1600] and XORingSwith the extendedPi, an operation onb= 1600 bits. However, since the lastcbits of the extendedPiare 0 anyway, and XOR with 0 is a NOP, it is sufficient to perform XOR operations only forrbits (r= 1600 − 2 × 224 = 1152 bits for SHA3-224, 1088 bits for SHA3-256, 832 bits for SHA3-384 and 576 bits for SHA3-512). The lowerris (and, conversely, the higherc=b−r= 1600 −r), the less efficient but more secure the hashing becomes since fewer bits of the message can be XORed into the state (a quick operation) before each application of the computationally expensivef. The authors report the following speeds for software implementations of Keccak-f[1600] plus XORing 1024 bits,[1]which roughly corresponds to SHA3-256: For the exact SHA3-256 on x86-64, Bernstein measures 11.7–12.25 cpb depending on the CPU.[40]: 7SHA-3 has been criticized for being slow on instruction set architectures (CPUs) which do not have instructions meant specially for computing Keccak functions faster – SHA2-512 is more than twice as fast as SHA3-512, and SHA-1 is more than three times as fast on an Intel Skylake processor clocked at 3.2 GHz.[41]The authors have reacted to this criticism by suggesting to use SHAKE128 and SHAKE256 instead of SHA3-256 and SHA3-512, at the expense of cutting the preimage resistance in half (but while keeping the collision resistance). With this, performance is on par with SHA2-256 and SHA2-512. However, inhardware implementations, SHA-3 is notably faster than all other finalists,[42]and also faster than SHA-2 and SHA-1.[41] As of 2018, ARM's ARMv8[43]architecture includes special instructions which enable Keccak algorithms to execute faster and IBM'sz/Architecture[44]includes a complete implementation of SHA-3 and SHAKE in a single instruction. 
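The multi-rate padding rule described above can be written out explicitly. The following sketch pads a message (given as a list of bits) to a multiple of the rate r using the pad10*1 pattern; it covers only the padding step, not the domain-separation suffix bits or the Keccak permutation itself, and the function name is chosen for the example.

def pad10star1(message_bits, rate):
    """Keccak multi-rate padding: append a 1 bit, then the minimum number of
    0 bits, then a final 1 bit, so the total length is a multiple of the rate."""
    padded = list(message_bits) + [1]
    # Zero bits needed so that (message + 1 + zeros + 1) is a multiple of the rate;
    # this ranges from 0 to rate - 1.
    zeros = (-len(padded) - 1) % rate
    padded += [0] * zeros + [1]
    return padded

# A message whose length is already a multiple of the rate gets a whole extra block.
r = 8
print(len(pad10star1([0] * 8, r)))   # 16 bits: one full padding block was appended
print(len(pad10star1([1, 0, 1], r))) #  8 bits: padded up to a single block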
There have also been extension proposals for RISC-V to add Keccak-specific instructions.[45] The NIST standard defines the following instances, for message M and output length d:[6]: 20, 23 With the following definitions SHA-3 instances are drop-in replacements for SHA-2, intended to have identical security properties. SHAKE will generate as many bits from its sponge as requested, thus being extendable-output functions (XOFs). For example, SHAKE128(M, 256) can be used as a hash function with a 256-bit output length and 128-bit security strength. Arbitrarily large output lengths can be requested, for example when the function is used as a pseudo-random number generator. Alternatively, SHAKE256(M, 128) can be used as a hash function with a 128-bit output length and 128-bit resistance.[6] All instances append some bits to the message, the rightmost of which represent the domain separation suffix. The purpose of this is to ensure that it is not possible to construct messages that produce the same hash output for different applications of the Keccak hash function. The following domain separation suffixes exist:[6][46] In December 2016 NIST published a new document, NIST SP 800-185,[47] describing additional SHA-3-derived functions (cSHAKE, KMAC, TupleHash, and ParallelHash), which share the following parameters:
• X is the main input bit string. It may be of any length, including zero.
• L is an integer representing the requested output length in bits.
• N is a function-name bit string, used by NIST to define functions based on cSHAKE. When no function other than cSHAKE is desired, N is set to the empty string.
• S is a customization bit string. The user selects this string to define a variant of the function. When no customization is desired, S is set to the empty string.
• K is a key bit string of any length, including zero.
• B is the block size in bytes for parallel hashing. It may be any integer such that 0 < B < 2^2040.
In 2016 the same team that designed the SHA-3 functions and the Keccak algorithm introduced faster reduced-round alternatives (reduced to 12 and 14 rounds, from the 24 in SHA-3) which can exploit the availability of parallel execution because they use tree hashing: KangarooTwelve and MarsupilamiFourteen.[49] These functions differ from ParallelHash, the FIPS-standardized Keccak-based parallelizable hash function, with regard to the parallelism, in that they are faster than ParallelHash for small message sizes. The reduced number of rounds is justified by the huge cryptanalytic effort focused on Keccak, which has not produced practical attacks on anything close to twelve-round Keccak. These higher-speed algorithms are not part of SHA-3 (as they are a later development), and thus are not FIPS compliant; but because they use the same Keccak permutation they are secure for as long as there are no attacks on SHA-3 reduced to 12 rounds.[49] KangarooTwelve is a higher-performance reduced-round (from 24 to 12 rounds) version of Keccak which claims to have 128 bits of security[50] while having performance as high as 0.55 cycles per byte on a Skylake CPU.[51] This algorithm is an IETF RFC draft.[52] MarsupilamiFourteen, a slight variation on KangarooTwelve, uses 14 rounds of the Keccak permutation and claims 256 bits of security. Note that 256-bit security is not more useful in practice than 128-bit security, but may be required by some standards.[50] 128 bits are already sufficient to defeat brute-force attacks on current hardware, so having 256-bit security does not add practical value, unless the user is worried about significant advancements in the speed of classical computers. For resistance against quantum computers, see below.
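Python's standard hashlib module (available since Python 3.6) exposes the fixed-length SHA-3 instances and the SHAKE XOFs described above, which makes the extendable-output property easy to observe:

```python
import hashlib

msg = b"The quick brown fox jumps over the lazy dog"

# Fixed-length SHA-3 instances, drop-in replacements for the SHA-2 widths.
print(hashlib.sha3_256(msg).hexdigest())
print(hashlib.sha3_512(msg).hexdigest())

# SHAKE128/SHAKE256 are extendable-output functions: the caller picks the
# output length, and a longer output is an extension of a shorter one.
x = hashlib.shake_128(msg)
print(x.hexdigest(32))                            # 256-bit output
print(x.hexdigest(64)[:64] == x.hexdigest(32))    # True: prefix property
```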
KangarooTwelve and MarsupilamiFourteen are extendable-output functions, similar to SHAKE; they therefore generate closely related outputs for a common message with different output lengths (the longer output is an extension of the shorter output). This property is not exhibited by hash functions such as SHA-3 or ParallelHash (except for their XOF variants).[6] In 2016, the Keccak team released a different construction called the Farfalle construction, and Kravatte, an instance of Farfalle using the Keccak-p permutation,[53] as well as two authenticated encryption algorithms, Kravatte-SANE and Kravatte-SANSE.[54] RawSHAKE is the basis for the Sakura coding for tree hashing, which has not been standardized yet. Sakura uses a suffix of 1111 for single nodes, equivalent to SHAKE, and other generated suffixes depending on the shape of the tree.[46]: 16 There is a general result (Grover's algorithm) that quantum computers can perform a structured preimage attack in √(2^d) = 2^(d/2) evaluations, while a classical brute-force attack needs 2^d. A structured preimage attack implies a second preimage attack[29] and thus a collision attack. A quantum computer can also perform a birthday attack, and thus break collision resistance, in ∛(2^d) = 2^(d/3) evaluations[55] (although that is disputed).[56] Noting that the maximum strength can be c/2, this gives the following upper[57] bounds on the quantum security of SHA-3: It has been shown that the Merkle–Damgård construction, as used by SHA-2, is collapsing and, by consequence, quantum collision-resistant,[58] but for the sponge construction used by SHA-3, the authors provide proofs only for the case when the block function f is not efficiently invertible; Keccak-f[1600], however, is efficiently invertible, and so their proof does not apply.[59][original research] The following hash values are from NIST.gov:[60] Changing a single bit causes each bit in the output to change with 50% probability, demonstrating an avalanche effect: In the table below, internal state means the number of bits that are carried over to the next block. Optimized implementations of SHA3-256 using AVX-512VL (e.g. from OpenSSL, running on Skylake-X CPUs) achieve about 6.4 cycles per byte for large messages,[66] and about 7.8 cycles per byte when using AVX2 on Skylake CPUs.[67] Performance on other x86, Power and ARM CPUs, depending on the instructions used and the exact CPU model, varies from about 8 to 15 cycles per byte,[68][69][70] with some older x86 CPUs up to 25–40 cycles per byte.[71] Below is a list of cryptography libraries that support SHA-3: Apple A13 ARMv8 six-core SoC CPU cores have support[72] for accelerating SHA-3 (and SHA-512) using specialized instructions (EOR3, RAX1, XAR, BCAX) from the ARMv8.2-SHA crypto extension set.[73] Some software libraries use vectorization facilities of CPUs to accelerate usage of SHA-3. For example, Crypto++ can use SSE2 on x86 for accelerating SHA-3,[74] and OpenSSL can use MMX, AVX-512 or AVX-512VL on many x86 systems too.[75] POWER8 CPUs also implement a 2x64-bit vector rotate, defined in PowerISA 2.07, which can accelerate SHA-3 implementations.[76] Most implementations for ARM do not use Neon vector instructions as scalar code is faster.
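The avalanche behaviour mentioned above can be checked empirically. A small sketch using Python's hashlib: flipping a single input bit changes roughly half of the 256 output bits of SHA3-256.

```python
import hashlib

def bit_diff(a: bytes, b: bytes) -> int:
    # Number of differing bits between two equal-length byte strings.
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

msg = bytearray(b"The quick brown fox jumps over the lazy dog")
h1 = hashlib.sha3_256(bytes(msg)).digest()

msg[0] ^= 0x01                       # flip one input bit
h2 = hashlib.sha3_256(bytes(msg)).digest()

print(bit_diff(h1, h2), "of", len(h1) * 8, "output bits changed")
# Typically close to 128 of 256, i.e. roughly half, as expected.
```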
ARM implementations can however be accelerated usingSVEand SVE2 vector instructions; these are available in theFujitsu A64FXCPU for instance.[77] The IBMz/Architecturesupports SHA-3 since 2017 as part of the Message-Security-Assist Extension 6.[78]The processors support a complete implementation of the entire SHA-3 and SHAKE algorithms via the KIMD and KLMD instructions using a hardware assist engine built into each core. Ethereumuses the Keccak-256 hash function (as per version 3 of the winning entry to the SHA-3 contest by Bertoni et al., which is different from the final SHA-3 specification).[79]
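The difference between the final SHA-3 standard and the original Keccak submission used by Ethereum can be seen by hashing the same message both ways: FIPS 202 appends a domain-separation suffix (making the first padding byte 0x06) while the original Keccak-256 pads directly with 0x01. The sketch below assumes the third-party pycryptodome package for the pre-standard Keccak-256; hashlib provides the FIPS 202 variant.

```python
import hashlib
from Crypto.Hash import keccak      # pycryptodome, assumed to be installed

msg = b""

# FIPS 202 SHA3-256 (domain-separation suffix folded into padding byte 0x06)
print(hashlib.sha3_256(msg).hexdigest())
# a7ffc6f8bf1ed76651c14756a061d662f580ff4de43b49fa82d80a4b80f8434a

# Original Keccak-256 as used by Ethereum (padding byte 0x01, no suffix)
k = keccak.new(digest_bits=256)
k.update(msg)
print(k.hexdigest())
# c5d2460186f7233c927e7db2dcc703c0e500b653ca82273b7bfad8045d85a470
```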
https://en.wikipedia.org/wiki/SHA-3
Algorithmic radicalizationis the concept thatrecommender algorithmson popular social media sites such asYouTubeandFacebookdrive users toward progressively more extreme content over time, leading to them developingradicalizedextremist political views. Algorithms record user interactions, from likes/dislikes to amount of time spent on posts, to generate endless media aimed to keep users engaged. Throughecho chamberchannels, the consumer is driven to be morepolarizedthrough preferences in media and self-confirmation.[1][2][3][4][5] Algorithmic radicalization remains a controversial phenomenon as it is often not in the best interest of social media companies to remove echo chamber channels.[6][7]To what extent recommender algorithms are actually responsible for radicalization remains disputed; studies have found contradictory results as to whether algorithms have promoted extremist content. Social media platforms learn the interests and likes of the user to modify their experiences in their feed to keep them engaged and scrolling, known as afilter bubble.[8]An echo chamber is formed when users come across beliefs that magnify or reinforce their thoughts and form a group of like-minded users in a closed system.[9]Echo chambers spread information without any opposing beliefs and can possibly lead toconfirmation bias. According togroup polarizationtheory, an echo chamber can potentially lead users and groups towards more extreme radicalized positions.[10]According to the National Library of Medicine, "Users online tend to prefer information adhering to their worldviews, ignore dissenting information, and form polarized groups around shared narratives. Furthermore, when polarization is high, misinformation quickly proliferates."[11] On May 14, 2022, 18-year oldPayton S. Gendroncarried out amass-shootinginBuffalo, New York. The shooter stated in hismanifestothat the internet was the source of his radical beliefs: "There was little to no influence on my personal beliefs by people I met in person."[12] Around March 19, 2024, a New York state judge ruledRedditandYouTubemust face lawsuits in connection with the mass shooting over accusations that they played a role in the radicalization of the shooter.[13] Facebook's algorithm focuses on recommending content that makes the user want to interact. They rank content by prioritizing popular posts by friends, viral content, and sometimes divisive content. Each feed is personalized to the user's specific interests which can sometimes lead users towards an echo chamber of troublesome content.[14]Users can find their list of interests the algorithm uses by going to the "Your ad Preferences" page. According to a Pew Research study, 74% of Facebook users did not know that list existed until they were directed towards that page in the study.[15]It is also relatively common for Facebook to assign political labels to their users. In recent years,[when?]Facebook has started using artificial intelligence to change the content users see in their feed and what is recommended to them. A document known asThe Facebook Fileshas revealed that their AI system prioritizesuser engagementover everything else. The Facebook Files has also demonstrated that controlling the AI systems has proven difficult to handle.[16] In an August 2019 internal memo leaked in 2021, Facebook has admitted that "the mechanics of our platforms are not neutral",[17][18]concluding that in order to reach maximum profits, optimization for engagement is necessary. 
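As a purely illustrative toy model (not any platform's actual system), the feedback loop described above can be simulated: a recommender that boosts whatever the user engages with will, over time, concentrate the feed on the highest-engagement topics. The topic names and engagement probabilities below are invented for the example.

```python
import random

random.seed(1)

TOPICS = ["sports", "cooking", "politics", "conspiracy"]
# Toy user: slightly more likely to engage with increasingly divisive content.
engage_prob = {"sports": 0.30, "cooking": 0.30, "politics": 0.45, "conspiracy": 0.60}

weights = {t: 1.0 for t in TOPICS}      # the recommender's per-topic scores

for _ in range(2000):
    # Recommend a topic in proportion to its current score.
    topic = random.choices(TOPICS, weights=[weights[t] for t in TOPICS])[0]
    if random.random() < engage_prob[topic]:
        weights[topic] *= 1.05          # engagement reinforces the topic

total = sum(weights.values())
print({t: round(weights[t] / total, 3) for t in TOPICS})
# The feed concentrates on the topics with the highest engagement rate.
```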
In order to increase engagement, algorithms have found that hate, misinformation, and politics are instrumental for app activity.[19]As referenced in the memo, "The more incendiary the material, the more it keeps users engaged, the more it is boosted by the algorithm."[17]According to a 2018 study, "false rumors spread faster and wider than true information... They found falsehoods are 70% more likely to be retweeted on Twitter than the truth, and reach their first 1,500 people six times faster. This effect is more pronounced with political news than other categories."[20] YouTubehas been around since 2005 and has more than 2.5 billion monthly users. YouTube discovery content systems focus on the user's personal activity (watched, favorites, likes) to direct them to recommended content. YouTube's algorithm is accountable for roughly 70% of users' recommended videos and what drives people to watch certain content.[21]According to a 2022 study by theMozilla Foundation, users have little power to keep unsolicited videos out of their suggested recommended content. This includes videos about hate speech, livestreams, etc.[22][21] YouTube has been identified as an influential platform for spreading radicalized content.Al-Qaedaand similarextremist groupshave been linked to using YouTube for recruitment videos and engaging with international media outlets. In a research study published by theAmerican Behavioral Scientist Journal, they researched "whether it is possible to identify a set of attributes that may help explain part of the YouTube algorithm's decision-making process".[23]The results of the study showed that YouTube's algorithm recommendations for extremism content factor into the presence of radical keywords in a video's title. In February 2023, in the case ofGonzalez v. Google, the question at hand is whether or not Google, the parent company of YouTube, is protected from lawsuits claiming that the site's algorithms aided terrorists in recommendingISISvideos to users.Section 230is known to generally protect online platforms from civil liability for the content posted by its users.[24] Multiple studies have found little to no evidence to suggest that YouTube's algorithms direct attention towards far-right content to those not already engaged with it.[25][26][27] TikTokis an app that recommends videos to a user's 'For You Page' (FYP), making every users' page different. With the nature of the algorithm behind the app, TikTok's FYP has been linked to showing more explicit and radical videos over time based on users' previous interactions on the app.[28]Since TikTok's inception, the app has been scrutinized for misinformation and hate speech as those forms of media usually generate more interactions to the algorithm.[29] Various extremist groups, includingjihadistorganizations, have utilized TikTok to disseminate propaganda, recruit followers, and incite violence. The platform's algorithm, which recommends content based on user engagement, can expose users to extremist content that aligns with their interests or interactions.[30] As of 2022, TikTok's head of US Security has put out a statement that "81,518,334 videos were removed globally between April – June for violating our Community Guidelines or Terms of Service" to cut back on hate speech, harassment, and misinformation.[31] Studies have noted instances where individuals were radicalized through content encountered on TikTok. 
For example, in early 2023, Austrian authorities thwarted a plot against an LGBTQ+pride paradethat involved two teenagers and a 20-year-old who were inspired by jihadist content on TikTok. The youngest suspect, 14 years old, had been exposed to videos created by Islamist influencers glorifying jihad. These videos led him to further engagement with similar content, eventually resulting in his involvement in planning an attack.[30] Another case involved the arrest of several teenagers inVienna, Austria, in 2024, who were planning to carry out a terrorist attack at aTaylor Swiftconcert. The investigation revealed that some of the suspects had been radicalized online, with TikTok being one of the platforms used to disseminate extremist content that influenced their beliefs and actions.[30] The U.S. Department of Justice defines 'Lone-wolf' (self) terrorism as "someone who acts alone in a terrorist attack without the help or encouragement of a government or a terrorist organization".[32]Through social media outlets on the internet, 'Lone-wolf' terrorism has been on the rise, being linked to algorithmic radicalization.[33]Through echo-chambers on the internet, viewpoints typically seen as radical were accepted and quickly adopted by other extremists.[34]These viewpoints are encouraged by forums, group chats, and social media to reinforce their beliefs.[35] The Social Dilemmais a 2020 docudrama about how algorithms behind social media enables addiction, while possessing abilities to manipulate people's views, emotions, and behavior to spread conspiracy theories and disinformation. The film repeatedly uses buzz words such as 'echo chambers' and 'fake news' to provepsychological manipulationon social media, therefore leading topolitical manipulation. In the film, Ben falls deeper into asocial media addictionas the algorithm found that his social media page has a 62.3% chance of long-term engagement. This leads into more videos on the recommended feed for Ben and he eventually becomes more immersed into propaganda and conspiracy theories, becoming more polarized with each video. In theCommunications Decency Act,Section 230states that "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider".[36]Section 230 protects the media from liabilities or being sued of third-party content, such as illegal activity from a user.[36]However, critics argue that this approach reduces a company's incentive to remove harmful content or misinformation, and this loophole has allowed social media companies to maximize profits through pushing radical content without legal risks.[37]This claim has itself been criticized by proponents of Section 230, as prior to its passing, courts had ruled inStratton Oakmont, Inc. v. Prodigy Services Co.that moderation in any capacity introduces a liability to content providers as "publishers" of the content they chose to leave up.[38] Lawmakers have drafted legislation that would weaken or remove Section 230 protections over algorithmic content.House DemocratsAnna Eshoo,Frank Pallone Jr.,Mike Doyle, andJan Schakowskyintroduced the "Justice Against Malicious Algorithms Act" in October 2021 asH.R. 5596. 
The bill died in committee,[39] but it would have removed Section 230 protections for service providers related to personalized recommendation algorithms that present content to users if those algorithms knowingly or recklessly deliver content that contributes to physical or severe emotional injury.[40]
https://en.wikipedia.org/wiki/Algorithmic_radicalization
In idempotent analysis, the tropical semiring is a semiring of extended real numbers with the operations of minimum (or maximum) and addition replacing the usual ("classical") operations of addition and multiplication, respectively. The tropical semiring has various applications (see tropical analysis), and forms the basis of tropical geometry. The name tropical is a reference to the Hungarian-born computer scientist Imre Simon, so named because he lived and worked in Brazil.[1] The min tropical semiring (or min-plus semiring or min-plus algebra) is the semiring (ℝ ∪ {+∞}, ⊕, ⊗), with the operations x ⊕ y = min(x, y) and x ⊗ y = x + y. The operations ⊕ and ⊗ are referred to as tropical addition and tropical multiplication respectively. The identity element for ⊕ is +∞, and the identity element for ⊗ is 0. Similarly, the max tropical semiring (or max-plus semiring or max-plus algebra or Arctic semiring[citation needed]) is the semiring (ℝ ∪ {−∞}, ⊕, ⊗), with the operations x ⊕ y = max(x, y) and x ⊗ y = x + y. The identity element for ⊕ is −∞, and the identity element for ⊗ is 0. The two semirings are isomorphic under negation x ↦ −x, and generally one of them is chosen and referred to simply as the tropical semiring. Conventions differ between authors and subfields: some use the min convention, some use the max convention. The two tropical semirings are the limit ("tropicalization", "dequantization") of the log semiring as the base goes to infinity, b → ∞ (max-plus semiring), or to zero, b → 0 (min-plus semiring). Tropical addition is idempotent, thus a tropical semiring is an example of an idempotent semiring. A tropical semiring is also referred to as a tropical algebra,[2] though this should not be confused with an associative algebra over a tropical semiring. Tropical exponentiation is defined in the usual way as iterated tropical products. The tropical semiring operations model how valuations behave under addition and multiplication in a valued field. A real-valued field K is a field equipped with a valuation v : K → ℝ ∪ {+∞} which satisfies the following properties for all a, b in K: v(a) = +∞ if and only if a = 0; v(ab) = v(a) + v(b); and v(a + b) ≥ min(v(a), v(b)), with equality when v(a) ≠ v(b). Therefore the valuation v is almost a semiring homomorphism from K to the tropical semiring, except that the homomorphism property can fail when two elements with the same valuation are added together. Common valued fields include the rational numbers (or the p-adic numbers) with a p-adic valuation, and fields of formal Laurent or Puiseux series with the valuation that returns the lowest exponent occurring in a series.
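A small Python sketch of the min-plus convention: tropical addition is min, tropical multiplication is ordinary addition, and the tropical product of a weighted adjacency matrix with itself yields shortest-path lengths, one common application of the semiring.

```python
import math

INF = math.inf                       # identity element for tropical addition (min-plus)

def t_add(x, y):                     # tropical addition: minimum
    return min(x, y)

def t_mul(x, y):                     # tropical multiplication: ordinary addition
    return x + y

def t_matmul(A, B):
    """Tropical matrix product: entry (i, j) is min over k of A[i][k] + B[k][j]."""
    n, m, p = len(A), len(B), len(B[0])
    return [[min(t_mul(A[i][k], B[k][j]) for k in range(m))   # min = iterated t_add
             for j in range(p)] for i in range(n)]

# Edge-weight matrix of a small directed graph (INF means "no edge").
W = [[0, 5, INF],
     [INF, 0, 2],
     [1, INF, 0]]

print(t_add(3, 7), t_mul(3, 7))      # 3 and 10
W2 = t_matmul(W, W)                  # shortest paths using at most two edges
print(W2[0][2])                      # 7 = 5 + 2, the shortest route 0 -> 1 -> 2
```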
https://en.wikipedia.org/wiki/Tropical_arithmetic
In mathematics, a point x is called an isolated point of a subset S (in a topological space X) if x is an element of S and there exists a neighborhood of x that does not contain any other points of S. This is equivalent to saying that the singleton {x} is an open set in the topological space S (considered as a subspace of X). Another equivalent formulation is: an element x of S is an isolated point of S if and only if it is not a limit point of S. If the space X is a metric space, for example a Euclidean space, then an element x of S is an isolated point of S if there exists an open ball around x that contains only finitely many elements of S. A point set that is made up only of isolated points is called a discrete set or discrete point set (see also discrete space). Any discrete subset S of Euclidean space must be countable, since the isolation of each of its points, together with the fact that rationals are dense in the reals, means that the points of S may be mapped injectively onto a set of points with rational coordinates, of which there are only countably many. However, not every countable set is discrete; the rational numbers under the usual Euclidean metric are the canonical counterexample. A set with no isolated point is said to be dense-in-itself (every neighbourhood of a point contains other points of the set). A closed set with no isolated point is called a perfect set (it contains all its limit points and no isolated points). The number of isolated points is a topological invariant, i.e. if two topological spaces X and Y are homeomorphic, the number of isolated points in each is equal. The topological spaces in the following three examples are considered as subspaces of the real line with the standard topology. In the topological space X = {a, b} with topology τ = {∅, {a}, X}, the element a is an isolated point, even though b belongs to the closure of {a} (and is therefore, in some sense, "close" to a). Such a situation is not possible in a Hausdorff space. The Morse lemma states that non-degenerate critical points of certain functions are isolated. Consider the set F of points x in the real interval (0, 1) such that every digit x_i of their binary representation fulfills the following conditions: Informally, these conditions mean that every digit of the binary representation of x that equals 1 belongs to a pair ...0110..., except for ...010... at the very end. Now, F is an explicit set consisting entirely of isolated points which has the counterintuitive property that its closure is an uncountable set.[1] Another set F with the same properties can be obtained as follows. Let C be the middle-thirds Cantor set, let I_1, I_2, I_3, ..., I_k, ... be the component intervals of [0, 1] − C, and let F be a set consisting of one point from each I_k. Since each I_k contains only one point from F, every point of F is an isolated point. However, if p is any point in the Cantor set, then every neighborhood of p contains at least one I_k, and hence at least one point of F. It follows that each point of the Cantor set lies in the closure of F, and therefore F has uncountable closure.
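A numerical illustration (a sketch, not a proof): for the set S = {0} ∪ {1/n : n ≥ 1}, each point 1/n is isolated, but 0 is a limit point of S and hence not isolated. Measuring the distance from a point to the nearest other point of finite truncations of S shows the gap around 1 staying fixed while the gap around 0 shrinks.

```python
def nearest_other(x, S):
    """Distance from x to the closest other point of the finite set S."""
    return min(abs(x - y) for y in S if y != x)

# Truncations of S = {0} ∪ {1/n : n ≥ 1}.  In the full set, each 1/n is
# isolated (a small enough ball around it contains no other point of S),
# while 0 is a limit point and therefore not isolated.
for N in (10, 100, 1000):
    S = {0.0} | {1.0 / n for n in range(1, N + 1)}
    print(N, nearest_other(1.0, S), nearest_other(0.0, S))
# The gap around 1.0 stays at 0.5, but the gap around 0.0 shrinks to 0
# as N grows, reflecting that 0 is not an isolated point of the full set.
```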
https://en.wikipedia.org/wiki/Isolated_point
In statistics, the precision matrix or concentration matrix is the matrix inverse of the covariance matrix or dispersion matrix, P = Σ⁻¹.[1][2][3] For univariate distributions, the precision matrix degenerates into a scalar precision, defined as the reciprocal of the variance, p = 1/σ².[4] Other summary statistics of statistical dispersion also called precision (or imprecision[5][6]) include the reciprocal of the standard deviation, p = 1/σ;[3] the standard deviation itself and the relative standard deviation;[7] as well as the standard error[8] and the confidence interval (or its half-width, the margin of error).[9] One particular use of the precision matrix is in the context of Bayesian analysis of the multivariate normal distribution: for example, Bernardo & Smith prefer to parameterise the multivariate normal distribution in terms of the precision matrix, rather than the covariance matrix, because of certain simplifications that then arise.[10] For instance, if both the prior and the likelihood have Gaussian form, and the precision matrices of both exist (because their covariance matrices are full rank and thus invertible), then the precision matrix of the posterior is simply the sum of the precision matrices of the prior and the likelihood. As the inverse of a Hermitian matrix, the precision matrix of real-valued random variables, if it exists, is positive definite and symmetric. Another reason the precision matrix may be useful is that if two dimensions i and j of a multivariate normal are conditionally independent, then the ij and ji elements of the precision matrix are 0. This means that precision matrices tend to be sparse when many of the dimensions are conditionally independent, which can lead to computational efficiencies when working with them. It also means that precision matrices are closely related to the idea of partial correlation. The precision matrix plays a central role in generalized least squares, compared to ordinary least squares, where P is the identity matrix, and to weighted least squares, where P is diagonal (the weight matrix). The term precision in this sense ("mensura praecisionis observationum") first appeared in the works of Gauss (1809), "Theoria motus corporum coelestium in sectionibus conicis solem ambientium" (page 212). Gauss's definition differs from the modern one by a factor of √2. He writes, for the density function of a normal distribution with precision h (reciprocal of standard deviation), φ(Δ) = (h/√π) e^(−hhΔΔ), where hh = h² (see modern exponential notation). Later Whittaker & Robinson (1924), "Calculus of observations", called this quantity the modulus (of precision), but this term has dropped out of use.[11]
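The link between conditional independence and zeros in the precision matrix can be checked numerically. The sketch below (assuming NumPy) samples a Gaussian chain x → y → z, in which x and z are marginally correlated but conditionally independent given y, so the corresponding off-diagonal entry of the precision matrix is approximately zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Gaussian chain x -> y -> z: x and z are marginally correlated,
# but conditionally independent given y.
x = rng.normal(size=n)
y = x + rng.normal(size=n)
z = y + rng.normal(size=n)

Sigma = np.cov(np.vstack([x, y, z]))   # covariance matrix (rows are variables)
P = np.linalg.inv(Sigma)               # precision (concentration) matrix

print(np.round(Sigma, 2))   # the (x, z) covariance entry is clearly nonzero (~1)
print(np.round(P, 2))       # the (x, z) precision entry is approximately 0
```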
https://en.wikipedia.org/wiki/Precision_(statistics)
Radio-frequency identification(RFID) useselectromagnetic fieldsto automaticallyidentifyandtracktags attached to objects. An RFID system consists of a tiny radiotranspondercalled a tag, aradio receiver, and atransmitter. When triggered by an electromagnetic interrogation pulse from a nearby RFID reader device, the tag transmits digital data, usually anidentifying inventory number, back to the reader. This number can be used to trackinventorygoods.[1] Passive tags are powered by energy from the RFID reader's interrogatingradio waves. Active tags are powered by a battery and thus can be read at a greater range from the RFID reader, up to hundreds of meters. Unlike abarcode, the tag does not need to be within theline of sightof the reader, so it may be embedded in the tracked object. RFID is one method ofautomatic identification and data capture(AIDC).[2] RFID tags are used in many industries. For example, an RFID tag attached to an automobile during production can be used to track its progress through theassembly line,[citation needed]RFID-tagged pharmaceuticals can be tracked through warehouses,[citation needed]andimplanting RFID microchipsin livestock and pets enables positive identification of animals.[3]Tags can also be used in shops to expedite checkout, and toprevent theftby customers and employees.[4] Since RFID tags can be attached to physical money, clothing, and possessions, or implanted in animals and people, the possibility of reading personally-linked information withoutconsenthas raised seriousprivacyconcerns.[5]These concerns resulted in standard specifications development addressing privacy and security issues. In 2014, the world RFID market was worth US$8.89billion, up from US$7.77 billion in 2013 and US$6.96 billion in 2012. This figure includes tags, readers, and software/services for RFID cards, labels, fobs, and all other form factors. The market value is expected to rise from US$12.08 billion in 2020 to US$16.23 billion by 2029.[6] In 1945,Leon Theremininventedthe "Thing", a listening devicefor theSoviet Unionwhich retransmitted incident radio waves with the added audio information. Sound waves vibrated adiaphragmwhich slightly altered the shape of theresonator, which modulated the reflected radio frequency. Even though this device was acovert listening device, rather than an identification tag, it is considered to be a predecessor of RFID because it was passive, being energised and activated by waves from an outside source.[7] Similar technology, such as theIdentification friend or foetransponder, was routinely used by the Allies and Germany inWorld War IIto identify aircraft as friendly or hostile.Transpondersare still used by most powered aircraft.[8]An early work exploring RFID is the landmark 1948 paper by Harry Stockman,[9]who predicted that "Considerable research and development work has to be done before the remaining basic problems in reflected-power communication are solved, and before the field of useful applications is explored." Mario Cardullo's device, patented on January 23, 1973, was the first true ancestor of modern RFID,[10]as it was a passive radio transponder with memory.[11]The initial device was passive, powered by the interrogating signal, and was demonstrated in 1971 to theNew York Port Authorityand other potential users. It consisted of a transponder with 16bitmemory for use as atoll device. The basic Cardullo patent covers the use of radio frequency (RF), sound and light as transmission carriers. 
The original business plan presented to investors in 1969 showed uses in transportation (automotive vehicle identification, automatic toll system,electronic license plate, electronic manifest, vehicle routing, vehicle performance monitoring), banking (electronic chequebook, electronic credit card), security (personnel identification, automatic gates, surveillance) and medical (identification, patient history).[10] In 1973, an early demonstration ofreflected power(modulated backscatter) RFID tags, both passive and semi-passive, was performed by Steven Depp, Alfred Koelle and Robert Freyman at theLos Alamos National Laboratory.[12]The portable system operated at 915 MHz and used 12-bit tags. This technique is used by the majority of today's UHFID and microwave RFID tags.[13] In 1983, the first patent to be associated with the abbreviation RFID was granted toCharles Walton.[14] In 1996, the first patent for a batteryless RFID passive tag with limited interference was granted to David Everett, John Frech, Theodore Wright, and Kelly Rodriguez.[15] A radio-frequency identification system usestags, orlabelsattached to the objects to be identified. Two-way radio transmitter-receivers calledinterrogatorsorreaderssend a signal to the tag and read its response.[16] RFID tags are made out of three pieces: The tag information is stored in a non-volatile memory.[17]The RFID tags includes either fixed or programmable logic for processing the transmission and sensor data, respectively.[citation needed] RFID tags can be either passive, active or battery-assisted passive. An active tag has an on-board battery and periodically transmits its ID signal.[17]A battery-assisted passive tag has a small battery on board and is activated when in the presence of an RFID reader. A passive tag is cheaper and smaller because it has no battery; instead, the tag uses the radio energy transmitted by the reader. However, to operate a passive tag, it must be illuminated with a power level roughly a thousand times stronger than an active tag for signal transmission.[18] Tags may either be read-only, having a factory-assigned serial number that is used as a key into a database, or may be read/write, where object-specific data can be written into the tag by the system user. Field programmable tags may be write-once, read-multiple; "blank" tags may be written with an electronic product code by the user.[19] The RFID tag receives the message and then responds with its identification and other information. This may be only a unique tag serial number, or may be product-related information such as a stock number, lot or batch number, production date, or other specific information. Since tags have individual serial numbers, the RFID system design can discriminate among several tags that might be within the range of the RFID reader and read them simultaneously. RFID systems can be classified by the type of tag and reader. There are 3 types:[20] Fixed readers are set up to create a specific interrogation zone which can be tightly controlled. This allows a highly defined reading area for when tags go in and out of the interrogation zone. Mobile readers may be handheld or mounted on carts or vehicles. Signaling between the reader and the tag is done in several different incompatible ways, depending on the frequency band used by the tag. Tags operating on LF and HF bands are, in terms of radio wavelength, very close to the reader antenna because they are only a small percentage of a wavelength away. 
In thisnear fieldregion, the tag is closely coupled electrically with the transmitter in the reader. The tag can modulate the field produced by the reader by changing the electrical loading the tag represents. By switching between lower and higher relative loads, the tag produces a change that the reader can detect. At UHF and higher frequencies, the tag is more than one radio wavelength away from the reader, requiring a different approach. The tag canbackscattera signal. Active tags may contain functionally separated transmitters and receivers, and the tag need not respond on a frequency related to the reader's interrogation signal.[27] AnElectronic Product Code(EPC) is one common type of data stored in a tag. When written into the tag by an RFID printer, the tag contains a 96-bit string of data. The first eight bits are a header which identifies the version of the protocol. The next 28 bits identify the organization that manages the data for this tag; the organization number is assigned by the EPCGlobal consortium. The next 24 bits are an object class, identifying the kind of product. The last 36 bits are a unique serial number for a particular tag. These last two fields are set by the organization that issued the tag. Rather like aURL, the total electronic product code number can be used as a key into a global database to uniquely identify a particular product.[28] Often more than one tag will respond to a tag reader. For example, many individual products with tags may be shipped in a common box or on a common pallet. Collision detection is important to allow reading of data. Two different types of protocols are used to"singulate"a particular tag, allowing its data to be read in the midst of many similar tags. In aslotted Alohasystem, the reader broadcasts an initialization command and a parameter that the tags individually use to pseudo-randomly delay their responses. When using an "adaptive binary tree" protocol, the reader sends an initialization symbol and then transmits one bit of ID data at a time; only tags with matching bits respond, and eventually only one tag matches the complete ID string.[29] Both methods have drawbacks when used with many tags or with multiple overlapping readers.[citation needed] "Bulk reading" is a strategy for interrogating multiple tags at the same time, but lacks sufficient precision for inventory control. A group of objects, all of them RFID tagged, are read completely from one single reader position at one time. However, as tags respond strictly sequentially, the time needed for bulk reading grows linearly with the number of labels to be read. This means it takes at least twice as long to read twice as many labels. Due to collision effects, the time required is greater.[30] A group of tags has to be illuminated by the interrogating signal just like a single tag. This is not a challenge concerning energy, but with respect to visibility; if any of the tags are shielded by other tags, they might not be sufficiently illuminated to return a sufficient response. The response conditions for inductively coupledHFRFID tags and coil antennas in magnetic fields appear better than for UHF or SHF dipole fields, but then distance limits apply and may prevent success.[citation needed][31] Under operational conditions, bulk reading is not reliable. Bulk reading can be a rough guide for logistics decisions, but due to a high proportion of reading failures, it is not (yet)[when?]suitable for inventory management. 
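The 96-bit layout described above (8-bit header, 28-bit manager number, 24-bit object class, 36-bit serial) can be packed and unpacked with a few shifts and masks. This is a simplified sketch of that layout only; real EPC encodings such as SGTIN-96 subdivide these fields further, and the sample values below are arbitrary.

```python
def pack_epc(header: int, manager: int, obj_class: int, serial: int) -> int:
    """Pack the four fields into a single 96-bit EPC integer."""
    assert header < 2**8 and manager < 2**28 and obj_class < 2**24 and serial < 2**36
    return (header << 88) | (manager << 60) | (obj_class << 36) | serial

def unpack_epc(epc: int) -> tuple:
    return ((epc >> 88) & 0xFF,          # 8-bit header (protocol version)
            (epc >> 60) & 0xFFFFFFF,     # 28-bit organization / manager number
            (epc >> 36) & 0xFFFFFF,      # 24-bit object class
            epc & 0xFFFFFFFFF)           # 36-bit serial number

epc = pack_epc(0x30, 123456, 7890, 987654321)
print(f"{epc:024x}")                     # 96 bits rendered as 24 hex digits
print(unpack_epc(epc))                   # (48, 123456, 7890, 987654321)
```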
However, when a single RFID tag might be seen as not guaranteeing a proper read, multiple RFID tags, where at least one will respond, may be a safer approach for detecting a known grouping of objects. In this respect, bulk reading is afuzzymethod for process support. From the perspective of cost and effect, bulk reading is not reported as an economical approach to secure process control in logistics.[32] RFID tags are easy to conceal or incorporate in other items. For example, in 2009 researchers atBristol Universitysuccessfully glued RFID micro-transponders to liveantsin order to study their behavior.[33]This trend towards increasingly miniaturized RFIDs is likely to continue as technology advances. Hitachi holds the record for the smallest RFID chip, at 0.05 mm × 0.05 mm. This is 1/64th the size of the previous record holder, the mu-chip.[34]Manufacture is enabled by using thesilicon-on-insulator(SOI) process. These dust-sized chips can store 38-digit numbers using 128-bitRead Only Memory(ROM).[35]A major challenge is the attachment of antennas, thus limiting read range to only millimeters. In early 2020, MIT researchers demonstrated aterahertzfrequency identification (TFID) tag that is barely 1 square millimeter in size. The devices are essentially a piece of silicon that are inexpensive, small, and function like larger RFID tags. Because of the small size, manufacturers could tag any product and track logistics information for minimal cost.[36][37] An RFID tag can be affixed to an object and used to track tools, equipment, inventory, assets, people, or other objects. RFID offers advantages over manual systems or use ofbarcodes. The tag can be read if passed near a reader, even if it is covered by the object or not visible. The tag can be read inside a case, carton, box or other container, and unlike barcodes, RFID tags can be read hundreds at a time; barcodes can only be read one at a time using current devices. Some RFID tags, such as battery-assisted passive tags, are also able to monitor temperature and humidity.[38] In 2011, the cost of passive tags started at US$0.09 each; special tags, meant to be mounted on metal or withstand gamma sterilization, could cost up to US$5. Active tags for tracking containers, medical assets, or monitoring environmental conditions in data centers started at US$50 and could be over US$100 each.[39]Battery-Assisted Passive (BAP) tags were in the US$3–10 range.[citation needed] RFID can be used in a variety of applications,[40][41]such as: In 2010, three factors drove a significant increase in RFID usage: decreased cost of equipment and tags, increased performance to a reliability of 99.9%, and a stable international standard around HF and UHF passive RFID. The adoption of these standards were driven by EPCglobal, a joint venture betweenGS1and GS1 US, which were responsible for driving global adoption of the barcode in the 1970s and 1980s. The EPCglobal Network was developed by theAuto-ID Center.[45] RFID provides a way for organizations to identify and manage stock, tools and equipment (asset tracking), etc. without manual data entry. Manufactured products such as automobiles or garments can be tracked through the factory and through shipping to the customer. Automatic identification with RFID can be used for inventory systems. 
Many organisations require that their vendors place RFID tags on all shipments to improvesupply chain management.[citation needed]Warehouse Management System[clarification needed]incorporate this technology to speed up the receiving and delivery of the products and reduce the cost of labor needed in their warehouses.[46] RFID is used foritem-level taggingin retail stores. This can enable more accurate and lower-labor-cost supply chain and store inventory tracking, as is done atLululemon, though physically locating items in stores requires more expensive technology.[47]RFID tags can be used at checkout; for example, at some stores of the French retailerDecathlon, customers performself-checkoutby either using a smartphone or putting items into a bin near the register that scans the tags without having to orient each one toward the scanner.[47]Some stores use RFID-tagged items to trigger systems that provide customers with more information or suggestions, such as fitting rooms atChaneland the "Color Bar" atKendra Scottstores.[47] Item tagging can also provide protection against theft by customers and employees by usingelectronic article surveillance(EAS). Tags of different types can be physically removed with a special tool or deactivated electronically when payment is made.[48]On leaving the shop, customers have to pass near an RFID detector; if they have items with active RFID tags, an alarm sounds, both indicating an unpaid-for item, and identifying what it is. Casinos can use RFID to authenticatepoker chips, and can selectively invalidate any chips known to be stolen.[49] RFID tags are widely used inidentification badges, replacing earliermagnetic stripecards. These badges need only be held within a certain distance of the reader to authenticate the holder. Tags can also be placed on vehicles, which can be read at a distance, to allow entrance to controlled areas without having to stop the vehicle and present a card or enter an access code.[citation needed] In 2010, Vail Resorts began using UHF Passive RFID tags in ski passes.[50] Facebook is using RFID cards at most of their live events to allow guests to automatically capture and post photos.[citation needed][when?] Automotive brands have adopted RFID for social media product placement more quickly than other industries. Mercedes was an early adopter in 2011 at thePGA Golf Championships,[51]and by the 2013 Geneva Motor Show many of the larger brands were using RFID for social media marketing.[52][further explanation needed] To prevent retailers diverting products, manufacturers are exploring the use of RFID tags on promoted merchandise so that they can track exactly which product has sold through the supply chain at fully discounted prices.[53][when?] Yard management, shipping and freight and distribution centers use RFID tracking. In therailroadindustry, RFID tags mounted on locomotives and rolling stock identify the owner, identification number and type of equipment and its characteristics. This can be used with a database to identify the type, origin, destination, etc. of the commodities being carried.[54] In commercial aviation, RFID is used to support maintenance on commercial aircraft. RFID tags are used to identify baggage and cargo at several airports and airlines.[55][56] Some countries are using RFID for vehicle registration and enforcement.[57]RFID can help detect and retrieve stolen cars.[58][59] RFID is used inintelligent transportation systems. 
InNew York City, RFID readers are deployed at intersections to trackE-ZPasstags as a means for monitoring the traffic flow. The data is fed through the broadband wireless infrastructure to the traffic management center to be used inadaptive traffic controlof the traffic lights.[60] Where ship, rail, or highway tanks are being loaded, a fixed RFID antenna contained in a transfer hose can read an RFID tag affixed to the tank, positively identifying it.[61] At least one company has introduced RFID to identify and locate underground infrastructure assets such asgaspipelines,sewer lines, electrical cables, communication cables, etc.[62] The first RFID passports ("E-passport") were issued byMalaysiain 1998. In addition to information also contained on the visual data page of the passport, Malaysian e-passports record the travel history (time, date, and place) of entry into and exit out of the country.[citation needed] Other countries that insert RFID in passports include Norway (2005),[63]Japan (March 1, 2006), mostEUcountries (around 2006), Singapore (2006), Australia, Hong Kong, the United States (2007), the United Kingdom and Northern Ireland (2006), India (June 2008), Serbia (July 2008), Republic of Korea (August 2008), Taiwan (December 2008), Albania (January 2009), The Philippines (August 2009), Republic of Macedonia (2010), Argentina (2012), Canada (2013), Uruguay (2015)[64]and Israel (2017). Standards for RFID passports are determined by theInternational Civil Aviation Organization(ICAO), and are contained in ICAO Document 9303, Part 1, Volumes 1 and 2 (6th edition, 2006). ICAO refers to theISO/IEC 14443RFID chips in e-passports as "contactless integrated circuits". ICAO standards provide for e-passports to be identifiable by a standard e-passport logo on the front cover. Since 2006, RFID tags included in newUnited States passportsstore the same information that is printed within the passport, and include a digital picture of the owner.[65]The United States Department of State initially stated the chips could only be read from a distance of 10 centimetres (3.9 in), but after widespread criticism and a clear demonstration that special equipment can read the test passports from 10 metres (33 ft) away,[66]the passports were designed to incorporate a thin metal lining to make it more difficult for unauthorized readers toskiminformation when the passport is closed. The department will also implementBasic Access Control(BAC), which functions as apersonal identification number(PIN) in the form of characters printed on the passport data page. Before a passport's tag can be read, this PIN must be entered into an RFID reader. The BAC also enables the encryption of any communication between the chip and interrogator.[67] In many countries, RFID tags can be used to pay for mass transit fares on bus, trains, or subways, or to collect tolls on highways. Somebike lockersare operated with RFID cards assigned to individual users. A prepaid card is required to open or enter a facility or locker and is used to track and charge based on how long the bike is parked.[citation needed] TheZipcarcar-sharing service uses RFID cards for locking and unlocking cars and for member identification.[68] In Singapore, RFID replaces paper Season Parking Ticket (SPT).[69] RFID tags for animals represent one of the oldest uses of RFID. Originally meant for large ranches and rough terrain, since the outbreak ofmad-cow disease, RFID has become crucial inanimal identificationmanagement. 
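As a rough sketch of how BAC ties chip access to the printed data page: ICAO Doc 9303 derives a key seed by hashing the machine-readable zone's document number, date of birth and date of expiry, each followed by its check digit. The Python sketch below (using only the standard hashlib module) shows this key-seed step and omits the subsequent session-key derivation and parity adjustment; the sample values are the ICAO worked-example MRZ fields, and the code should be read as an approximation rather than a reference implementation.

```python
import hashlib

def icao_check_digit(field: str) -> str:
    """ICAO 9303 check digit: weights 7, 3, 1 over digits, letters A-Z, and '<'."""
    values = {c: i for i, c in enumerate("0123456789")}
    values.update({c: i + 10 for i, c in enumerate("ABCDEFGHIJKLMNOPQRSTUVWXYZ")})
    values["<"] = 0
    total = sum(values[c] * (7, 3, 1)[i % 3] for i, c in enumerate(field))
    return str(total % 10)

def bac_key_seed(document_number: str, birth_date: str, expiry_date: str) -> bytes:
    """16-byte BAC key seed: SHA-1 over the MRZ fields plus their check digits."""
    mrz_info = (document_number + icao_check_digit(document_number)
                + birth_date + icao_check_digit(birth_date)
                + expiry_date + icao_check_digit(expiry_date))
    return hashlib.sha1(mrz_info.encode("ascii")).digest()[:16]

# Document number, birth date and expiry date (YYMMDD) from the ICAO example MRZ.
print(bac_key_seed("L898902C3", "740812", "120415").hex())
```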
Animplantable RFID tagortranspondercan also be used for animal identification. The transponders are better known as PIT (Passive Integrated Transponder) tags, passive RFID, or "chips" on animals.[70]TheCanadian Cattle Identification Agencybegan using RFID tags as a replacement for barcode tags. Currently, CCIA tags are used inWisconsinand by United States farmers on a voluntary basis. TheUSDAis currently developing its own program. RFID tags are required for all cattle sold in Australia and in some states, sheep and goats as well.[71] Biocompatiblemicrochip implantsthat use RFID technology are being routinely implanted in humans. The first-ever human to receive an RFID microchip implant was American artistEduardo Kacin 1997.[72][73]Kac implanted the microchip live on television (and also live on the Internet) in the context of his artworkTime Capsule.[74]A year later, British professor ofcyberneticsKevin Warwickhad an RFID chip implanted in his arm by hisgeneral practitioner, George Boulos.[75][76]In 2004, the 'Baja Beach Club' operated byConrad ChaseinBarcelona[77]andRotterdamoffered implanted chips to identify their VIP customers, who could in turn use it to pay for service. In 2009, British scientistMark Gassonhad an advanced glass capsule RFID device surgically implanted into his left hand and subsequently demonstrated how a computer virus could wirelessly infect his implant and then be transmitted on to other systems.[78] TheFood and Drug Administrationin the United States approved the use of RFID chips in humans in 2004.[79] There is controversy regarding human applications of implantable RFID technology including concerns that individuals could potentially be tracked by carrying an identifier unique to them. Privacy advocates have protested against implantable RFID chips, warning of potential abuse. Some are concerned this could lead to abuse by an authoritarian government, to removal of freedoms,[80]and to the emergence of an "ultimatepanopticon", a society where all citizens behave in a socially accepted manner because others might be watching.[81] On July 22, 2006, Reuters reported that two hackers, Newitz and Westhues, at a conference in New York City demonstrated that they could clone the RFID signal from a human implanted RFID chip, indicating that the device was not as secure as was previously claimed.[82] The UFO religionUniverse Peopleis notorious online for their vocal opposition to human RFID chipping, which they claim is asaurianattempt to enslave the human race; one of their web domains is "dont-get-chipped".[83][84][85] Adoption of RFID in the medical industry has been widespread and very effective.[86]Hospitals are among the first users to combine both active and passive RFID.[87]Active tags track high-value, or frequently moved items, and passive tags track smaller, lower cost items that only need room-level identification.[88]Medical facility rooms can collect data from transmissions of RFID badges worn by patients and employees, as well as from tags assigned to items such as mobile medical devices.[89]TheU.S. Department of Veterans Affairs (VA)recently announced plans to deploy RFID in hospitals across America to improve care and reduce costs.[90] Since 2004, a number of U.S. 
hospitals have begun implanting patients with RFID tags and using RFID systems; the systems are typically used for workflow and inventory management.[91][92][93]The use of RFID to prevent mix-ups betweenspermandovainIVFclinics is also being considered.[94] In October 2004, the FDA approved the USA's first RFID chips that can be implanted in humans. The 134 kHz RFID chips, from VeriChip Corp. can incorporate personal medical information and could save lives and limit injuries from errors in medical treatments, according to the company. Anti-RFID activistsKatherine AlbrechtandLiz McIntyrediscovered anFDA Warning Letterthat spelled out health risks.[95]According to the FDA, these include "adverse tissue reaction", "migration of the implanted transponder", "failure of implanted transponder", "electrical hazards" and "magnetic resonance imaging [MRI] incompatibility." Libraries have used RFID to replace the barcodes on library items. The tag can contain identifying information or may just be a key into a database. An RFID system may replace or supplement bar codes and may offer another method of inventory management and self-service checkout by patrons. It can also act as asecuritydevice, taking the place of the more traditionalelectromagnetic security strip.[96] It is estimated that over 30 million library items worldwide now contain RFID tags, including some in theVatican LibraryinRome.[97] Since RFID tags can be read through an item, there is no need to open a book cover or DVD case to scan an item, and a stack of books can be read simultaneously. Book tags can be read while books are in motion on aconveyor belt, which reduces staff time. This can all be done by the borrowers themselves, reducing the need for library staff assistance. With portable readers, inventories could be done on a whole shelf of materials within seconds.[98]However, as of 2008, this technology remained too costly for many smaller libraries, and the conversion period has been estimated at 11 months for an average-size library. A 2004 Dutch estimate was that a library which lends 100,000 books per year should plan on a cost of €50,000 (borrow- and return-stations: 12,500 each, detection porches 10,000 each; tags 0.36 each). RFID taking a large burden off staff could also mean that fewer staff will be needed, resulting in some of them getting laid off,[97]but that has so far not happened in North America where recent surveys have not returned a single library that cut staff because of adding RFID.[citation needed][99]In fact, library budgets are being reduced for personnel and increased for infrastructure, making it necessary for libraries to add automation to compensate for the reduced staff size.[citation needed][99]Also, the tasks that RFID takes over are largely not the primary tasks of librarians.[citation needed][99]A finding in the Netherlands is that borrowers are pleased with the fact that staff are now more available for answering questions.[citation needed][99] Privacy concerns have been raised surrounding library use of RFID.[100][101]Because some RFID tags can be read up to 100 metres (330 ft) away, there is some concern over whether sensitive information could be collected from an unwilling source. 
However, library RFID tags do not contain any patron information,[102]and the tags used in the majority of libraries use a frequency only readable from approximately 10 feet (3.0 m).[96]Another concern is that a non-library agency could potentially record the RFID tags of every person leaving the library without the library administrator's knowledge or consent. One simple option is to let the book transmit a code that has meaning only in conjunction with the library's database. Another possible enhancement would be to give each book a new code every time it is returned. In future, should readers become ubiquitous (and possibly networked), then stolen books could be traced even outside the library. Tag removal could be made difficult if the tags are so small that they fit invisibly inside a (random) page, possibly put there by the publisher.[citation needed] RFID technologies are now[when?]also implemented in end-user applications in museums.[103]An example was the custom-designed temporary research application, "eXspot", at theExploratorium, a science museum inSan Francisco,California. A visitor entering the museum received an RF tag that could be carried as a card. The eXspot system enabled the visitor to receive information about specific exhibits. Aside from the exhibit information, the visitor could take photographs of themselves at the exhibit. It was also intended to allow the visitor to take data for later analysis. The collected information could be retrieved at home from a "personalized" website keyed to the RFID tag.[104] In 2004, school authorities in the Japanese city ofOsakamade a decision to start chipping children's clothing, backpacks, and student IDs in a primary school.[105]Later, in 2007, a school inDoncaster, England, piloted a monitoring system designed to keep tabs on pupils by tracking radio chips in their uniforms.[106][when?]St Charles Sixth Form Collegein westLondon, England, starting in 2008, uses an RFID card system to check in and out of the main gate, to both track attendance and prevent unauthorized entrance. Similarly,Whitcliffe Mount SchoolinCleckheaton, England, uses RFID to track pupils and staff in and out of the building via a specially designed card. In the Philippines, during 2012, some schools already[when?]use RFID in IDs for borrowing books.[107][unreliable source?]Gates in those particular schools also have RFID scanners for buying items at school shops and canteens. RFID is also used in school libraries, and to sign in and out for student and teacher attendance.[99] RFID for timing racesbegan in the early 1990s with pigeon racing, introduced by the companyDeister Electronicsin Germany. RFID can provide race start and end timings for individuals in large races where it is impossible to get accurate stopwatch readings for every entrant.[citation needed] In races using RFID, racers wear tags that are read by antennas placed alongside the track or on mats across the track. UHF tags provide accurate readings with specially designed antennas. Rush error,[clarification needed]lap count errors and accidents at race start are avoided, as anyone can start and finish at any time without being in a batch mode.[clarification needed] The design of the chip and of the antenna controls the range from which it can be read. Short range compact chips are twist tied to the shoe, or strapped to the ankle withhook-and-loop fasteners. The chips must be about 400 mm from the mat, therefore giving very good temporal resolution. 
Alternatively, a chip plus a very large (125 mm square) antenna can be incorporated into the bib number worn on the athlete's chest at a height of about 1.25 m (4.1 ft). Passive and active RFID systems are used in off-road events such as Orienteering, Enduro and Hare and Hounds racing. Riders have a transponder on their person, normally on their arm. When they complete a lap they swipe or touch the receiver, which is connected to a computer, to log their lap time. RFID has been adopted by many recruitment agencies which have a physical endurance test (PET) as their qualifying procedure, especially in cases where candidate volumes may run into millions (Indian Railway recruitment cells, police and the power sector). A number of ski resorts have adopted RFID tags to provide skiers hands-free access to ski lifts, so that skiers do not have to take their passes out of their pockets. Ski jackets have a left pocket into which the chip-plus-card fits; this nearly contacts the sensor unit on the left of the turnstile as the skier pushes through to the lift. These systems were based on high frequency (HF) at 13.56 MHz. The bulk of ski areas in Europe, from Verbier to Chamonix, use these systems.[108][109][110] The NFL in the United States equips players with RFID chips that measure speed, distance and direction traveled by each player in real time. Currently, cameras stay focused on the quarterback; however, numerous plays are happening simultaneously on the field. The RFID chip will provide new insight into these simultaneous plays.[111] The chip triangulates the player's position to within six inches and will be used to digitally broadcast replays. The RFID chip will make individual player information accessible to the public, with the data available via the NFL 2015 app.[112] The RFID chips are manufactured by Zebra Technologies, which tested them in 18 stadiums to track vector data.[113] RFID tags are often a complement to, but not a substitute for, Universal Product Code (UPC) or European Article Number (EAN) barcodes. They may never completely replace barcodes, due in part to their higher cost and the advantage of multiple data sources on the same object. Also, unlike RFID labels, barcodes can be generated and distributed electronically, e.g. by e-mail or mobile phone, for printing or display by the recipient; an example is airline boarding passes. The new EPC, along with several other schemes, is widely available at reasonable cost. The storage of data associated with tracking items will require many terabytes, and filtering and categorizing RFID data is needed to create useful information. It is likely that goods will be tracked by the pallet using RFID tags, and at the package level with UPC or EAN from unique barcodes. A unique identity is a mandatory requirement for RFID tags, regardless of the choice of numbering scheme. RFID tag data capacity is large enough for each individual tag to have a unique code, while current barcodes are limited to a single type code for a particular product. This uniqueness means that a product may be tracked as it moves from location to location while being delivered to a person, which may help to combat theft and other forms of product loss. The tracing of products is an important feature that is well supported with RFID tags, which contain a unique identity of the tag and the serial number of the object.
This may help companies cope with quality deficiencies and the resulting recall campaigns, but it also contributes to concern about the tracking and profiling of persons after the sale. Since around 2007, there has been increasing development in the use of RFID in the waste management industry. RFID tags are installed on waste collection carts, linking carts to the owner's account for easy billing and service verification.[114] The tag is embedded into a garbage and recycle container, and the RFID reader is affixed to the garbage and recycle trucks.[115] RFID also measures a customer's set-out rate and provides insight into the number of carts serviced by each waste collection vehicle. This RFID process replaces traditional "pay as you throw" (PAYT) municipal solid waste usage-pricing models. Active RFID tags have the potential to function as low-cost remote sensors that broadcast telemetry back to a base station. Applications of tagometry data could include sensing of road conditions by implanted beacons, weather reports, and noise level monitoring.[116] Passive RFID tags can also report sensor data; for example, the Wireless Identification and Sensing Platform is a passive tag that reports temperature, acceleration and capacitance to commercial Gen2 RFID readers. It is possible that active or battery-assisted passive (BAP) RFID tags could broadcast a signal to an in-store receiver to determine whether the RFID tag (and, by extension, the product it is attached to) is in the store. To avoid injuries to humans and animals, RF transmission needs to be controlled.[117] A number of organizations have set standards for RFID, including the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC), ASTM International, the DASH7 Alliance and EPCglobal.[118] Several specific industries have also set guidelines, including the Financial Services Technology Consortium (FSTC) for tracking IT assets with RFID, the Computing Technology Industry Association (CompTIA) for certifying RFID engineers, and the International Air Transport Association (IATA) for luggage in airports. Every country can set its own rules for frequency allocation for RFID tags, and not all radio bands are available in all countries. These frequencies are known as the ISM bands (Industrial, Scientific and Medical bands). The return signal of the tag may still cause interference for other radio users. In North America, UHF can be used unlicensed for 902–928 MHz (±13 MHz from the 915 MHz center frequency), but restrictions exist on transmission power. In Europe, RFID and other low-power radio applications are regulated by ETSI recommendations EN 300 220 and EN 302 208, and ERO recommendation 70 03, allowing RFID operation with somewhat complex band restrictions from 865–868 MHz. Readers are required to monitor a channel before transmitting ("listen before talk"); this requirement has led to some restrictions on performance, the resolution of which is a subject of ongoing research.
The North American UHF standard is not accepted in France, as it interferes with its military bands. On July 25, 2012, Japan changed its UHF band to 920 MHz, more closely matching the United States' 915 MHz band and establishing an international standard environment for RFID. In some countries a site license is needed; it must be applied for from the local authorities and can be revoked. As of 31 October 2014, regulations were in place in 78 countries representing approximately 96.5% of the world's GDP, and work on regulations was in progress in three countries representing approximately 1% of the world's GDP.[119] A number of standards have been developed for RFID. In order to ensure global interoperability of products, several organizations have set up additional standards for RFID testing. These standards include conformance, performance and interoperability tests. EPC Gen2 is short for EPCglobal UHF Class 1 Generation 2. EPCglobal, a joint venture between GS1 and GS1 US, is working on international standards for the use of mostly passive RFID and the Electronic Product Code (EPC) in the identification of many items in the supply chain for companies worldwide. One of the missions of EPCglobal was to simplify the Babel of protocols prevalent in the RFID world in the 1990s. Two tag air interfaces (the protocol for exchanging information between a tag and a reader) were defined (but not ratified) by EPCglobal prior to 2003. These protocols, commonly known as Class 0 and Class 1, saw significant commercial implementation in 2002–2005.[121] In 2004, the Hardware Action Group created a new protocol, the Class 1 Generation 2 interface, which addressed a number of problems that had been experienced with Class 0 and Class 1 tags. The EPC Gen2 standard was approved in December 2004, after a contention from Intermec that the standard might infringe a number of their RFID-related patents; it was decided that the standard itself does not infringe their patents, making the standard royalty-free.[122] The EPC Gen2 standard was adopted with minor modifications as ISO 18000-6C in 2006.[123] In 2007, the lowest-cost Gen2 EPC inlay was offered by the now-defunct company SmartCode, at a price of $0.05 apiece in volumes of 100 million or more.[124] Not every successful reading of a tag (an observation) is useful for business purposes; a large amount of data may be generated that is not useful for managing inventory or other applications. For example, a customer moving a product from one shelf to another, or a pallet load of articles that passes several readers while being moved in a warehouse, are events that do not produce data that are meaningful to an inventory control system.[125] Event filtering is required to reduce this data inflow to a meaningful depiction of moving goods passing a threshold. Various concepts have been designed, mainly offered as middleware that performs the filtering from noisy and redundant raw data to significant processed data.
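Such filtering can be illustrated with a minimal sketch. The class and function names below are hypothetical rather than any particular middleware product's API; the only assumption is that each raw read carries a tag identifier, a reader identifier, and a timestamp.

```python
from dataclasses import dataclass

@dataclass
class RawRead:
    tag_id: str       # EPC or other tag identifier
    reader_id: str    # which portal or antenna produced the read
    timestamp: float  # seconds since some epoch

def filter_reads(reads, window=5.0):
    """Collapse bursts of redundant reads into single observations.

    A tag seen repeatedly by the same reader within `window` seconds is
    reported only once, which discards the noise produced by a pallet
    sitting in front of a portal or being waved past several antennas.
    """
    last_seen = {}  # (tag_id, reader_id) -> timestamp of the last read
    events = []
    for r in sorted(reads, key=lambda r: r.timestamp):
        key = (r.tag_id, r.reader_id)
        if key not in last_seen or r.timestamp - last_seen[key] > window:
            events.append(r)          # a fresh observation worth reporting
        last_seen[key] = r.timestamp  # refresh the suppression window
    return events
```

Real middleware adds further stages (smoothing over momentarily dropped reads, mapping reader identifiers to business locations, and forwarding the resulting events to an inventory system), but the basic shape is the same: many raw reads in, few meaningful events out.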
The frequencies used for UHF RFID in the USA are, as of 2007, incompatible with those of Europe or Japan. Furthermore, no emerging standard has yet become as universal as the barcode.[126] To address international trade concerns, it is necessary to use a tag that is operational within all of the international frequency domains. A primary RFID security concern is the illicit tracking of RFID tags. Tags, which are world-readable, pose a risk to both personal location privacy and corporate or military security. Such concerns have been raised with respect to the United States Department of Defense's adoption of RFID tags for supply chain management.[127] More generally, privacy organizations have expressed concerns in the context of ongoing efforts to embed electronic product code (EPC) RFID tags in general-use products. This is mostly because RFID tags can be read, and legitimate transactions with readers can be eavesdropped on, from non-trivial distances. RFID used in access control,[128] payment and eID (e-passport) systems operates at a shorter range than EPC RFID systems but is also vulnerable to skimming and eavesdropping, albeit at shorter distances.[129] A second method of prevention is the use of cryptography. Rolling codes and challenge–response authentication (CRA) are commonly used to foil the replay of recorded messages between the tag and reader, as any messages that have been recorded would prove unsuccessful on repeat transmission. Rolling codes rely upon the tag's ID being changed after each interrogation, while CRA uses software to ask for a cryptographically coded response from the tag. The protocols used during CRA can be symmetric, or may use public-key cryptography.[130] While a variety of secure protocols have been suggested for RFID tags, in order to support long read range at low cost, many RFID tags have barely enough power available to support even very low-power, and therefore simple, security protocols such as cover-coding.[131]
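As an illustration of the challenge–response idea, the exchange can be sketched in a few lines. This is a schematic only, assuming a shared secret key and an HMAC primitive; real low-cost tags use standardized air-interface commands and much lighter-weight cryptography than is assumed here.

```python
import hashlib
import hmac
import os

SHARED_KEY = os.urandom(16)  # secret provisioned into both tag and reader

def tag_respond(challenge: bytes) -> bytes:
    """The tag computes a keyed response to the reader's fresh challenge."""
    return hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()

def reader_authenticate() -> bool:
    """The reader sends a random nonce and checks the tag's response."""
    challenge = os.urandom(16)        # fresh nonce for every interrogation
    response = tag_respond(challenge)
    expected = hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(response, expected)

print(reader_authenticate())  # True only for a tag that knows the key
```

Because each challenge is fresh, a previously recorded response is useless on replay, which is what distinguishes this approach from simply reading back a static identifier.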
Unauthorized reading of RFID tags presents a risk to privacy and to business secrecy.[132] Unauthorized readers can potentially use RFID information to identify or track packages, persons, carriers, or the contents of a package.[130] Several prototype systems are being developed to combat unauthorized reading, including RFID signal interruption,[133] as well as the possibility of legislation; 700 scientific papers have been published on this matter since 2002.[134] There are also concerns that the database structure of the Object Naming Service may be susceptible to infiltration, similar to denial-of-service attacks, after the EPCglobal Network ONS root servers were shown to be vulnerable.[135] Microchip-induced tumours have been noted during animal trials.[136][137] In an effort to prevent the passive "skimming" of RFID-enabled cards or passports, the U.S. General Services Administration (GSA) issued a set of test procedures for evaluating electromagnetically opaque sleeves.[138] For shielding products to be in compliance with FIPS-201 guidelines, they must meet or exceed this published standard; compliant products are listed on the website of the U.S. CIO's FIPS-201 Evaluation Program.[139] The United States government requires that when new ID cards are issued, they must be delivered with an approved shielding sleeve or holder.[140] Although many wallets and passport holders are advertised to protect personal information, there is little evidence that RFID skimming is a serious threat; data encryption and the use of EMV chips rather than RFID make this sort of theft rare.[141][142] There are contradictory opinions as to whether aluminum can prevent the reading of RFID chips. Some people claim that aluminum shielding, essentially creating a Faraday cage, does work.[143] Others claim that simply wrapping an RFID card in aluminum foil only makes transmission more difficult and is not completely effective at preventing it.[144] Shielding effectiveness depends on the frequency being used. Low-frequency LowFID tags, like those used in implantable devices for humans and pets, are relatively resistant to shielding, although thick metal foil will prevent most reads. High-frequency HighFID tags (13.56 MHz: smart cards and access badges) are sensitive to shielding and are difficult to read when within a few centimetres of a metal surface. UHF Ultra-HighFID tags (pallets and cartons) are difficult to read when placed within a few millimetres of a metal surface, although their read range is actually increased when they are spaced 2–4 cm from a metal surface, due to positive reinforcement of the reflected wave and the incident wave at the tag.[145] The use of RFID has engendered considerable controversy, and some consumer privacy advocates have initiated product boycotts. Consumer privacy experts Katherine Albrecht and Liz McIntyre are two prominent critics of the "spychip" technology. The two main privacy concerns regarding RFID are that tags affixed to products remain readable after purchase and that they can be read at a distance without the owner's knowledge. Most concerns revolve around the fact that RFID tags affixed to products remain functional even after the products have been purchased and taken home; thus, they may be used for surveillance and other purposes unrelated to their supply-chain inventory functions.[146] The RFID Network responded to these fears in the first episode of their syndicated cable TV series, saying that they are unfounded, and let RF engineers demonstrate how RFID works.[147] They provided images of RF engineers driving an RFID-enabled van around a building and trying to take an inventory of items inside, and they also discussed satellite tracking of a passive RFID tag. The concerns raised may be addressed in part by use of the Clipped Tag, an RFID tag designed to increase privacy for the purchaser of an item, suggested by IBM researchers Paul Moskowitz and Guenter Karjoth. After the point of sale, a person may tear off a portion of the tag. This allows the transformation of a long-range tag into a proximity tag that may still be read, but only at short range – less than a few inches or centimeters. The modification of the tag may be confirmed visually, and the tag may still be used later for returns, recalls, or recycling. However, read range is a function of both the reader and the tag itself. Improvements in technology may increase read ranges for tags, and tags may be read at longer ranges than they are designed for by increasing reader power. The limit on read distance then becomes the signal-to-noise ratio of the signal reflected from the tag back to the reader. Researchers at two security conferences have demonstrated that passive Ultra-HighFID tags normally read at ranges of up to 30 feet can be read at ranges of 50 to 69 feet using suitable equipment.[148][149] In January 2004, privacy advocates from CASPIAN and the German privacy group FoeBuD were invited to the METRO Future Store in Germany, where an RFID pilot project was implemented. It was uncovered by accident that METRO "Payback" customer loyalty cards contained RFID tags with customer IDs, a fact that was disclosed neither to customers receiving the cards nor to this group of privacy advocates.
This happened despite assurances by METRO that no customer identification data was tracked and that all RFID usage was clearly disclosed.[150] During the UN World Summit on the Information Society (WSIS) in November 2005, Richard Stallman, the founder of the free software movement, protested the use of RFID security cards by covering his card with aluminum foil.[151] In 2004–2005, Federal Trade Commission staff conducted a workshop and review of RFID privacy concerns and issued a report recommending best practices.[152] RFID was one of the main topics of the 2006 Chaos Communication Congress (organized by the Chaos Computer Club in Berlin) and triggered a large press debate. Topics included electronic passports, Mifare cryptography and the tickets for the FIFA World Cup 2006. Talks showed how the first real-world mass application of RFID at the 2006 FIFA Football World Cup worked. The group monochrom staged a "Hack RFID" song.[153] Some individuals have grown to fear the loss of rights due to RFID human implantation. By early 2007, Chris Paget of San Francisco, California, showed that RFID information could be pulled from a US passport card using only $250 worth of equipment. This suggests that, with the information captured, it would be possible to clone such cards.[154] According to ZDNet, critics believe that RFID will lead to tracking individuals' every movement and will be an invasion of privacy.[155] In the book SpyChips: How Major Corporations and Government Plan to Track Your Every Move by Katherine Albrecht and Liz McIntyre, one is encouraged to "imagine a world of no privacy. Where your every purchase is monitored and recorded in a database and your every belonging is numbered. Where someone many states away or perhaps in another country has a record of everything you have ever bought. What's more, they can be tracked and monitored remotely".[156] According to an RSA Laboratories FAQ, RFID tags can be destroyed by a standard microwave oven;[157] however, some types of RFID tags, particularly those constructed to radiate using large metallic antennas (in particular RF tags and EPC tags), may catch fire if subjected to this process for too long (as would any metallic item inside a microwave oven). This simple method cannot safely be used to deactivate RFID features in electronic devices, or those implanted in living tissue, because of the risk of damage to the "host". However, the time required is extremely short (a second or two of radiation), and the method works on many other non-electronic and inanimate items, long before heat or fire become a concern.[158] Some RFID tags implement a "kill command" mechanism to permanently and irreversibly disable them. This mechanism can be applied if the chip itself is trusted or the mechanism is known to the person who wants to "kill" the tag. UHF RFID tags that comply with the EPC Gen2 Class 1 standard usually support this mechanism, while protecting the chip from being killed with a password.[159] Guessing or cracking this 32-bit kill password would not be difficult for a determined attacker.[160]
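The weakness of the 32-bit kill password can be made concrete with a rough, purely illustrative estimate; the guess rates below are assumptions, not measured reader throughput.

```python
# Illustrative estimate only: real attack speed depends on the reader,
# the air-interface timing, and any throttling the tag implements.
keyspace = 2 ** 32  # all possible 32-bit kill passwords (about 4.3 billion)

for guesses_per_second in (1_000, 10_000, 100_000):
    worst_case_days = keyspace / guesses_per_second / 86_400
    print(f"{guesses_per_second:>7} guesses/s -> "
          f"{worst_case_days:8.1f} days worst case, "
          f"{worst_case_days / 2:8.1f} days on average")
```

Even at a modest thousand guesses per second the entire key space is covered in under two months, and faster query rates shrink that to days or hours, consistent with the observation above that a determined attacker would not find the password difficult to recover.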
https://en.wikipedia.org/wiki/RFID
In linguistics, the empty category principle (ECP) was proposed in Noam Chomsky's syntactic framework of government and binding theory. The ECP is supposed to be a universal syntactic constraint that requires certain types of empty categories, namely traces, to be properly governed. The ECP is a principle of transformational grammar by which traces must be visible, i.e. they must be identifiable as empty positions in the surface structure, similar to the principle of reconstruction for deletion. Thus an empty category is in a position subcategorized for by a verb; in government and binding theory this is known as proper government. Proper government occurs either if the empty position is governed by a lexical category (especially if it is not a subject), which is theta-government, or if it is coindexed with a maximal projection which governs it, which is antecedent-government. The ECP has been revised many times and is now a central part of government and binding theory.[1] In spite of its name, the ECP applies to only two of the four types of null DPs: specifically, it applies to DP-traces and wh-traces, but not to PRO and pro. The chief function of the ECP is to place constraints on the movement of categories by the rule of move α; it effectively allows a tree structure to "remember" what has happened at earlier stages of a derivation, and it can be seen as GB's version of the older derivational constraints.[2] Formally, the ECP states that a trace must be properly governed, that is, either theta-governed or antecedent-governed. The ECP is a way of accounting for, among other things, the empirical fact that it is generally more difficult to extract a wh-word from a subject position than from an object position. Intermediate traces must be deleted because they cannot be properly governed: theta-government is impossible because of the position they occupy, Spec-CP, and the only possible antecedent-governor would be an overt NP (a wh-word), but the Minimality Condition would always be violated because of the tensed I (which must be present in all matrix clauses); the tensed I would c-command the intermediate trace but would not c-command the wh-word. Intermediate traces must therefore be deleted at logical form so that they can avoid the ECP. In the case of object extraction (where the trace is a complement of VP), theta-government is the only possible option. In the case of subject extraction (where the trace is in Spec-IP), antecedent-government is the only possible option. If the trace is in Spec-IP and there is an overt complementizer (such as that), the sentence is ungrammatical because the ECP is violated: the closest potential governor would be the complementizer, which cannot antecedent-govern the trace because it is not coindexed with it (and theta-government is impossible since the trace is in Spec-IP). For example, in the sentence Who do you think (that) John will invite? the ECP works in the following way (the structure is given for the embedded clause only):
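A schematic bracketing of the embedded clause consistent with the description above (an illustration only, with indices marking the moved wh-word and its traces) is:

[CP t_i' (that) [IP John will [VP invite t_i ]]]

Here who_i has moved on to the matrix Spec-CP. The original trace t_i in object position is theta-governed by the verb invite, so the ECP is satisfied whether or not that is present; the intermediate trace t_i' in the embedded Spec-CP cannot be properly governed and is deleted at logical form, as described above.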
https://en.wikipedia.org/wiki/Empty_Category_Principle
Probabilistic Signature Scheme (PSS) is a cryptographic signature scheme designed by Mihir Bellare and Phillip Rogaway.[1] RSA-PSS is an adaptation of their work and is standardized as part of PKCS#1 v2.1. In general, RSA-PSS should be used as a replacement for RSA-PKCS#1 v1.5. PSS was specifically developed to allow modern methods of security analysis to prove that its security directly relates to that of the RSA problem. There is no such proof for the traditional PKCS#1 v1.5 scheme.
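As a concrete illustration, RSA-PSS signing and verification are exposed by common cryptographic libraries. The sketch below uses the Python cryptography package (an assumed environment, not part of the scheme itself) with SHA-256 and MGF1, which are typical parameter choices under PKCS#1 v2.1.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Generate a key pair; in practice the key would be loaded from storage.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"message to be signed"

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

# Sign with RSA-PSS (the random salt makes each signature different).
signature = private_key.sign(message, pss, hashes.SHA256())

# Verification raises InvalidSignature if the message or signature is wrong.
private_key.public_key().verify(signature, message, pss, hashes.SHA256())
print("signature verified")
```

The random salt is what makes the scheme probabilistic: signing the same message twice generally produces different signatures, both of which verify against the same public key.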
https://en.wikipedia.org/wiki/Probabilistic_signature_scheme
A knight's tour is a sequence of moves of a knight on a chessboard such that the knight visits every square exactly once. If the knight ends on a square that is one knight's move from the beginning square (so that it could tour the board again immediately, following the same path), the tour is "closed", or "re-entrant"; otherwise, it is "open".[1][2] The knight's tour problem is the mathematical problem of finding a knight's tour. Creating a program to find a knight's tour is a common problem given to computer science students.[3] Variations of the knight's tour problem involve chessboards of different sizes than the usual 8 × 8, as well as irregular (non-rectangular) boards. The knight's tour problem is an instance of the more general Hamiltonian path problem in graph theory, and the problem of finding a closed knight's tour is similarly an instance of the Hamiltonian cycle problem. Unlike the general Hamiltonian path problem, the knight's tour problem can be solved in linear time.[4] The earliest known reference to the knight's tour problem dates back to the 9th century AD. In Rudrata's Kavyalankara[5] (5.15), a Sanskrit work on poetics, the pattern of a knight's tour on a half-board is presented as an elaborate poetic figure (citra-alaṅkāra) called the turagapadabandha, or 'arrangement in the steps of a horse'. The same verse in four lines of eight syllables each can be read from left to right or by following the path of the knight on tour. Since the Indic writing systems used for Sanskrit are syllabic, each syllable can be thought of as representing a square on a chessboard. In Rudrata's example, transliterated, the first line can be read from left to right or by moving from the first square to the second line, third syllable (2.3), and then to 1.5, 2.7, 4.8, 3.6, 4.4 and 3.2. The Sri Vaishnava poet and philosopher Vedanta Desika, during the 14th century, in his 1,008-verse magnum opus praising the deity Ranganatha's divine sandals of Srirangam, the Paduka Sahasram (in chapter 30: Chitra Paddhati), composed two consecutive Sanskrit verses containing 32 letters each (in Anushtubh meter) where the second verse can be derived from the first verse by performing a knight's tour on a 4 × 8 board, starting from the top-left corner.[6] In the transliterated 19th verse, the knight visits the squares of the 4 × 8 board in the following order (each entry gives that square's position in the tour):

(1) (30) (9) (20) (3) (24) (11) (26)
(16) (19) (2) (29) (10) (27) (4) (23)
(31) (8) (17) (14) (21) (6) (25) (12)
(18) (15) (32) (7) (28) (13) (22) (5)

The 20th verse, which can be obtained by performing the knight's tour on the above verse, is as follows:

sThi thA sa ma ya rA ja thpA ga tha rA mA dha kE ga vi |
dhu ran ha sAm sa nna thA dhA sA dhyA thA pa ka rA sa rA ||

It is believed that Desika composed all 1,008 verses (including the special Chaturanga Turanga Padabandham mentioned above) in a single night as a challenge.[7] A tour reported in the fifth book of the Bhagavantabhaskara by Bhat Nilakantha, a cyclopedic work in Sanskrit on ritual, law and politics written either about 1600 or about 1700, describes three knight's tours. The tours are not only re-entrant but also symmetrical, and the verses are based on the same tour, starting from different squares.[8] Nilakantha's work is an extraordinary achievement, being a fully symmetric closed tour that predates the work of Euler (1759) by at least 60 years. After Nilakantha, one of the first mathematicians to investigate the knight's tour was Leonhard Euler. The first procedure for completing the knight's tour was Warnsdorf's rule, first described in 1823 by H. C. von Warnsdorf.
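The graph-theoretic framing mentioned above can be made concrete: build a graph whose vertices are the squares of the board and whose edges are the legal knight moves between them; a knight's tour is then exactly a Hamiltonian path in this graph, and a closed tour is a Hamiltonian cycle. A minimal sketch (the board size and representation are arbitrary choices made for illustration):

```python
def knight_graph(n=8):
    """Adjacency lists of the knight-move graph on an n x n board."""
    deltas = [(1, 2), (2, 1), (2, -1), (1, -2),
              (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
    graph = {}
    for r in range(n):
        for c in range(n):
            graph[(r, c)] = [(r + dr, c + dc) for dr, dc in deltas
                             if 0 <= r + dr < n and 0 <= c + dc < n]
    return graph

g = knight_graph()
print(len(g[(0, 0)]), len(g[(3, 3)]))  # a corner square has 2 moves, a central one 8
```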
In the 20th century, the Oulipo group of writers, among many others, used it. The most notable example is the 10 × 10 knight's tour which sets the order of the chapters in Georges Perec's novel Life a User's Manual. The sixth game of the World Chess Championship 2010 between Viswanathan Anand and Veselin Topalov saw Anand making 13 consecutive knight moves (albeit using both knights); online commentators jested that Anand was trying to solve the knight's tour problem during the game. Schwenk[10] proved that for any m × n board with m ≤ n, a closed knight's tour is always possible unless one or more of these three conditions are met: m and n are both odd; m = 1, 2, or 4; or m = 3 and n = 4, 6, or 8. Cull et al. and Conrad et al. proved that on any rectangular board whose smaller dimension is at least 5, there is a (possibly open) knight's tour.[4][11] For any m × n board with m ≤ n, a (possibly open) knight's tour is always possible unless one or more of these three conditions are met: m = 1 or 2; m = 3 and n = 3, 5, or 6; or m = 4 and n = 4. On an 8 × 8 board, there are exactly 26,534,728,821,064 directed closed tours (i.e. two tours along the same path that travel in opposite directions are counted separately, as are rotations and reflections).[14][15][16] The number of undirected closed tours is half this number, since every tour can be traced in reverse. There are 9,862 undirected closed tours on a 6 × 6 board.[17] There are several ways to find a knight's tour on a given board with a computer. Some of these methods are algorithms, while others are heuristics. A brute-force search for a knight's tour is impractical on all but the smallest boards.[18] On an 8 × 8 board, for instance, there are 13,267,364,410,532 knight's tours,[14] and a much greater number of sequences of knight moves of the same length. It is well beyond the capacity of modern computers (or networks of computers) to perform operations on such a large set. However, the size of this number is not indicative of the difficulty of the problem, which can be solved "by using human insight and ingenuity ... without much difficulty."[18] By dividing the board into smaller pieces, constructing tours on each piece, and patching the pieces together, one can construct tours on most rectangular boards in linear time – that is, in a time proportional to the number of squares on the board.[11][19] Warnsdorf's rule is a heuristic for finding a single knight's tour. The knight is moved so that it always proceeds to the square from which the knight will have the fewest onward moves. When calculating the number of onward moves for each candidate square, moves that would revisit any square already visited are not counted. It is possible to have two or more choices for which the number of onward moves is equal; there are various methods for breaking such ties, including one devised by Pohl[20] and another by Squirrel and Cull.[21] This rule may also be applied more generally to any graph: in graph-theoretic terms, each move is made to the adjacent vertex with the least degree.[22] Although the Hamiltonian path problem is NP-hard in general, on many graphs that occur in practice this heuristic is able to successfully locate a solution in linear time.[20] The knight's tour is such a special case.[23] The heuristic was first described in "Des Rösselsprungs einfachste und allgemeinste Lösung" by H. C.
von Warnsdorf in 1823.[23] A computer program that finds a knight's tour for any starting position using Warnsdorf's rule was written by Gordon Horsington and published in 1984 in the book Century/Acorn User Book of Computer Puzzles.[24] The knight's tour problem also lends itself to being solved by a neural network implementation.[25] The network is set up so that every legal knight's move is represented by a neuron, and each neuron is initialized randomly to be either "active" or "inactive" (output of 1 or 0), with 1 implying that the neuron is part of the solution. Each neuron also has a state function (described below) which is initialized to 0. When the network is allowed to run, each neuron can change its state and output based on the states and outputs of its neighbors (those exactly one knight's move away) according to the following transition rules:

U_{t+1}(N_{i,j}) = U_t(N_{i,j}) + 2 − Σ_{N ∈ G(N_{i,j})} V_t(N)

V_{t+1}(N_{i,j}) = 1 if U_{t+1}(N_{i,j}) > 3, 0 if U_{t+1}(N_{i,j}) < 0, and V_t(N_{i,j}) otherwise,

where t represents discrete intervals of time, U(N_{i,j}) is the state of the neuron connecting square i to square j, V(N_{i,j}) is the output of the neuron from i to j, and G(N_{i,j}) is the set of neighbors of the neuron. Although divergent cases are possible, the network should eventually converge, which occurs when no neuron changes its state from time t to t + 1. When the network converges, either the network encodes a knight's tour or a series of two or more independent circuits within the same board.
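Warnsdorf's rule as described above translates directly into code. The following is a minimal sketch (ties between equally constrained squares are broken arbitrarily here, so on some boards and starting squares it can fail, which is why the more careful tie-breaking rules mentioned above exist):

```python
MOVES = [(1, 2), (2, 1), (2, -1), (1, -2),
         (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def warnsdorff_tour(n=8, start=(0, 0)):
    """Attempt a knight's tour on an n x n board using Warnsdorf's rule."""
    visited = {start}
    tour = [start]

    def onward_moves(square):
        r, c = square
        return [(r + dr, c + dc) for dr, dc in MOVES
                if 0 <= r + dr < n and 0 <= c + dc < n
                and (r + dr, c + dc) not in visited]

    current = start
    for _ in range(n * n - 1):
        candidates = onward_moves(current)
        if not candidates:
            return None  # the heuristic got stuck on this board/start square
        # Warnsdorf's rule: move to the square with the fewest onward moves.
        current = min(candidates, key=lambda sq: len(onward_moves(sq)))
        visited.add(current)
        tour.append(current)
    return tour

tour = warnsdorff_tour()
print(len(tour) if tour else "stuck")  # 64 squares visited on success
```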
https://en.wikipedia.org/wiki/Knight%27s_tour