During the 2010s, international media reports revealed new operational details about the Anglophone cryptographic agencies' global surveillance[1] of both foreign and domestic nationals. The reports mostly relate to top secret documents leaked by ex-NSA contractor Edward Snowden. The documents consist of intelligence files relating to the U.S. and other Five Eyes countries.[2] In June 2013, the first of Snowden's documents were published, with further selected documents released to various news outlets through the year. These media reports disclosed several secret treaties signed by members of the UKUSA community in their efforts to implement global surveillance. For example, Der Spiegel revealed how the German Federal Intelligence Service (German: Bundesnachrichtendienst; BND) transfers "massive amounts of intercepted data to the NSA",[3] while Swedish Television revealed that the National Defence Radio Establishment (FRA) provided the NSA with data from its cable collection, under a secret agreement signed in 1954 for bilateral cooperation on surveillance.[4] Other security and intelligence agencies involved in the practice of global surveillance include those in Australia (ASD), Britain (GCHQ), Canada (CSE), Denmark (PET), France (DGSE), Germany (BND), Italy (AISE), the Netherlands (AIVD), Norway (NIS), Spain (CNI), Switzerland (NDB) and Singapore (SID), as well as Israel (ISNU), which receives raw, unfiltered data of U.S. citizens from the NSA.[5][6][7][8][9][10][11][12]

On June 14, 2013, United States prosecutors charged Edward Snowden with espionage and theft of government property.
In late July 2013, he was granted one-year temporary asylum by the Russian government,[13] contributing to a deterioration of Russia–United States relations.[14][15] Toward the end of October 2013, the British Prime Minister David Cameron warned The Guardian not to publish any more leaks, or it would receive a DA-Notice.[16] In November 2013, a criminal investigation of the disclosure was undertaken by Britain's Metropolitan Police Service.[17] In December 2013, The Guardian editor Alan Rusbridger said: "We have published I think 26 documents so far out of the 58,000 we've seen."[18]

The extent to which the media reports responsibly informed the public is disputed. In January 2014, Obama said that "the sensational way in which these disclosures have come out has often shed more heat than light",[19] and critics such as Sean Wilentz have noted that many of the Snowden documents do not concern domestic surveillance.[20] The U.S. and British defense establishments weighed the strategic harm of the disclosures more heavily than their civic benefit to the public. In its first assessment of these disclosures, the Pentagon concluded that Snowden had committed the biggest "theft" of U.S. secrets in the history of the United States.[21] Sir David Omand, a former director of GCHQ, described Snowden's disclosure as the "most catastrophic loss to British intelligence ever".[22]

Snowden obtained the documents while working for Booz Allen Hamilton, one of the largest contractors for defense and intelligence in the United States.[2] The initial simultaneous publication in June 2013 by The Washington Post and The Guardian[23] continued throughout 2013.
A small portion of the estimated full cache of documents was later published by other media outlets worldwide, most notably The New York Times (United States), the Canadian Broadcasting Corporation, the Australian Broadcasting Corporation, Der Spiegel (Germany), O Globo (Brazil), Le Monde (France), L'espresso (Italy), NRC Handelsblad (the Netherlands), Dagbladet (Norway), El País (Spain), and Sveriges Television (Sweden).[24]

Barton Gellman, a Pulitzer Prize–winning journalist who led The Washington Post's coverage of Snowden's disclosures, summarized the leaks as follows: "Taken together, the revelations have brought to light a global surveillance system that cast off many of its historical restraints after the attacks of Sept. 11, 2001. Secret legal authorities empowered the NSA to sweep in the telephone, Internet and location records of whole populations."

The disclosures revealed specific details of the NSA's close cooperation with U.S. federal agencies such as the Federal Bureau of Investigation (FBI)[26][27] and the Central Intelligence Agency (CIA),[28][29] in addition to the agency's previously undisclosed financial payments to numerous commercial partners and telecommunications companies,[30][31][32] its previously undisclosed relationships with international partners such as Britain,[33][34] France[10][35] and Germany,[3][36] and its secret treaties with foreign governments, recently established for sharing intercepted data of each other's citizens.[5][37][38][39] The disclosures were made public over the course of several months from June 2013 by the press in several nations, drawing from the trove leaked by the former NSA contractor Edward J.
Snowden,[40] who obtained the trove while working for Booz Allen Hamilton.[2]

George Brandis, the Attorney-General of Australia, asserted that Snowden's disclosure was the "most serious setback for Western intelligence since the Second World War".[41]

As of December 2013, numerous global surveillance programs had been revealed. The NSA was also getting data directly from telecommunications companies code-named Artifice (Verizon), Lithium (AT&T), Serenade, SteelKnight, and X. The real identities of the companies behind these code names were not included in the Snowden document dump because they were protected as Exceptionally Controlled Information, which prevents wide circulation even to those (like Snowden) who otherwise have the necessary security clearance.[64][65]

Although the exact size of Snowden's disclosure remains unknown, various estimates have been put forward by government officials. As a contractor of the NSA, Snowden was granted access to U.S. government documents along with top secret documents of several allied governments, via the exclusive Five Eyes network.[68] Snowden claims that he currently does not physically possess any of these documents, having surrendered all copies to journalists he met in Hong Kong.[69]

According to his lawyer, Snowden has pledged not to release any documents while in Russia, leaving the responsibility for further disclosures solely to journalists.[70] As of 2014, the following news outlets have accessed some of the documents provided by Snowden: Australian Broadcasting Corporation, Canadian Broadcasting Corporation, Channel 4, Der Spiegel, El País, El Mundo, L'espresso, Le Monde, NBC, NRC Handelsblad, Dagbladet, O Globo, South China Morning Post, Süddeutsche Zeitung, Sveriges Television, The Guardian, The New York Times, and The Washington Post.
In the 1970s, NSA analyst Perry Fellwock (under the pseudonym "Winslow Peck") revealed the existence of the UKUSA Agreement, which forms the basis of the ECHELON network, whose existence was revealed in 1988 by Lockheed employee Margaret Newsham.[71][72] Months before the September 11 attacks and during their aftermath, further details of the global surveillance apparatus were provided by various individuals such as the former MI5 official David Shayler and the journalist James Bamford,[73][74] among others.

In the aftermath of Snowden's revelations, the Australian coalition government described the leaks as the most damaging blow dealt to Australian intelligence in history.[41]

In April 2012, NSA contractor Edward Snowden began downloading documents.[87] That year, Snowden made his first contact with journalist Glenn Greenwald, then employed by The Guardian, and he contacted documentary filmmaker Laura Poitras in January 2013.[88][89]

In May 2013, Snowden went on temporary leave from his position at the NSA, citing the pretext of receiving treatment for his epilepsy. He traveled from Hawaii to Hong Kong at the end of May.[90][91] After the U.S.-based editor of The Guardian, Janine Gibson, held several meetings in New York City, Greenwald, Poitras and The Guardian's defence and intelligence correspondent Ewen MacAskill flew to Hong Kong to meet Snowden.
On June 5, in the first media report based on the leaked material,[92] The Guardian exposed a top secret court order showing that the NSA had collected phone records from over 120 million Verizon subscribers.[93] Under the order, the numbers of both parties on a call, as well as the location data, unique identifiers, time of call, and duration of call, were handed over to the FBI, which turned over the records to the NSA.[93] According to The Wall Street Journal, the Verizon order is part of a controversial data program, which seeks to stockpile records on all calls made in the U.S., but does not collect information directly from T-Mobile US and Verizon Wireless, in part because of their foreign ownership ties.[94]

On June 6, 2013, the second media disclosure, the revelation of the PRISM surveillance program (which collects the e-mail, voice, text and video chats of foreigners and an unknown number of Americans from Microsoft, Google, Facebook, Yahoo, Apple and other tech giants),[95][96][97][98] was published simultaneously by The Guardian and The Washington Post.[86][99]

Der Spiegel revealed NSA spying on multiple diplomatic missions of the European Union and the United Nations Headquarters in New York.[100][101] During specific episodes within a four-year period, the NSA hacked several Chinese mobile-phone companies,[102] the Chinese University of Hong Kong and Tsinghua University in Beijing,[103] and the Asian fiber-optic network operator Pacnet.[104] Only Australia, Canada, New Zealand and the UK are explicitly exempted from NSA attacks, whose main target in the European Union is Germany.[105] A method of bugging encrypted fax machines used at an EU embassy is codenamed Dropmire.[106]

During the 2009 G-20 London summit, the British intelligence agency Government Communications Headquarters (GCHQ) intercepted the communications of foreign diplomats.[107] In addition, GCHQ has been intercepting and storing mass quantities of fiber-optic traffic via Tempora.[108] Two principal components of Tempora are
called "Mastering the Internet" (MTI) and "Global Telecoms Exploitation".[109] The data is preserved for three days, while metadata is kept for thirty days.[110] Data collected by GCHQ under Tempora is shared with the National Security Agency in the United States.[109]

From 2001 to 2011, the NSA collected vast amounts of metadata records detailing the email and internet usage of Americans via Stellar Wind,[111] which was later terminated due to operational and resource constraints. It was subsequently replaced by newer surveillance programs such as ShellTrumpet, which "processed its one trillionth metadata record" by the end of December 2012.[112]

The NSA follows specific procedures to target non-U.S. persons[113] and to minimize data collection from U.S. persons;[114] these policies have been approved by the court.[115][116]

According to Boundless Informant, over 97 billion pieces of intelligence were collected over a 30-day period ending in March 2013. Of those 97 billion sets of information, about 3 billion data sets originated from U.S. computer networks[117] and around 500 million metadata records were collected from German networks.[118]

In August 2013, it was revealed that the Bundesnachrichtendienst (BND) of Germany transfers massive amounts of metadata records to the NSA.[119]

Der Spiegel disclosed that of all 27 member states of the European Union, Germany is the most targeted, due to the NSA's systematic monitoring and storage of Germany's telephone and Internet connection data. According to the magazine, the NSA stores data from around half a billion communications connections in Germany each month. This data includes telephone calls, emails, mobile-phone text messages and chat transcripts.[120]

The NSA gained massive amounts of information captured from the monitored data traffic in Europe. For example, in December 2012, the NSA gathered on an average day metadata from some 15 million telephone connections and 10 million Internet datasets.
The NSA also monitored the European Commission in Brussels and EU diplomatic facilities in Washington and at the United Nations, by placing bugs in offices as well as infiltrating computer networks.[121]

As part of its UPSTREAM data collection program, the U.S. government made deals with companies to ensure that it had access to, and hence the capability to surveil, undersea fiber-optic cables that deliver e-mails, Web pages, other electronic communications and phone calls from one continent to another at the speed of light.[122][123]

According to the Brazilian newspaper O Globo, the NSA spied on millions of emails and calls of Brazilian citizens,[124][125] while Australia and New Zealand have been involved in the joint operation of the NSA's global analytical system XKeyscore.[126][127] Among the numerous allied facilities contributing to XKeyscore are four installations in Australia and one in New Zealand. O Globo released an NSA document titled "Primary FORNSAT Collection Operations", which revealed the specific locations and codenames of the FORNSAT intercept stations in 2002.[128]

According to Edward Snowden, the NSA has established secret intelligence partnerships with many Western governments.[127] The Foreign Affairs Directorate (FAD) of the NSA is responsible for these partnerships, which, according to Snowden, are organized such that foreign governments can "insulate their political leaders" from public outrage in the event that these global surveillance partnerships are leaked.[129]

In an interview published by Der Spiegel, Snowden accused the NSA of being "in bed together with the Germans".[130] The NSA granted the German intelligence agencies BND (foreign intelligence) and BfV (domestic intelligence) access to its controversial XKeyscore system.[131] In return, the BND turned over copies of two systems named Mira4 and Veras, reported to exceed the NSA's SIGINT capabilities in certain areas.[3] Every day, massive amounts of metadata records are collected by the BND and
transferred to the NSA via the Bad Aibling Station near Munich, Germany.[3] In December 2012 alone, the BND handed over 500 million metadata records to the NSA.[132][133]

In a document dated January 2013, the NSA acknowledged the efforts of the BND to undermine privacy laws: "The BND has been working to influence the German government to relax interpretation of the privacy laws to provide greater opportunities of intelligence sharing."[133]

According to an NSA document dated April 2013, Germany has now become the NSA's "most prolific partner".[133] Under a section of a separate document leaked by Snowden titled "Success Stories", the NSA acknowledged the efforts of the German government to expand the BND's international data sharing with partners: "The German government modifies its interpretation of the G-10 privacy law ... to afford the BND more flexibility in sharing protected information with foreign partners."[49]

In addition, the German government was well aware of the PRISM surveillance program long before Edward Snowden made details public. According to Angela Merkel's spokesman Steffen Seibert, there are two separate PRISM programs – one is used by the NSA and the other is used by NATO forces in Afghanistan.[134] The two programs are "not identical".[134]

The Guardian revealed further details of the NSA's XKeyscore tool, which allows government analysts to search through vast databases containing the emails, online chats and browsing histories of millions of individuals without prior authorization.[135][136][137] Microsoft "developed a surveillance capability to deal" with the interception of encrypted chats on Outlook.com, within five months after the service went into testing. The NSA had access to Outlook.com emails because "Prism collects this data prior to encryption."[45] In addition, Microsoft worked with the FBI to enable the NSA to gain access to its cloud storage service SkyDrive.
An internal NSA document dating from August 3, 2012, described the PRISM surveillance program as a "team sport".[45]

The CIA's National Counterterrorism Center is allowed to examine federal government files for possible criminal behavior, even if there is no reason to suspect U.S. citizens of wrongdoing. Previously, the NCTC was barred from doing so unless a person was a terror suspect or related to an investigation.[138]

Snowden also confirmed that Stuxnet was cooperatively developed by the United States and Israel.[139] In a report unrelated to Edward Snowden, the French newspaper Le Monde revealed that France's DGSE was also undertaking mass surveillance, which it described as "illegal and outside any serious control".[140][141]

Documents leaked by Edward Snowden that were seen by Süddeutsche Zeitung (SZ) and Norddeutscher Rundfunk revealed that several telecom operators have played a key role in helping the British intelligence agency Government Communications Headquarters (GCHQ) tap into worldwide fiber-optic communications. Each operator was assigned a particular area of the international fiber-optic network for which it was individually responsible.
The following networks have been infiltrated by GCHQ: TAT-14 (EU-UK-US), Atlantic Crossing 1 (EU-UK-US), Circe South (France-UK), Circe North (Netherlands-UK), Flag Atlantic-1, Flag Europa-Asia, SEA-ME-WE 3 (Southeast Asia-Middle East-Western Europe), SEA-ME-WE 4 (Southeast Asia-Middle East-Western Europe), Solas (Ireland-UK), UK-France 3, UK-Netherlands 14, ULYSSES (EU-UK), Yellow (UK-US) and Pan European Crossing (EU-UK).[143]

Telecommunication companies that participated were "forced" to do so and had "no choice in the matter".[143] Some of the companies were subsequently paid by GCHQ for their participation in the infiltration of the cables.[143] According to the SZ, GCHQ has access to the majority of internet and telephone communications flowing throughout Europe, can listen to phone calls, read emails and text messages, and see which websites internet users from all around the world are visiting. It can also retain and analyse nearly the entire European internet traffic.[143]

GCHQ is collecting all data transmitted to and from the United Kingdom and Northern Europe via the undersea fibre-optic telecommunications cable SEA-ME-WE 3. The Security and Intelligence Division (SID) of Singapore co-operates with Australia in accessing and sharing communications carried by the SEA-ME-WE 3 cable. The Australian Signals Directorate (ASD) is also in a partnership with British, American and Singaporean intelligence agencies to tap undersea fibre-optic telecommunications cables that link Asia, the Middle East and Europe and carry much of Australia's international phone and internet traffic.[144]

The U.S. runs a top-secret surveillance program known as the Special Collection Service (SCS), which is based in over 80 U.S.
consulates and embassies worldwide.[145][146] The NSA hacked the United Nations' video conferencing system in the summer of 2012, in violation of a UN agreement.[145][146]

The NSA is not just intercepting the communications of Americans who are in direct contact with foreigners targeted overseas; it is also searching the contents of vast amounts of e-mail and text communications into and out of the country by Americans who mention information about foreigners under surveillance.[147] It also spied on Al Jazeera and gained access to its internal communications systems.[148]

The NSA has built a surveillance network that has the capacity to reach roughly 75% of all U.S. Internet traffic.[149][150][151] U.S. law-enforcement agencies use tools developed by computer hackers to gather information on suspects.[152][153] An internal NSA audit from May 2012 identified 2,776 incidents, i.e. violations of the rules or court orders for surveillance of Americans and foreign targets in the U.S., in the period from April 2011 through March 2012, while U.S. officials stressed that any mistakes were not intentional.[154][155][156][157]

The FISA Court, which is supposed to provide critical oversight of the U.S. government's vast spying programs, has limited ability to do so and must trust the government to report when it improperly spies on Americans.[158] A legal opinion declassified on August 21, 2013, revealed that the NSA had for three years intercepted as many as 56,000 electronic communications a year of Americans not suspected of having links to terrorism, before the FISA court that oversees surveillance found the operation unconstitutional in 2011.[159][160][161][162] Under the Corporate Partner Access project, major U.S.
telecommunications providers receive hundreds of millions of dollars each year from the NSA.[163] Voluntary cooperation between the NSA and the providers of global communications took off during the 1970s under the cover name BLARNEY.[163]

A letter drafted by the Obama administration specifically to inform Congress of the government's mass collection of Americans' telephone communications data was withheld from lawmakers by leaders of the House Intelligence Committee in the months before a key vote affecting the future of the program.[164][165]

The NSA paid GCHQ over £100 million between 2009 and 2012; in exchange for these funds, GCHQ "must pull its weight and be seen to pull its weight." Documents referenced in the article explain that the weaker British laws regarding spying are "a selling point" for the NSA. GCHQ is also developing the technology to "exploit any mobile phone at any time."[166] Under a legal authority, the NSA has a secret backdoor into its databases gathered from large Internet companies, enabling it to search for U.S. citizens' email and phone calls without a warrant.[167][168]

The Privacy and Civil Liberties Oversight Board urged the U.S. intelligence chiefs to draft stronger U.S. guidelines on domestic spying after finding that several of those guidelines had not been updated in up to 30 years.[169][170] U.S.
intelligence analysts have deliberately broken rules designed to prevent them from spying on Americans, by choosing to ignore so-called "minimisation procedures" aimed at protecting privacy,[171][172] and have used the agency's enormous eavesdropping power to spy on love interests.[173]

After the U.S. Foreign Intelligence Surveillance Court ruled in October 2011 that some of the NSA's activities were unconstitutional, the agency paid millions of dollars to major internet companies to cover the extra costs incurred in their involvement with the PRISM surveillance program.[174]

"Mastering the Internet" (MTI) is part of the Interception Modernisation Programme (IMP) of the British government, which involves the insertion of thousands of DPI (deep packet inspection) "black boxes" at various internet service providers, as revealed by the British media in 2009.[175] In 2013, it was further revealed that the NSA had made a £17.2 million financial contribution to the project, which is capable of vacuuming up signals from up to 200 fibre-optic cables at all physical points of entry into Great Britain.[176]

The Guardian and The New York Times reported on secret documents leaked by Snowden showing that the NSA has been in "collaboration with technology companies" as part of "an aggressive, multipronged effort" to weaken the encryption used in commercial software, and that GCHQ has a team dedicated to cracking "Hotmail, Google, Yahoo and Facebook" traffic.[183]

Germany's domestic security agency Bundesverfassungsschutz (BfV) systematically transfers the personal data of German residents to the NSA, CIA and seven other members of the United States Intelligence Community, in exchange for information and espionage software.[184][185][186] Israel, Sweden and Italy are also cooperating with American and British intelligence agencies.
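What distinguishes the DPI "black boxes" described above from ordinary routing equipment is that they examine packet payloads, not just addressing headers. The core idea can be illustrated with a toy selector-matching pass; the selectors, addresses and traffic below are entirely invented for illustration:

```python
# Toy deep-packet-inspection pass: scan packet payloads (not just
# headers) for selector strings. All data here is invented.
SELECTORS = [b"target@example.org", b"codeword"]

def inspect(packets: list[tuple[str, bytes]]) -> list[str]:
    """Return source addresses whose payload matches any selector."""
    hits = []
    for src, payload in packets:
        if any(sel in payload for sel in SELECTORS):
            hits.append(src)
    return hits

traffic = [
    ("10.0.0.1", b"GET /index.html HTTP/1.1"),
    ("10.0.0.2", b"MAIL FROM:<target@example.org>"),
    ("10.0.0.3", b"nothing of interest"),
]
print(inspect(traffic))  # ['10.0.0.2']
```

Real DPI hardware does this at line rate against reassembled TCP streams and large selector sets, but the principle is the same: content, not just metadata, drives the match.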
Under a secret treaty codenamed "Lustre", French intelligence agencies transferred millions of metadata records to the NSA.[62][63][187][188]

The Obama administration secretly won permission from the Foreign Intelligence Surveillance Court in 2011 to reverse restrictions on the National Security Agency's use of intercepted phone calls and e-mails, permitting the agency to search deliberately for Americans' communications in its massive databases. The searches take place under a surveillance program Congress authorized in 2008 under Section 702 of the Foreign Intelligence Surveillance Act. Under that law, the target must be a foreigner "reasonably believed" to be outside the United States, and the court must approve the targeting procedures in an order good for one year. A warrant for each target would thus no longer be required. That means that communications with Americans could be picked up without a court first determining that there is probable cause that the people they were talking to were terrorists, spies or "foreign powers." The FISC extended the length of time that the NSA is allowed to retain intercepted U.S. communications from five years to six years, with an extension possible for foreign intelligence or counterintelligence purposes.
Both measures were taken without public debate or any specific authority from Congress.[189]

A special branch of the NSA called "Follow the Money" (FTM) monitors international payments, banking and credit card transactions and stores the collected data in the NSA's own financial databank, "Tracfin".[190] The NSA monitored the communications of Brazil's president Dilma Rousseff and her top aides.[191] The agency also spied on Brazil's oil firm Petrobras as well as French diplomats, and gained access to the private network of the Ministry of Foreign Affairs of France and the SWIFT network.[192]

In the United States, the NSA uses the analysis of the phone call and e-mail logs of American citizens to create sophisticated graphs of their social connections that can identify their associates, their locations at certain times, their traveling companions and other personal information.[193] The NSA routinely shares raw intelligence data with Israel without first sifting it to remove information about U.S. citizens.[5][194]

In an effort codenamed GENIE, computer specialists can control foreign computer networks using "covert implants", a form of remotely transmitted malware, on tens of thousands of devices annually.[195][196][197][198] As worldwide sales of smartphones began exceeding those of feature phones, the NSA decided to take advantage of the smartphone boom.
The smartphone is particularly advantageous to an intelligence agency because it combines a myriad of data of interest, such as social contacts, user behavior, interests, location, photos, credit card numbers and passwords.[199]

An internal NSA report from 2010 stated that the spread of the smartphone has been occurring "extremely rapidly"—developments that "certainly complicate traditional target analysis."[199] According to the document, the NSA has set up task forces assigned to several smartphone manufacturers and operating systems, including Apple Inc.'s iPhone and iOS operating system, as well as Google's Android mobile operating system.[199] Similarly, Britain's GCHQ assigned a team to study and crack the BlackBerry.[199]

Under the heading "iPhone capability", the document notes that there are smaller NSA programs, known as "scripts", that can perform surveillance on 38 different features of the iOS 3 and iOS 4 operating systems. These include the mapping feature, voicemail and photos, as well as Google Earth, Facebook and Yahoo! Messenger.[199]

On September 9, 2013, an internal NSA presentation on iPhone Location Services was published by Der Spiegel. One slide shows scenes from Apple's 1984-themed television commercial alongside the words "Who knew in 1984..."; another shows Steve Jobs holding an iPhone, with the text "...that this would be big brother..."; and a third shows happy consumers with their iPhones, completing the question with "...and the zombies would be paying customers?"[200]

On October 4, 2013, The Washington Post and The Guardian jointly reported that the NSA and GCHQ had made repeated attempts to spy on anonymous Internet users who had been communicating in secret via the anonymity network Tor. Several of these surveillance operations involved the implantation of malicious code into the computers of Tor users who visited particular websites. The NSA and GCHQ had partly succeeded in blocking access to the anonymous network, diverting Tor users to insecure channels.
The government agencies were also able to uncover the identities of some anonymous Internet users.[201][202][203][204]

The Communications Security Establishment (CSE) has been using a program called Olympia to map the communications of Brazil's Mines and Energy Ministry by targeting the metadata of phone calls and emails to and from the ministry.[205][206]

The Australian federal government knew about the PRISM surveillance program months before Edward Snowden made details public.[207][208]

The NSA gathered hundreds of millions of contact lists from personal e-mail and instant messaging accounts around the world. The agency did not target individuals; instead, it collected contact lists in large numbers that amount to a sizable fraction of the world's e-mail and instant messaging accounts. Analysis of that data enables the agency to search for hidden connections and to map relationships within a much smaller universe of foreign intelligence targets.[209][210][211][212]

The NSA monitored the public email account of former Mexican president Felipe Calderón (thus gaining access to the communications of high-ranking cabinet members), the emails of several high-ranking members of Mexico's security forces, and the text and mobile phone communications of Mexican president Enrique Peña Nieto.[213][214] The NSA tries to gather cellular and landline phone numbers, often obtained from American diplomats, for as many foreign officials as possible. The contents of the phone calls are stored in computer databases that can regularly be searched using keywords.[215][216]

The NSA has been monitoring the telephone conversations of 35 world leaders.[217] The U.S. government's first public acknowledgment that it tapped the phones of world leaders was reported on October 28, 2013, by The Wall Street Journal after an internal U.S.
government review turned up NSA monitoring of some 35 world leaders.[218] GCHQ has tried to keep its mass surveillance program a secret because it feared a "damaging public debate" on the scale of its activities, which could lead to legal challenges against them.[219]

The Guardian revealed that the NSA had been monitoring the telephone conversations of 35 world leaders after being given the numbers by an official in another U.S. government department. A confidential memo revealed that the NSA encouraged senior officials in such departments as the White House, State and the Pentagon to share their "Rolodexes" so the agency could add the telephone numbers of leading foreign politicians to its surveillance systems. Reacting to the news, German leader Angela Merkel, arriving in Brussels for an EU summit, accused the U.S. of a breach of trust, saying: "We need to have trust in our allies and partners, and this must now be established once again. I repeat that spying among friends is not at all acceptable against anyone, and that goes for every citizen in Germany."[217] In 2010, the NSA collected data on ordinary Americans' cellphone locations, but later discontinued the practice because it had no "operational value".[220]

Under Britain's MUSCULAR programme, the NSA and GCHQ have secretly broken into the main communications links that connect Yahoo and Google data centers around the world, and have thereby gained the ability to collect metadata and content at will from hundreds of millions of user accounts.[221][222][223][224]

The mobile phone of German Chancellor Angela Merkel might have been tapped by U.S. intelligence.[225][226][227][228] According to Der Spiegel, this monitoring goes back to 2002[229][230] and ended in the summer of 2013,[218] while The New York Times reported that Germany has evidence that the NSA's surveillance of Merkel began during George W.
Bush's tenure.[231] After learning from Der Spiegel magazine that the NSA had been listening in on her personal mobile phone, Merkel compared the snooping practices of the NSA with those of the Stasi.[232] In March 2014, Der Spiegel reported that Merkel had also been placed on an NSA surveillance list alongside 122 other world leaders.[233]

On October 31, 2013, Hans-Christian Ströbele, a member of the German Bundestag who visited Snowden in Russia, reported on Snowden's willingness to provide details of the NSA's espionage program.[234]

A highly sensitive signals intelligence collection program known as Stateroom involves the interception of radio, telecommunications and internet traffic. It is operated out of the diplomatic missions of the Five Eyes (Australia, Britain, Canada, New Zealand, the United States) in numerous locations around the world. The program conducted at U.S. diplomatic missions is run jointly by the U.S. intelligence agencies NSA and CIA in a joint venture group called the "Special Collection Service" (SCS), whose members work undercover in shielded areas of American embassies and consulates, where they are officially accredited as diplomats and as such enjoy special privileges. Under diplomatic protection, they are able to look and listen unhindered. The SCS, for example, used the American embassy near the Brandenburg Gate in Berlin to monitor communications in Germany's government district, with its parliament and the seat of the government.[228][235][236][237]

Under the Stateroom surveillance programme, Australia operates clandestine surveillance facilities to intercept phone calls and data across much of Asia.[236][238]

In France, the NSA targeted people belonging to the worlds of business, politics or the French state administration. The NSA monitored and recorded the content of telephone communications and the history of the connections of each target, i.e.
the metadata.[239][240] The actual surveillance operation was performed by French intelligence agencies on behalf of the NSA.[62][241] The cooperation between France and the NSA was confirmed by the Director of the NSA, Keith B. Alexander, who asserted that foreign intelligence services collected phone records in "war zones" and "other areas outside their borders" and provided them to the NSA.[242]

The French newspaper Le Monde also disclosed new PRISM and Upstream slides (see pages 4, 7 and 8) coming from the "PRISM/US-984XN Overview" presentation.[243]

In Spain, the NSA intercepted the telephone conversations, text messages and emails of millions of Spaniards, and spied on members of the Spanish government.[244] Between December 10, 2012, and January 8, 2013, the NSA collected metadata on 60 million telephone calls in Spain.[245] According to documents leaked by Snowden, the surveillance of Spanish citizens was jointly conducted by the NSA and the intelligence agencies of Spain.[246][247]

The New York Times reported that the NSA carries out an eavesdropping effort, dubbed Operation Dreadnought, against the Iranian leader Ayatollah Ali Khamenei. During his 2009 visit to Iranian Kurdistan, the agency collaborated with GCHQ and the U.S. National Geospatial-Intelligence Agency, collecting radio transmissions between aircraft and airports, examining Khamenei's convoy with satellite imagery, and enumerating military radar stations. According to the story, an objective of the operation is "communications fingerprinting": the ability to distinguish Khamenei's communications from those of other people in Iran.[248]

The same story revealed an operation code-named Ironavenger, in which the NSA intercepted e-mails sent between a country allied with the United States and the government of "an adversary". The ally was conducting a spear-phishing attack: its e-mails contained malware.
The NSA gathered documents and login credentials belonging to the enemy country, along with knowledge of the ally's capabilities for attacking computers.[248]

According to the British newspaper The Independent, the British intelligence agency GCHQ maintains a listening post on the roof of the British Embassy in Berlin that is capable of intercepting mobile phone calls, wi-fi data and long-distance communications all over the German capital, including adjacent government buildings such as the Reichstag (seat of the German parliament) and the Chancellery (seat of Germany's head of government) clustered around the Brandenburg Gate.[249]

Operating under the code-name "Quantum Insert", GCHQ set up a fake website masquerading as LinkedIn, a social website used for professional networking, as part of its efforts to install surveillance software on the computers of the telecommunications operator Belgacom.[250][251][252] In addition, the headquarters of the oil cartel OPEC were infiltrated by GCHQ as well as the NSA, which bugged the computers of nine OPEC employees and monitored the General Secretary of OPEC.[250]

For more than three years, GCHQ has been using an automated monitoring system code-named "Royal Concierge" to infiltrate the reservation systems of at least 350 prestigious hotels in many different parts of the world in order to target, search and analyze reservations to detect diplomats and government officials.[253] First tested in 2010, the aim of "Royal Concierge" is to track down the travel plans of diplomats, and it is often supplemented with surveillance methods related to human intelligence (HUMINT).
Other covert operations include the wiretapping of room telephones and fax machines used in targeted hotels, as well as the monitoring of computers hooked up to the hotel network.[253]

In November 2013, the Australian Broadcasting Corporation and The Guardian revealed that the Australian Signals Directorate (DSD) had attempted to listen to the private phone calls of the president of Indonesia and his wife. The Indonesian foreign minister, Marty Natalegawa, confirmed that he and the president had contacted the ambassador in Canberra. Natalegawa said any tapping of Indonesian politicians' personal phones "violates every single decent and legal instrument I can think of—national in Indonesia, national in Australia, international as well".[254]

Other high-ranking Indonesian politicians targeted by the DSD include:

Carrying the title "3G impact and update", a classified presentation leaked by Snowden revealed the attempts of the ASD/DSD to keep pace with the rollout of 3G technology in Indonesia and across Southeast Asia. The ASD/DSD motto placed at the bottom of each page reads: "Reveal their secrets—protect our own."[255]

Under a secret deal approved by British intelligence officials, the NSA has been storing and analyzing the internet and email records of British citizens since 2007. The NSA also proposed in 2005 a procedure for spying on the citizens of the UK and the other Five Eyes nations, even where the partner government has explicitly denied the U.S. permission to do so.
Under the proposal, partner countries must be informed neither about this particular type of surveillance nor about the procedure for carrying it out.[37]

Toward the end of November, The New York Times released an internal NSA report outlining the agency's efforts to expand its surveillance abilities.[256] The five-page document asserts that the law of the United States has not kept up with the needs of the NSA to conduct mass surveillance in the "golden age" of signals intelligence, but there are grounds for optimism because, in the NSA's own words:

The culture of compliance, which has allowed the American people to entrust NSA with extraordinary authorities, will not be compromised in the face of so many demands, even as we aggressively pursue legal authorities...[257]

The report, titled "SIGINT Strategy 2012–2016", also said that the U.S. will try to influence the "global commercial encryption market" through "commercial relationships", and emphasized the need to "revolutionize" the analysis of its vast data collection to "radically increase operational impact".[256]

On November 23, 2013, the Dutch newspaper NRC Handelsblad reported that the Netherlands was targeted by U.S. intelligence agencies in the immediate aftermath of World War II. This period of surveillance lasted from 1946 to 1968, and also included the interception of the communications of other European countries, including Belgium, France, West Germany and Norway.[258] The Dutch newspaper also reported that the NSA had infected more than 50,000 computer networks worldwide, often covertly, with malicious spy software designed to steal sensitive information, sometimes in cooperation with local authorities.[40][259]

According to the classified documents leaked by Snowden, the Australian Signals Directorate (ASD), formerly known as the Defence Signals Directorate, had offered to share intelligence information it had collected with the other intelligence agencies of the UKUSA Agreement.
Data shared with foreign countries included "bulk, unselected, unminimized metadata". The ASD provided such information on the condition that no Australian citizens were targeted. At the time, the ASD assessed that "unintentional collection [of metadata of Australian nationals] is not viewed as a significant issue". If a target was later identified as an Australian national, the ASD was required to be contacted to ensure that a warrant could be sought. Consideration was given as to whether "medical, legal or religious information" would be automatically treated differently from other types of data; however, a decision was made that each agency would make such determinations on a case-by-case basis.[260] The leaked material does not specify where the ASD had collected the intelligence information from; however, Section 7(a) of the Intelligence Services Act 2001 (Commonwealth) states that the ASD's role is "...to obtain intelligence about the capabilities, intentions or activities of people or organizations outside Australia...".[261] As such, it is possible that the ASD's metadata intelligence holdings were focused on foreign intelligence collection and were within the bounds of Australian law.

The Washington Post revealed that the NSA has been tracking the locations of mobile phones from all over the world by tapping into the cables that connect mobile networks globally and that serve U.S. cellphones as well as foreign ones. In the process of doing so, the NSA collects more than five billion records of phone locations on a daily basis.
This enables NSA analysts to map cellphone owners' relationships by correlating their patterns of movement over time with those of thousands or millions of other phone users who cross their paths.[262][263][264][265] The Washington Post also reported that both GCHQ and the NSA make use of location data and advertising tracking files generated through normal internet browsing (with cookies operated by Google, known as "Pref") to pinpoint targets for government hacking and to bolster surveillance.[266][267][268]

The Norwegian Intelligence Service (NIS), which cooperates with the NSA, has gained access to Russian targets in the Kola Peninsula and other civilian targets. In general, the NIS provides information to the NSA about "Politicians", "Energy" and "Armament".[269] A top secret NSA memo lists the following years as milestones of the Norway–United States of America SIGINT agreement, or NORUS Agreement:

The NSA considers the NIS to be one of its most reliable partners. Both agencies also cooperate to crack the encryption systems of mutual targets. According to the NSA, Norway has made no objections to its requests from the NIS.[270]

On December 5, Sveriges Television reported that the National Defence Radio Establishment (FRA) had been conducting a clandestine surveillance operation in Sweden, targeting the internal politics of Russia.
The operation was conducted on behalf of the NSA, which received the data handed over to it by the FRA.[271][272] The Swedish–American surveillance operation also targeted Russian energy interests as well as the Baltic states.[273] As part of the UKUSA Agreement, a secret treaty was signed in 1954 by Sweden with the United States, the United Kingdom, Canada, Australia and New Zealand regarding collaboration and intelligence sharing.[274]

As a result of Snowden's disclosures, the notion of Swedish neutrality in international politics was called into question.[275] In an internal document dating from 2006, the NSA acknowledged that its "relationship" with Sweden is "protected at the TOP SECRET level because of that nation's political neutrality."[276] Specific details of Sweden's cooperation with members of the UKUSA Agreement include:

According to documents leaked by Snowden, the Special Source Operations division of the NSA has been sharing information containing "logins, cookies, and GooglePREFID" with the Tailored Access Operations division of the NSA, as well as with Britain's GCHQ.[284]

During the 2010 G-20 Toronto summit, the U.S. embassy in Ottawa was transformed into a security command post during a six-day spying operation that was conducted by the NSA and closely coordinated with the Communications Security Establishment Canada (CSEC). The goal of the spying operation was, among other things, to obtain information on international development and banking reform, and to counter trade protectionism in support of "U.S.
policy goals."[285] On behalf of the NSA, the CSEC has set up covert spying posts in 20 countries around the world.[8]

In Italy, the Special Collection Service of the NSA maintains two separate surveillance posts, in Rome and Milan.[286] According to a secret NSA memo dated September 2010, the Italian embassy in Washington, D.C. has been targeted by two spy operations of the NSA:

Due to concerns that terrorist or criminal networks may be secretly communicating via computer games, the NSA, GCHQ, CIA, and FBI have been conducting surveillance and scooping up data from the networks of many online games, including massively multiplayer online role-playing games (MMORPGs) such as World of Warcraft, as well as virtual worlds such as Second Life, and the Xbox gaming console.[287][288][289][290]

The NSA has cracked A5/1, the most commonly used cellphone encryption technology. According to a classified document leaked by Snowden, the agency can "process encrypted A5/1" even when it has not acquired an encryption key.[291] In addition, the NSA uses various types of cellphone infrastructure, such as the links between carrier networks, to determine the location of a cellphone user tracked by visitor location registers.[292]

On December 16, 2013, Richard Leon, U.S. district court judge for the District of Columbia, declared[293][294][295][296] that the mass collection of metadata of Americans' telephone records by the National Security Agency probably violates the Fourth Amendment prohibition of unreasonable searches and seizures.[297] Leon granted the request for a preliminary injunction blocking the collection of phone data for two private plaintiffs (Larry Klayman, a conservative lawyer, and Charles Strange, father of a cryptologist killed in Afghanistan when his helicopter was shot down in 2011)[298] and ordered the government to destroy any of their records that had been gathered.
But the judge stayed action on his ruling pending a government appeal, recognizing in his 68-page opinion the "significant national security interests at stake in this case and the novelty of the constitutional issues."[297]

However, federal judge William H. Pauley III in New York City ruled[299] that the U.S. government's global telephone data-gathering system is needed to thwart potential terrorist attacks, and that it can only work if everyone's calls are swept in. U.S. District Judge Pauley also ruled that Congress legally set up the program and that it does not violate anyone's constitutional rights. The judge also concluded that the telephone data being swept up by the NSA did not belong to telephone users, but to the telephone companies. He further ruled that when the NSA obtains such data from the telephone companies, and then probes into it to find links between callers and potential terrorists, this further use of the data is not even a search under the Fourth Amendment. He also concluded that the controlling precedent is Smith v. Maryland: "Smith's bedrock holding is that an individual has no legitimate expectation of privacy in information provided to third parties," Judge Pauley wrote.[300][301][302][303] The American Civil Liberties Union declared on January 2, 2014, that it would appeal Judge Pauley's ruling that the NSA's bulk phone record collection is legal.
"The government has a legitimate interest in tracking the associations of suspected terrorists, but tracking those associations does not require the government to subject every citizen to permanent surveillance," deputy ACLU legal director Jameel Jaffer said in a statement.[304]

In recent years, American and British intelligence agencies conducted surveillance on more than 1,100 targets, including the office of an Israeli prime minister, heads of international aid organizations, foreign energy companies and a European Union official involved in antitrust battles with American technology businesses.[305]

A catalog of high-tech gadgets and software developed by the NSA's Tailored Access Operations (TAO) division was leaked by the German news magazine Der Spiegel.[306] Dating from 2008, the catalog revealed the existence of special gadgets modified to capture computer screenshots, USB flash drives secretly fitted with radio transmitters to broadcast stolen data over the airwaves, and fake base stations intended to intercept mobile phone signals, as well as many other secret devices and software implants, listed here:

The Tailored Access Operations (TAO) division of the NSA intercepted the shipping deliveries of computers and laptops in order to install spyware and physical implants on electronic gadgets. This was done in close cooperation with the FBI and the CIA.[306][307][308][309] NSA officials responded to the Spiegel reports with a statement which said: "Tailored Access Operations is a unique national asset that is on the front lines of enabling NSA to defend the nation and its allies.
[TAO's] work is centred on computer network exploitation in support of foreign intelligence collection."[310]

In a separate disclosure unrelated to Snowden, the French Trésor public, which runs a certificate authority, was found to have issued fake certificates impersonating Google in order to facilitate spying on French government employees via man-in-the-middle attacks.[311]

The NSA is working to build a powerful quantum computer capable of breaking all types of encryption.[314][315][316][317][318] The effort is part of a US$79.7 million research program known as "Penetrating Hard Targets". It involves extensive research carried out in large, shielded rooms known as Faraday cages, which are designed to prevent electromagnetic radiation from entering or leaving.[315] Currently, the NSA is close to producing basic building blocks that will allow the agency to gain "complete quantum control on two semiconductor qubits".[315] Once a quantum computer is successfully built, it would enable the NSA to unlock the encryption that protects data held by banks, credit card companies, retailers, brokerages, governments and health care providers.[314]

According to The New York Times, the NSA is monitoring approximately 100,000 computers worldwide with spy software named Quantum. Quantum enables the NSA to conduct surveillance on those computers on the one hand, and can also create a digital highway for launching cyberattacks on the other. Among the targets are the Chinese and Russian military, but also trade institutions within the European Union. The NYT also reported that the NSA can access and alter computers which are not connected to the internet, by means of a secret technology in use by the NSA since 2008. The prerequisite is the physical insertion of radio frequency hardware by a spy, a manufacturer or an unwitting user. The technology relies on a covert channel of radio waves that can be transmitted from tiny circuit boards and USB cards inserted surreptitiously into the computers.
In some cases, the signals are sent to a briefcase-size relay station that intelligence agencies can set up miles away from the target. The technology can also transmit malware back to the infected computer.[40]

Channel 4 and The Guardian revealed the existence of Dishfire, a massive NSA database that collects hundreds of millions of text messages on a daily basis.[319] GCHQ has been given full access to the database, which it uses to obtain personal information about Britons by exploiting a legal loophole.[320] Each day, the database receives and stores the following amounts of data:

The database is supplemented with an analytical tool known as the Prefer program, which processes SMS messages to extract other types of information, including contacts from missed call alerts.[321]

The Privacy and Civil Liberties Oversight Board report on mass surveillance was released on January 23, 2014. It recommends ending the bulk telephone metadata collection program (i.e., bulk phone records: phone numbers dialed, call times and durations, but not call content), creating a "Special Advocate" to be involved in some cases before the FISA court judge, and releasing future and past FISC decisions "that involve novel interpretations of FISA or other significant questions of law, technology or compliance."[322][323][324]

According to a joint disclosure by The New York Times, The Guardian, and ProPublica,[325][326][327][328] the NSA and GCHQ had begun working together to collect and store data from dozens of smartphone applications by 2007 at the latest. A 2008 GCHQ report, leaked by Snowden, asserts that "anyone using Google Maps on a smartphone is working in support of a GCHQ system".
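The kind of app-generated leakage described above is easy to illustrate: many apps of the era sent identifiers and location to ad networks over unencrypted HTTP, readable by any passive observer on the wire. The sketch below is a toy, and every field name in it is invented for illustration; the leaked documents describe the leakage itself, not this request format.

```python
from urllib.parse import urlparse, parse_qs

# A hypothetical, unencrypted ad-network request of the sort a passive
# observer could read in transit. All parameter names are made up.
request_url = (
    "http://ads.example.com/serve?app=flappy&device_id=a1b2c3d4"
    "&lat=52.5200&lon=13.4050&age=34&gender=f&zip=10117"
)

def extract_profile(url: str) -> dict:
    """Pull identifying fields out of an app's unencrypted HTTP query string."""
    fields = parse_qs(urlparse(url).query)
    # parse_qs returns lists; keep the single value for each parameter.
    return {key: values[0] for key, values in fields.items()}

profile = extract_profile(request_url)
print(profile["device_id"], profile["lat"], profile["lon"])
```

A handful of such observations, keyed on a stable device identifier, would already yield the location, demographics and habits attributed to these programs.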
The NSA and GCHQ have traded recipes for various purposes, such as grabbing location data and journey plans that are made when a target uses Google Maps, and vacuuming up address books, buddy lists, phone logs and geographic data embedded in photos posted on the mobile versions of numerous social networks such as Facebook, Flickr, LinkedIn, Twitter, and other services. In a separate 20-page report dated 2012, GCHQ cited the popular smartphone game Angry Birds as an example of how an application could be used to extract user data. Taken together, such forms of data collection would allow the agencies to collect vital information about a user's life, including his or her home country, current location (through geolocation), age, gender, ZIP code, marital status, income, ethnicity, sexual orientation, education level, number of children, etc.[329][330]

A GCHQ document dated August 2012 provided details of the Squeaky Dolphin surveillance program, which enables GCHQ to conduct broad, real-time monitoring of various social media features and social media traffic, such as YouTube video views, the Like button on Facebook, and Blogspot/Blogger visits, without the knowledge or consent of the companies providing those social media features. The agency's "Squeaky Dolphin" program can collect, analyze and utilize YouTube, Facebook and Blogger data in specific situations in real time for analysis purposes. The program also collects the addresses from the billions of videos watched daily, as well as some user information, for analysis purposes.[331][332][333]

During the 2009 United Nations Climate Change Conference in Copenhagen, the NSA and its Five Eyes partners monitored the communications of delegates of numerous countries. This was done to give their own policymakers a negotiating advantage.[334][335]

The Communications Security Establishment Canada (CSEC) has been tracking Canadian air passengers via free Wi-Fi services at a major Canadian airport.
Passengers who exited the airport terminal continued to be tracked as they showed up at other Wi-Fi locations across Canada. In a CSEC document dated May 2012, the agency described how it had gained access to two communications systems with over 300,000 users in order to pinpoint a specific imaginary target. The operation was executed on behalf of the NSA as a trial run to test a new technology capable of tracking down "any target that makes occasional forays into other cities/regions". This technology was subsequently shared with Canada's Five Eyes partners: Australia, New Zealand, Britain, and the United States.[336][337][338][339]

According to research by Süddeutsche Zeitung and the TV network NDR, the mobile phone of former German chancellor Gerhard Schröder was monitored from 2002 onward, reportedly because of his government's opposition to military intervention in Iraq. The source of the latest information is a document leaked by Edward Snowden. The document, containing information about the National Sigint Requirement List (NSRL), had previously been interpreted as referring only to Angela Merkel's mobile. However, Süddeutsche Zeitung and NDR claim to have confirmation from NSA insiders that the surveillance authorisation pertains not to the individual but to the political post, which in 2002 was still held by Schröder. According to research by the two media outlets, Schröder was placed as number 388 on the list, which contains the names of persons and institutions to be put under surveillance by the NSA.[340][341][342][343]

GCHQ launched a cyber-attack on the activist network Anonymous, using a denial-of-service attack (DoS) to shut down a chatroom frequented by the network's members and to spy on them. The attack, dubbed Rolling Thunder, was conducted by a GCHQ unit known as the Joint Threat Research Intelligence Group (JTRIG).
The unit successfully uncovered the true identities of several Anonymous members.[344][345][346][347]

The NSA's Section 215 bulk telephony metadata program, which seeks to stockpile records on all calls made in the U.S., is collecting less than 30 percent of all Americans' call records because of an inability to keep pace with the explosion in cellphone use, according to The Washington Post. The controversial program permits the NSA, after a warrant granted by the secret Foreign Intelligence Surveillance Court, to record the numbers, length and location of every call from the participating carriers.[348][349]

The Intercept reported that the U.S. government is using primarily NSA surveillance to target people for drone strikes overseas. In its report, The Intercept's authors detail the flawed methods used to locate targets for lethal drone strikes, resulting in the deaths of innocent people.[350] According to The Washington Post, NSA analysts and collectors, i.e. NSA personnel who control electronic surveillance equipment, use the NSA's sophisticated surveillance capabilities to track individual targets geographically and in real time, while drones and tactical units aim their weaponry against those targets to take them out.[351]

An unnamed US law firm, reported to be Mayer Brown, was targeted by Australia's ASD. According to Snowden's documents, the ASD had offered to hand over these intercepted communications to the NSA. This allowed government authorities to be "able to continue to cover the talks, providing highly useful intelligence for interested US customers".[352][353]

NSA and GCHQ documents revealed that the anti-secrecy organization WikiLeaks and other activist groups were targeted for government surveillance and criminal prosecution.
In particular, the IP addresses of visitors to WikiLeaks were collected in real time, and the US government urged its allies to file criminal charges against the founder of WikiLeaks, Julian Assange, due to his organization's publication of the Afghanistan war logs. The WikiLeaks organization was designated as a "malicious foreign actor".[354]

Quoting an unnamed NSA official in Germany, Bild am Sonntag reported that while President Obama's order to stop spying on Merkel was being obeyed, the focus had shifted to bugging other leading government and business figures, including Interior Minister Thomas de Maizière, a close confidant of Merkel. Caitlin Hayden, a security adviser to President Obama, was quoted in the newspaper report as saying: "The US has made clear it gathers intelligence in exactly the same way as any other states."[355][356]

The Intercept revealed that government agencies are infiltrating online communities and engaging in "false flag operations" to discredit targets, among them people who have nothing to do with terrorism or national security threats. The two main tactics currently used are the injection of all sorts of false material onto the internet in order to destroy the reputation of targets, and the use of social sciences and other techniques to manipulate online discourse and activism to generate outcomes considered desirable.[357][358][359][360]

The Guardian reported that Britain's surveillance agency GCHQ, with aid from the National Security Agency, intercepted and stored the webcam images of millions of internet users not suspected of wrongdoing. The surveillance program, codenamed Optic Nerve, collected still images of Yahoo webcam chats (one image every five minutes) in bulk and saved them to agency databases.
The agency discovered "that a surprising number of people use webcam conversations to show intimate parts of their body to the other person", estimating that between 3% and 11% of the Yahoo webcam imagery harvested by GCHQ contains "undesirable nudity".[361]

The NSA has built an infrastructure which enables it to covertly hack into computers on a mass scale, using automated systems that reduce the level of human oversight in the process. The NSA relies on an automated system codenamed TURBINE, which in essence enables the automated management and control of a large network of implants (a form of remotely transmitted malware installed on selected individual computer devices or, in bulk, on tens of thousands of devices). As quoted by The Intercept, TURBINE is designed to "allow the current implant network to scale to large size (millions of implants) by creating a system that does automated control implants by groups instead of individually."[362] The NSA has shared many of its files on the use of implants with its counterparts in the so-called Five Eyes surveillance alliance: the United Kingdom, Canada, New Zealand, and Australia. Among other things, thanks to TURBINE and its control over the implants, the NSA is capable of:

The TURBINE implants are linked to, and rely upon, a large network of clandestine surveillance "sensors" that the NSA has installed at locations across the world, including the agency's headquarters in Maryland and eavesdropping bases used by the agency in Misawa, Japan, and Menwith Hill, England. Codenamed TURMOIL, the sensors operate as a sort of high-tech surveillance dragnet, monitoring packets of data as they are sent across the Internet. When TURBINE implants exfiltrate data from infected computer systems, the TURMOIL sensors automatically identify the data and return it to the NSA for analysis. And when targets are communicating, the TURMOIL system can be used to send alerts or "tips" to TURBINE, enabling the initiation of a malware attack.
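The sensor-to-controller "tipping" loop described above can be caricatured in a few lines of code. This is a toy sketch only: the class and function names are invented, and the reports describe the architecture, not any implementation detail.

```python
# Toy model of the tipping loop: a passive sensor matches traffic against a
# watchlist and "tips" an automated controller, which queues an action
# without a human operator in the loop. All names here are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Controller:
    """Stand-in for a TURBINE-style automated implant controller."""
    task_queue: List[str] = field(default_factory=list)

    def receive_tip(self, selector: str) -> None:
        # Each tip automatically queues a task for the matched target.
        self.task_queue.append(f"task:{selector}")

def sensor_scan(packets: List[str], watchlist: List[str], controller: Controller) -> None:
    """Stand-in for a TURMOIL-style sensor: match passing traffic, emit tips."""
    for packet in packets:
        for selector in watchlist:
            if selector in packet:
                controller.receive_tip(selector)

controller = Controller()
sensor_scan(
    packets=["GET /mail user=target@example.org", "GET / user=someone@else.net"],
    watchlist=["target@example.org"],
    controller=controller,
)
print(controller.task_queue)  # one queued automated task for the matched selector
```

The point of the sketch is the scaling property the documents emphasize: because the sensor tips the controller directly, the number of targets handled is limited by traffic volume, not by operator headcount.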
To identify surveillance targets, the NSA uses a series of data "selectors" as they flow across Internet cables. These selectors can include email addresses, IP addresses, or the unique "cookies" containing a username or other identifying information that are sent to a user's computer by websites such as Google, Facebook, Hotmail, Yahoo, and Twitter, as well as unique Google advertising cookies that track browsing habits, unique encryption key fingerprints that can be traced to a specific user, and computer IDs that are sent across the Internet when a Windows computer crashes or updates.[363][364][365][366]

The CIA was accused by U.S. Senate Intelligence Committee chairwoman Dianne Feinstein of spying on a stand-alone computer network established for the committee in its investigation of allegations of CIA abuse in a George W. Bush-era detention and interrogation program.[367]

A voice interception program codenamed MYSTIC began in 2009. Along with RETRO, short for "retrospective retrieval" (RETRO is a voice audio recording buffer that allows retrieval of captured content up to 30 days into the past), the MYSTIC program is capable of recording "100 percent" of a foreign country's telephone calls, enabling the NSA to rewind and review conversations, and the related metadata, up to 30 days later. With the capability to store up to 30 days of recorded conversations, MYSTIC enables the NSA to pull an instant history of a person's movements, associates and plans.[368][369][370][371][372][373]

On March 21, Le Monde published slides from an internal presentation of the Communications Security Establishment Canada, which attributed a piece of malicious software to French intelligence.
The CSEC presentation concluded that the list of malware victims matched French intelligence priorities and found French cultural references in the malware's code, including the name Babar, a popular French children's character, and the developer name "Titi".[374] The French telecommunications corporation Orange S.A. shares its call data with the French intelligence agency DGSE, which hands over the intercepted data to GCHQ.[375] The NSA has spied on the Chinese technology company Huawei.[376][377][378] Huawei is a leading manufacturer of smartphones, tablets, mobile phone infrastructure, and WLAN routers, and also installs fiber optic cable. According to Der Spiegel this "kind of technology ... is decisive in the NSA's battle for data supremacy."[379] The NSA, in an operation named "Shotgiant", was able to access Huawei's email archive and the source code for Huawei's communications products.[379] The US government has had longstanding concerns that Huawei may not be independent of the People's Liberation Army and that the Chinese government might use equipment manufactured by Huawei to conduct cyberespionage or cyberwarfare. The goals of the NSA operation were to assess the relationship between Huawei and the PLA, to learn more about the Chinese government's plans, and to use information from Huawei to spy on Huawei's customers, including Iran, Afghanistan, Pakistan, Kenya, and Cuba.
Former Chinese President Hu Jintao, the Chinese Trade Ministry, banks, and telecommunications companies were also targeted by the NSA.[376][379] The Intercept published a document in which an NSA employee discusses how to build a database of IP addresses, webmail, and Facebook accounts associated with system administrators so that the NSA can gain access to the networks and systems they administer.[380][381] At the end of March 2014, Der Spiegel and The Intercept published, based on a series of classified files from the archive provided to reporters by NSA whistleblower Edward Snowden, articles related to espionage efforts by GCHQ and the NSA in Germany.[382][383] The British GCHQ targeted three German internet firms for information about Internet traffic passing through internet exchange points, important customers of the German internet providers, their technology suppliers, as well as future technical trends in their business sector and company employees.[382][383] On March 7, 2013, the Foreign Intelligence Surveillance Court granted the NSA the authority for blanket surveillance of Germany, its people and institutions, regardless of whether those affected were suspected of having committed an offense, and without an individualized court order.[383] In addition, Germany's Chancellor Angela Merkel was listed in a surveillance search machine and database named Nymrod, along with 121 other foreign leaders.[382][383] As The Intercept wrote: "The NSA uses the Nymrod system to 'find information relating to targets that would otherwise be tough to track down,' according to internal NSA documents. Nymrod sifts through secret reports based on intercepted communications as well as full transcripts of faxes, phone calls, and communications collected from computer systems.
More than 300 'cites' for Merkel are listed as available in intelligence reports and transcripts for NSA operatives to read."[382] Toward the end of April, Edward Snowden said that the United States surveillance agencies spy on Americans more than anyone else in the world, contrary to anything said by the government up to that point.[384] An article published by Ars Technica showed NSA Tailored Access Operations (TAO) employees intercepting a Cisco router.[385] The Intercept and WikiLeaks revealed information about which countries were having their communications collected as part of the MYSTIC surveillance program. On May 19, The Intercept reported that the NSA was recording and archiving nearly every cell phone conversation in the Bahamas with a system called SOMALGET, a subprogram of MYSTIC. The mass surveillance had been occurring without the Bahamian government's permission.[386] Aside from the Bahamas, The Intercept reported NSA interception of cell phone metadata in Kenya, the Philippines, Mexico, and a fifth country it did not name due to "credible concerns that doing so could lead to increased violence." WikiLeaks released a statement on May 23 claiming that Afghanistan was the unnamed nation.[387] In a statement responding to the revelations, the NSA said "the implication that NSA's foreign intelligence collection is arbitrary and unconstrained is false."[386] Through its global surveillance operations the NSA exploits the flood of images included in emails, text messages, social media, videoconferences and other communications to harvest millions of images.
These images are then used by the NSA in sophisticated facial recognition programs to track suspected terrorists and other intelligence targets.[388] Vodafone revealed that there were secret wires that allowed government agencies direct access to its networks.[389] This access does not require warrants, and the direct access wire is often equipment in a locked room.[389] In six countries where Vodafone operates, the law requires telecommunication companies to install such access or allows governments to do so.[389] Vodafone did not name these countries in case some governments retaliated by imprisoning its staff.[389] Shami Chakrabarti of Liberty said "For governments to access phone calls at the flick of a switch is unprecedented and terrifying. Snowden revealed the internet was already treated as fair game. Bluster that all is well is wearing pretty thin – our analogue laws need a digital overhaul."[389] Vodafone published its first Law Enforcement Disclosure Report on June 6, 2014.[389] Vodafone group privacy officer Stephen Deadman said "These pipes exist, the direct access model exists. We are making a call to end direct access as a means of government agencies obtaining people's communication data. Without an official warrant, there is no external visibility. If we receive a demand we can push back against the agency. The fact that a government has to issue a piece of paper is an important constraint on how powers are used."[389] Gus Hosein, director of Privacy International, said "I never thought the telcos would be so complicit. It's a brave step by Vodafone and hopefully the other telcos will become more brave with disclosure, but what we need is for them to be braver about fighting back against the illegal requests and the laws themselves."[389] Above-top-secret documentation of a covert surveillance program named Overseas Processing Centre 1 (OPC-1) (codenamed "CIRCUIT") by GCHQ was published by The Register.
Based on documents leaked by Edward Snowden, GCHQ taps into undersea fiber optic cables via secret spy bases near the Strait of Hormuz and Yemen. BT and Vodafone are implicated.[390] The Danish newspaper Dagbladet Information and The Intercept revealed on June 19, 2014, the NSA mass surveillance program codenamed RAMPART-A. Under RAMPART-A, 'third party' countries tap into fiber optic cables carrying the majority of the world's electronic communications and secretly allow the NSA to install surveillance equipment on these fiber-optic cables. The foreign partners of the NSA turn massive amounts of data – such as the content of phone calls, faxes, e-mails, internet chats, data from virtual private networks, and calls made using Voice over IP software like Skype – over to the NSA. In return, these partners receive access to the NSA's sophisticated surveillance equipment so that they too can spy on the mass of data that flows in and out of their territory. Among the partners participating in the NSA mass surveillance program are Denmark and Germany.[391][392][393] During the week of July 4, a 31-year-old male employee of Germany's intelligence service BND was arrested on suspicion of spying for the United States. The employee was suspected of spying on the German Parliamentary Committee investigating the NSA spying scandal.[394] Former NSA official and whistleblower William Binney spoke at a Centre for Investigative Journalism conference in London. According to Binney, "at least 80% of all audio calls, not just metadata, are recorded and stored in the US. The NSA lies about what it stores." He also stated that the majority of fiber optic cables run through the U.S., which "is no accident and allows the US to view all communication coming in."[395] The Washington Post released a review of a cache provided by Snowden containing roughly 160,000 text messages and e-mails intercepted by the NSA between 2009 and 2012.
The newspaper concluded that nine out of ten account holders whose conversations were recorded by the agency "were not the intended surveillance targets but were caught in a net the agency had cast for somebody else." In its analysis, The Post also noted that many of the account holders were Americans.[396] On July 9, a soldier working within Germany's Federal Ministry of Defence (BMVg) fell under suspicion of spying for the United States.[397] As a result of the July 4 case and this one, the German government expelled the CIA station chief in Germany on July 17.[398] On July 18, former State Department official John Tye published an op-ed in The Washington Post, highlighting concerns over data collection under Executive Order 12333. Tye's concerns are rooted in classified material he had access to through the State Department, though he has not publicly released any classified materials.[399] The Intercept reported that the NSA is "secretly providing data to nearly two dozen U.S. government agencies with a 'Google-like' search engine" called ICREACH. The database, The Intercept reported, is accessible to domestic law enforcement agencies, including the FBI and the Drug Enforcement Administration, and was built to contain more than 850 billion metadata records about phone calls, emails, cellphone locations, and text messages.[400][401] Based on documents obtained from Snowden, The Intercept reported that the NSA and GCHQ had broken into the internal computer network of Gemalto and stolen the encryption keys used in SIM cards no later than 2010. As of 2015, the company was the world's largest manufacturer of SIM cards, making about two billion cards a year.
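ICREACH, as described above, is a shared metadata index: an agency queries a selector (a phone number, an email address) and gets back metadata events – who contacted whom, and when – rather than message content. A minimal sketch of that query model, with all names and records invented for illustration:

```python
import collections

class MetadataIndex:
    """Toy model of an ICREACH-style shared metadata index: queries return
    metadata *events* (who contacted whom, when), never message content."""
    def __init__(self):
        self._by_selector = collections.defaultdict(list)

    def ingest(self, record: dict) -> None:
        # Index each event under both endpoints so either side is searchable.
        for key in (record["from"], record["to"]):
            self._by_selector[key].append(record)

    def search(self, selector: str) -> list:
        """'Google-like' lookup: every event touching this selector."""
        return self._by_selector.get(selector, [])

idx = MetadataIndex()
idx.ingest({"kind": "call", "from": "+44-20-5550", "to": "+1-555-0100", "at": "2012-06-01T10:00Z"})
idx.ingest({"kind": "sms", "from": "+1-555-0100", "to": "+1-555-0199", "at": "2012-06-02T11:30Z"})
print(len(idx.search("+1-555-0100")))  # 2 – one call and one text touch this number
```

Even this toy shows why metadata at scale is revealing: a single selector lookup links communication types, counterparties, and timestamps across otherwise separate records.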
With the keys, the intelligence agencies could eavesdrop on cell phones without the knowledge of mobile phone operators or foreign governments.[402] The New Zealand Herald, in partnership with The Intercept, revealed that the New Zealand government used XKeyscore to spy on candidates for the position of World Trade Organization director general[403] and also on members of the Solomon Islands government.[404] In January 2015, the DEA revealed that it had been collecting metadata records for all telephone calls made by Americans to 116 countries linked to drug trafficking. The DEA's program was separate from the telephony metadata programs run by the NSA.[405] In April, USA Today reported that the DEA's data collection program began in 1992 and included all telephone calls between the United States and Canada and Mexico. Current and former DEA officials described the program as the precursor of the NSA's similar programs.[406] The DEA said its program was suspended in September 2013 after a review of the NSA's programs, and that it was "ultimately terminated."[405] Snowden provided journalists at The Intercept with GCHQ documents regarding another secret program, "Karma Police", calling itself "the world's biggest" data mining operation, formed to create profiles of every visible Internet user's browsing habits. By 2009 it had stored over 1.1 trillion web browsing sessions, and by 2012 was recording 50 billion sessions per day.[407] In March 2017, WikiLeaks published more than 8,000 documents on the CIA.
The confidential documents, codenamed Vault 7, dated from 2013 to 2016, included details on the CIA's hacking capabilities, such as the ability to compromise cars, smart TVs,[411] web browsers (including Google Chrome, Microsoft Edge, Firefox, and Opera),[412][413] and the operating systems of most smartphones (including Apple's iOS and Google's Android), as well as other operating systems such as Microsoft Windows, macOS, and Linux.[414] WikiLeaks did not name the source, but said that the files had "circulated among former U.S. government hackers and contractors in an unauthorized manner, one of whom has provided WikiLeaks with portions of the archive."[411] The disclosures provided impetus for the creation of social movements against mass surveillance, such as Restore the Fourth, and actions like Stop Watching Us and The Day We Fight Back. On the legal front, the Electronic Frontier Foundation joined a coalition of diverse groups filing suit against the NSA. Several human rights organizations – including Amnesty International, Human Rights Watch, Transparency International, and the Index on Censorship – urged the Obama administration not to prosecute, but to protect, "whistleblower Snowden".[419][420][421][422] On the economic front, several consumer surveys registered a drop in online shopping and banking activity as a result of the Snowden revelations.[423] However, it has been argued that the long-term impact among the general population was negligible, as "the general public has still failed to adopt privacy-enhancing tools en masse."[424] A research study that tracked interest in privacy-related webpages following the incident found that the public's interest declined quickly, despite continuous media discussion of the events.[425] Domestically, President Barack Obama claimed that there is "no spying on Americans",[426][427] and White House Press Secretary Jay Carney asserted that the surveillance programs revealed by Snowden had been authorized by Congress.[428] On the international front, U.S.
Attorney General Eric Holder stated that "we cannot target even foreign persons overseas without a valid foreign intelligence purpose."[429] Prime Minister David Cameron warned journalists that "if they don't demonstrate some social responsibility it will be very difficult for government to stand back and not to act."[430] Deputy Prime Minister Nick Clegg emphasized that the media should "absolutely defend the principle of secrecy for the intelligence agencies".[431] Foreign Secretary William Hague claimed that "we take great care to balance individual privacy with our duty to safeguard the public and UK national security."[432] Hague defended the Five Eyes alliance and reiterated that the British–U.S. intelligence relationship must not be endangered because it "saved many lives".[433] Former Prime Minister Tony Abbott stated that "every Australian governmental agency, every Australian official at home and abroad, operates in accordance with the law".[434] Abbott criticized the Australian Broadcasting Corporation as unpatriotic for its reporting on the documents provided by Snowden, whom Abbott described as a "traitor".[435][436] Foreign Minister Julie Bishop also denounced Snowden as a traitor and accused him of "unprecedented" treachery.[437] Bishop defended the Five Eyes alliance and reiterated that the Australian–U.S.
intelligence relationship must not be endangered because it "saves lives".[438] Chinese policymakers became increasingly concerned about the risk of cyberattacks following the disclosures, which demonstrated extensive United States intelligence activities in China.[439]: 129 As part of its response, the Communist Party in 2014 formed the Cybersecurity and Information Leading Group.[439]: 129 In July 2013, Chancellor Angela Merkel defended the surveillance practices of the NSA, and described the United States as "our truest ally throughout the decades".[440][441] After the NSA's surveillance of Merkel was revealed, however, the Chancellor compared the NSA with the Stasi.[442] According to The Guardian, Berlin is using the controversy over NSA spying as leverage to enter the exclusive Five Eyes alliance.[443] Interior Minister Hans-Peter Friedrich stated that "the Americans take our data privacy concerns seriously."[444] Testifying before the German Parliament, Friedrich defended the NSA's surveillance, and cited five terrorist plots on German soil that were prevented because of the NSA.[445] However, in April 2014, another German interior minister criticized the United States for failing to provide sufficient assurances to Germany that it had reined in its spying tactics. Thomas de Maizière, a close ally of Merkel, told Der Spiegel: "U.S.
intelligence methods may be justified to a large extent by security needs, but the tactics are excessive and over-the-top."[446] Minister for Foreign Affairs Carl Bildt defended the FRA and described its surveillance practices as a "national necessity".[447] Minister for Defence Karin Enström said that Sweden's intelligence exchange with other countries is "critical for our security" and that "intelligence operations occur within a framework with clear legislation, strict controls and under parliamentary oversight."[448][449] Interior Minister Ronald Plasterk apologized for incorrectly claiming that the NSA had collected 1.8 million records of metadata in the Netherlands. Plasterk acknowledged that it was in fact Dutch intelligence services that collected the records and transferred them to the NSA.[450][451] The Danish Prime Minister Helle Thorning-Schmidt praised the American intelligence agencies, claiming they had prevented terrorist attacks in Denmark, and expressed her personal belief that the Danish people "should be grateful" for the Americans' surveillance.[452] She later claimed that the Danish authorities had no basis for assuming that American intelligence agencies had performed illegal spying activities toward Denmark or Danish interests.[453] In July 2013, the German government announced an extensive review of German intelligence services.[454][455] In August 2013, the U.S. government announced an extensive review of U.S. intelligence services.[456][457] In October 2013, the British government announced an extensive review of British intelligence services.[458] In December 2013, the Canadian government announced an extensive review of Canadian intelligence services.[459] In January 2014, U.S.
President Barack Obama said that "the sensational way in which these disclosures have come out has often shed more heat than light"[19] and critics such as Sean Wilentz claimed that "the NSA has acted far more responsibly than the claims made by the leakers and publicized by the press." In Wilentz's view, "The leakers have gone far beyond justifiably blowing the whistle on abusive programs. In addition to their alarmism about [U.S.] domestic surveillance, many of the Snowden documents released thus far have had nothing whatsoever to do with domestic surveillance."[20] Edward Lucas, former Moscow bureau chief for The Economist, agreed, asserting that "Snowden's revelations neatly and suspiciously fits the interests of one country: Russia" and citing Masha Gessen's statement that "The Russian propaganda machine has not gotten this much mileage out of a US citizen since Angela Davis's murder trial in 1971."[460] Bob Cesca objected to The New York Times failing to redact the name of an NSA employee and the specific location where an al Qaeda group was being targeted in a series of slides the paper made publicly available.[461] Russian journalist Andrei Soldatov argued that Snowden's revelations had had negative consequences for internet freedom in Russia, as Russian authorities increased their own surveillance and regulation on the pretext of protecting the privacy of Russian users.
Snowden's name was invoked by Russian legislators who supported measures forcing platforms such as Google, Facebook, Twitter, Gmail, and YouTube to locate their servers on Russian soil or install SORM black boxes on their servers so that Russian authorities could control them.[462] Soldatov also contended that, as a result of the disclosures, international support had grown for having national governments take over the powers of the organizations involved in coordinating the Internet's global architectures, which could lead to a Balkanization of the Internet that restricted free access to information.[463] The Montevideo Statement on the Future of Internet Cooperation, issued in October 2013 by ICANN and other organizations, warned against "Internet fragmentation at a national level" and expressed "strong concern over the undermining of the trust and confidence of Internet users globally due to recent revelations".[464] In late 2014, Freedom House said "[s]ome states are using the revelations of widespread surveillance by the U.S. National Security Agency (NSA) as an excuse to augment their own monitoring capabilities, frequently with little or no oversight, and often aimed at the political opposition and human rights activists."[465]
The global surveillance disclosures released to the media by Edward Snowden have caused tension in the bilateral relations of the United States with several of its allies and economic partners, as well as in its relationship with the European Union. In August 2013, U.S. President Barack Obama announced the creation of "a review group on intelligence and communications technologies" that would brief and later report to him.[1] In December, the task force issued 46 recommendations that, if adopted, would subject the National Security Agency (NSA) to additional scrutiny by the courts, Congress, and the president, and would strip the NSA of the authority to infiltrate American computer systems using "backdoors" in hardware or software.[2] Geoffrey R. Stone, a White House panel member, said there was no evidence that the bulk collection of phone data had stopped any terror attacks.[3] U.S. Army General Keith B. Alexander, then director of the NSA, said in June 2013, "These leaks have caused significant and irreversible damage to our nation's security." He added that "the irresponsible release of classified information about these programs will have a long-term detrimental impact on the intelligence community's ability to detect future attacks."[4] In June 2014, Alexander's recently installed successor as the NSA's director, U.S. Navy Admiral Michael S. Rogers, said that while some terrorist groups had altered their communications to avoid surveillance techniques revealed by Snowden, the damage done overall did not lead him to conclude that "the sky is falling."
Conceding there was no absolute protection against leaks by a dedicated insider with access to the agency's networks, Rogers said the NSA must nevertheless "ensure that the volume" of data taken by Snowden "can't be stolen again."[5] Shortly after the disclosures were published, President Obama asserted that the American public had no cause for concern because "nobody is listening to your telephone calls",[6] and "there is no spying on Americans".[7] On June 21, 2013, the Director of National Intelligence James R. Clapper issued an apology for giving erroneous testimony under oath to the United States Congress. Earlier in March that year, Clapper had been asked by Senator Ron Wyden to clarify the alleged surveillance of U.S. citizens by the NSA. Senator Wyden: "Does the NSA collect any type of data at all on millions or hundreds of millions of Americans?" Director Clapper: "No, Sir."[8] In an interview shortly after Snowden's disclosures were first published, Clapper stated that he had misunderstood Wyden's question and answered in what he thought was the "least untruthful manner".[9] Later, in his letter of apology, Clapper wrote that he had only focused on Section 702 of the Foreign Intelligence Surveillance Act during his testimony to Congress, and therefore, he "simply didn't think" about Section 215 of the Patriot Act, which justifies the mass collection of telephone data from U.S. citizens. Clapper said: "My response was clearly erroneous—for which I apologize".[10] "Look, for the longest time I was in fear that I couldn't actually say the phrase 'computer network attack'. This stuff is hideously over-classified. And it gets into the way of a mature public discussion as to what it is that we as a democracy want our nation to be doing up here in the cyber domain."
On July 31, 2013, to increase transparency and because it was deemed in the public interest, the Director of National Intelligence authorized the declassification and public release of several documents pertaining to the collection of telephone metadata pursuant to Section 215 of the PATRIOT Act.[12] On July 19, 2013, Human Rights Watch sent a letter to the Obama administration, urging it to allow companies involved in the NSA's surveillance to report about these activities and to increase government transparency.[16] In June 2013, British government officials issued a confidential DA-Notice to several press organizations, with the aim of restricting their ability to report on these leaks.[17] That same month, the United States Army barred its personnel from access to parts of the website of The Guardian after that site's publication of Snowden's leaks.[18] The entire Guardian website was blocked for personnel stationed throughout Afghanistan, the Middle East, and South Asia.[19] According to a survey undertaken by the human rights group PEN International, these disclosures have had a chilling effect on American writers. Fearing the risk of being targeted by government surveillance, 28% of PEN's American members have curbed their usage of social media, and 16% have self-censored by avoiding controversial topics in their writings.[20][21] On August 18, 2013, David Miranda, partner of journalist Glenn Greenwald, was detained for nine hours under Schedule 7 of the United Kingdom's Terrorism Act 2000. Miranda was returning from Berlin, carrying 58,000 GCHQ documents in a single computer file[22] to Greenwald in Brazil.
Greenwald described Miranda's detention as "clearly intended to send a message of intimidation to those of us who have been reporting on the NSA and GCHQ".[23][24][25][26] The Metropolitan Police and Home Secretary Theresa May called Miranda's detention "legally and procedurally sound".[27] However, Lord Falconer of Thoroton, who helped introduce the bill in the House of Lords, said that under the act, police can only detain someone "to assess whether they are involved in the commission, preparation or instigation of terrorism." He said, "I am very clear that this does not apply, either on its terms or in its spirit, to Mr Miranda."[28] Antonio Patriota, the Brazilian Minister of External Relations, said that Miranda's detention was "not justifiable". The reasons for Miranda's detention were sought from the police by British politicians and by David Anderson Q.C., the Independent Reviewer of Terrorism Legislation.[29] The United States government later said that British officials had given it a "heads up" about Miranda's detention, while adding that the decision to detain him had been a British one.[29] The Guardian editor Alan Rusbridger said the newspaper had received legal threats from the British government and had been urged to surrender all documents leaked by Snowden.
Security officials from the Government Communications Headquarters (GCHQ) later made a visit to the newspaper's London headquarters to ensure that all computer hard drives containing Snowden's documents were destroyed.[27][30] After the NSA Director of Compliance John Delong was interviewed by The Washington Post regarding these disclosures, the White House sent a "prepared" statement to The Post and ordered that "none of Delong's comments could be quoted on the record". The Post refused to comply.[31][32] A criminal investigation of these disclosures by Britain's Metropolitan Police Service was reported in November 2013.[33] On August 18, 2013, Amnesty International asserted that if journalists maintain their independence and report critically about governments, they too may be "targeted" by the British government.[34] On August 20, 2013, Index on Censorship argued that the British government's "threat of legal action" against The Guardian was a "direct attack on press freedom in the UK".[35] On September 4, 2013, U.N.
Special Rapporteur Frank La Rue stressed that the "protection of national security secrets must never be used as an excuse to intimidate the press into silence."[36] Five Latin American countries – Bolivia, Cuba, Ecuador, Nicaragua and Venezuela – voiced their concerns to UN Secretary-General Ban Ki-moon after the plane of Bolivia's President Evo Morales was denied entry by a number of western European countries and was forced to reroute to Austria, based on "suspicion that United States whistleblower Edward Snowden was on board".[37] Ban said it was important to prevent such incidents from occurring in the future and emphasized that "A Head of State and his or her aircraft enjoy immunity and inviolability".[37] On August 8, 2013, Lavabit, a Texas-based secure email service provider reportedly used by Snowden, abruptly announced it was shutting down operations after nearly 10 years of business.[38] The owner, Ladar Levison, posted a statement online saying he would rather go out of business than "become complicit in crimes against the American people."[38] He also said that he was barred by law from disclosing what he had experienced over the preceding six weeks, and that he was appealing the case in the U.S. Fourth Circuit Court of Appeals.[38] Multiple sources speculated that the timing of the statement suggested that Lavabit had been targeted by the US government in its pursuit of information about Snowden.[38][39][40][41][42] The following day, a similar email service, Silent Circle, preemptively shut down in order to "prevent spying".[43] Snowden said of the Lavabit closure: "Ladar Levison and his team suspended the operations of their 10-year-old business rather than violate the Constitutional rights of their roughly 400,000 users. The President, Congress, and the Courts have forgotten that the costs of bad policy are always borne by ordinary citizens, and it is our job to remind them that there are limits to what we will pay."
He said that "internet titans" like Google should ask themselves why they weren't "fighting for our interests the same way small businesses are."[44] In March 2014, The New York Times reported that revelations of NSA spying had cost U.S. tech companies, including Microsoft and IBM, over $1 billion. A senior analyst at the Information Technology and Innovation Foundation said it was "clear to every single tech company that this is affecting their bottom line," and predicted that the U.S. cloud computing industry could lose $35 billion by 2016. Forrester Research, an independent technology and market research company, said losses could be as high as $180 billion, or 25 percent of industry revenue.[45] In August, Chairman of the Joint Chiefs of Staff Martin Dempsey said that Snowden "has caused us some considerable damage to our intelligence architecture. Our adversaries are changing the way that they communicate."[46] In October, former GCHQ director Sir David Omand, speaking of how useful Snowden's stay in Russia could be to Russia's intelligence services, told the BBC: "Part of me says that not even the KGB in its heyday of Philby, Burgess and Maclean in the 1950s could have dreamt of acquiring 58,000 highly classified intelligence documents."[47] Snowden stated that he had not leaked any documents to Russia.[48] Also in October, Andrew Parker, director general of the UK Security Service, maintained that the exposing of intelligence techniques had given extremists the ability to evade the intelligence agencies; he said, "It causes enormous damage to make public the reach and limits of GCHQ techniques.
Such information hands the advantage to the terrorists. It is the gift they need to evade us and strike at will."[49]

That same month, the Financial Times editorialized that security chiefs were "right to be alarmed, knowing that terrorists can change their modus operandi in response to new information on their capabilities" and that there was "no firm evidence that the intelligence agencies are using these new collection capabilities for malign ends."[22]

On June 9, 2013, U.S. Director of National Intelligence James R. Clapper, referring to the surveillance activities lately reported in The Washington Post and The Guardian, stressed that the activities were lawful, conducted under authorities approved by the U.S. Congress, and that "significant misimpressions" had resulted from the articles published; he called the disclosures of "intelligence community measures used to keep Americans safe" "reckless".[50] He condemned the leaks as having done "huge, grave damage" to U.S. intelligence capabilities.[51]

That same day, a We the People petition was launched via the whitehouse.gov website seeking "a full, free and absolute pardon for any crimes [Snowden] has committed or may have committed related to blowing the whistle on secret NSA surveillance programs."[52] The petition attained 100,000 signatures within two weeks, meeting the threshold that requires an official response from the White House.[53] The White House answered on July 28, 2015, declining to pardon Snowden. In a response written by Lisa Monaco, Obama's homeland security and terrorism advisor, the White House said Snowden's disclosures had severe consequences for national security and that he should come home to be judged by a jury of his peers.[54]

Also in June 2013, the U.S.
military blocked access to parts of The Guardian website related to government surveillance programs for thousands of defense personnel across the country,[55] and to The Guardian's entire website for personnel stationed in Afghanistan, the Middle East, and South Asia.[19][56] A spokesperson described the filtering as a routine "network hygiene" measure intended to mitigate unauthorized disclosures of classified information onto the Department of Defense's unclassified networks.[56]

In August 2013, U.S. President Barack Obama said that he had called for a review of U.S. surveillance activities even before Snowden had begun revealing details of the NSA's operations.[57] Obama announced that he was directing DNI Clapper "to establish a review group on intelligence and communications technologies" that would brief and later report to the president.[1][58] In December, the task force issued 46 recommendations that, if adopted, would subject the NSA to additional scrutiny by the courts, Congress, and the president, and would strip the NSA of the authority to infiltrate American computer systems using "backdoors" in hardware or software.[2] Panel member Geoffrey R. Stone said there was no evidence that the bulk collection of phone data had stopped any terror attacks.[3]

On October 31, 2013, U.S. Secretary of State John Kerry stated that "in some cases" the NSA had "reached too far" in some of its surveillance activities, and promised that it would be stopped.[59][60]

In January 2014, James Clapper gave public testimony to a session of the Senate Intelligence Committee. He asked that "Snowden and his accomplices" return the purloined NSA documents.
When Clapper was asked whether the word "accomplices" referred to journalists, Clapper's spokesperson Shawn Turner responded, "Director Clapper was referring to anyone who is assisting Edward Snowden to further threaten our national security through the unauthorized disclosure of stolen documents related to lawful foreign intelligence collection programs."[61]

Also in January 2014, a review by the Privacy and Civil Liberties Oversight Board (PCLOB) concluded that the NSA's collection of every U.S. phone record on a daily basis violates the legal restrictions of the statute cited to authorize it. "The Section 215 bulk telephone records program," PCLOB reported, "lacks a viable legal foundation under Section 215 [of the Patriot Act], implicates constitutional concerns under the First and Fourth Amendments, raises serious threats to privacy and civil liberties as a policy matter, and has shown only limited value. As a result, the Board recommends that the government end the program."[62] The White House rejected the findings, saying "We simply disagree with the board's analysis on the legality of the program."[63] A second PCLOB review, in July 2014, concluded that the NSA's surveillance program targeting foreigners overseas is lawful under Section 702 of the FISA Amendments Act of 2008 and effective, but that certain elements push "close to the line" of being unconstitutional.[64] The July report said that the Board was "impressed with the rigor of the government's efforts to ensure that it acquires only those communications it is authorized to collect, and that it targets only those persons it is authorized to target. Moreover, the government has taken seriously its obligations to establish and adhere to a detailed set of rules regarding how it handles U.S. person communications that it acquires under the program."[65]

Reactions to the global surveillance disclosures among members of the U.S.
Congress were initially largely negative.[66] Speaker of the House John Boehner[67] and senators Dianne Feinstein[68] and Bill Nelson[69] called Snowden a traitor, and several senators and representatives joined them in calling for Snowden's arrest and prosecution.[68][70][71] Arizona Senator John McCain criticized politicians who voted in favor of the PATRIOT Act but were outraged by the NSA spying on phone calls, saying, "We passed the Patriot Act. We passed specific provisions of the act that allowed for this program to take place, to be enacted in operation. Now, if members of Congress did not know what they were voting on, then I think that that's their responsibility a lot more than it is the government's."[72]

In July 2013, the U.S. Senate Committee on Appropriations unanimously adopted an amendment by Senator Lindsey Graham to the "Fiscal Year 2014 Department of State, Foreign Operations, and Related Programs Appropriations Bill"[73] that would have sought sanctions against any country offering asylum to Snowden.[74][75][76]

Also in July 2013, Rep. Justin Amash (R-Mich.) and Rep. John Conyers (D-Mich.) proposed the "Amash–Conyers Amendment" to the National Defense Authorization Act.[77] If passed, the amendment would have curtailed "the ongoing dragnet collection and storage of the personal records of innocent Americans."
The House rejected the amendment by a vote of 205–217.[78] An analysis indicated that those who voted against the amendment received 122% more in campaign contributions from defense contractors than those who voted in favor.[79]

In September 2013, Senators Mark Udall, Richard Blumenthal, Rand Paul and Ron Wyden introduced a "sweeping surveillance reform" proposal.[80] Called the most comprehensive proposal to date, the "Intelligence Oversight and Surveillance Reform Act" seeks to end the bulk collection of communication records made legal in section 215 of the Patriot Act and to rein in other "electronic eavesdropping programs".[81] Wyden told The Guardian that the Snowden disclosures had "caused a sea change in the way the public views the surveillance system". The draft bill is a blend of 12 similar proposals as well as other legislative proposals.[82]

In October 2013, Congressman Jim Sensenbrenner, author of the Patriot Act, introduced a proposal to the House of Representatives called the USA Freedom Act to end the bulk collection of Americans' metadata and reform the Foreign Intelligence Surveillance Act (FISA) court.[83] Senators introduced two different reform proposals. One, the USA Freedom Act (H.R. 3361 / S. 1599),[84][85][86][87] would effectively halt "bulk" records collection under the USA Patriot Act, while also requiring a warrant to deliberately search for the e-mail and phone call content of Americans that is collected as part of a surveillance program targeting foreigners located overseas. The other, the FISA Improvements Act, would preserve the program while strengthening privacy protections. It would also codify the requirement that analysts have a "reasonable articulable suspicion" that a phone number is associated with terrorism in order to query the NSA phone records database; require that the FISA court promptly review each such determination; and limit the retention period for phone records.
Both proposals call for the introduction of a special advocate to promote privacy interests before the FISA court.[88]

In April 2014, The Washington Post reported that some federal judges holding low-level positions had been balking at sweeping requests by law enforcement for cellphone and other sensitive personal data. The Post called it "a small but growing faction, including judges in Texas, Kansas, New York and Pennsylvania," and said the judges deemed the requests overly broad and at odds with basic constitutional rights. Although some rulings were overturned, said the Post, their decisions have shaped when and how investigators can seize information detailing the locations, communications and online histories of Americans. Albert Gidari Jr., a partner at Perkins Coie who represents technology and telecommunications companies, told the Post that these judges "don't want to be the ones who approve an order that later becomes public and embarrassing…. Nobody likes to be characterized as a rubber stamp." According to the Post, some legal observers have called this "the Magistrates' Revolt," which began several years ago but gained power amid mounting public anger about government surveillance capabilities after the NSA disclosures.[89]

In the wake of the NSA leaks, conservative public interest lawyer and Judicial Watch founder Larry Klayman filed a lawsuit claiming that the federal government had unlawfully collected metadata for his telephone calls and was harassing him (see Klayman v. Obama), and the American Civil Liberties Union (ACLU) filed a lawsuit against Director of National Intelligence James Clapper alleging that the NSA's phone records program was unconstitutional (see ACLU v. Clapper).
After the judge in each case had issued rulings seemingly at odds with one another, Gary Schmitt (former staff director of the Senate Select Committee on Intelligence) wrote in The Weekly Standard, "The two decisions have generated public confusion over the constitutionality of the NSA's data collection program—a kind of judicial 'he-said, she-said' standoff."[90]

In 2014, legislators in several U.S. states introduced bills based upon a model act, written by anti-surveillance activists, called the "Fourth Amendment Protection Act". The bills seek to prohibit the respective state government from cooperating with the NSA in various ways: the Utah bill would prohibit provision of water to NSA facilities;[91] the California bill would prohibit state universities from conducting research for the NSA;[92] and the Kansas bill would require a search warrant for data collection.[93]

The disclosures have inspired public protests. After the June 2013 release, a political movement known as "Restore the Fourth" was formed in the United States and rapidly gained momentum. In early July, Restore the Fourth was responsible for protests in more than 80 cities including Seattle, San Francisco, Denver, Chicago, Los Angeles and New York City. These protests were loosely coordinated via online messaging services and involved protesters from all over the United States.[94]

On October 26, 2013, an anti-NSA rally called "Stop Watching Us" was held in Washington, D.C., billed by organizers as the "largest rally yet to protest mass surveillance".
A diverse coalition of over 100 advocacy groups organized the event and attracted thousands of protesters calling for an end to mass surveillance.[95] Speakers included former governor Gary Johnson and NSA whistleblower Thomas Drake.[96][97]

"The Day We Fight Back" was a protest against mass surveillance by the National Security Agency (NSA) held on February 11, 2014.[98][99] The 'day of action' primarily took the form of webpage banner advertisements urging viewers to contact their lawmakers over issues surrounding cyber surveillance and a free Internet.[98][99] By February 10, more than 5,700 websites and organizations had signed up to show support by featuring The Day We Fight Back banners for 24 hours.[100] As February 11 drew to a close, The New York Times posted a blog titled "The Day the Internet Didn't Fight Back," reporting that "the protest on Tuesday barely registered. Wikipedia did not participate. Reddit… added an inconspicuous banner to its homepage. Sites like Tumblr, Mozilla and DuckDuckGo, which were listed as organizers, did not include the banner on their homepages. The eight major technology companies—Google, Microsoft, Facebook, AOL, Apple, Twitter, Yahoo and LinkedIn—only participated Tuesday insofar as having a joint website flash the protest banner."[101]

An analysis released by the New America Foundation in January 2014, reviewing 225 terrorism cases since the September 11 attacks, found that the NSA's bulk collection of phone records "has had no discernible impact on preventing acts of terrorism," and that U.S.
government claims of the program's usefulness were "overblown."[102][103]

On June 17, 2013, nearly two weeks after the first disclosure was published, Chinese Foreign Ministry spokeswoman Hua Chunying said at a daily briefing, "We believe the United States should pay attention to the international community's concerns and demands and give the international community the necessary explanation."[104]

The South China Morning Post published a poll of Hong Kong residents, conducted while Snowden was still in Hong Kong, showing that half of the 509 respondents believed the Chinese government should not surrender Snowden to the United States if Washington were to make such a request; 33 percent of those polled thought of Snowden as a hero, 12.8 percent described him as a traitor, and 36 percent said he was neither.[105]

Referring to Snowden's presence in the territory, Hong Kong chief executive Leung Chun-ying said that the government would "handle the case of Mr Snowden in accordance with the laws and established procedures of Hong Kong [and] follow up on any incidents related to the privacy or other rights of the institutions or people in Hong Kong being violated."[106] Pan-democrat legislators Gary Fan and Claudia Mo said that the perceived U.S. prosecution of Snowden would set "a dangerous precedent and will likely be used to justify similar actions" by authoritarian governments.[107] During Snowden's stay, the two main political groups, the pan-democrats and the pro-Beijing camp, found rare agreement in supporting Snowden.[108][109] The pro-Beijing DAB party even organised a separate march to Government headquarters for Snowden.
The People's Daily and the Global Times editorials of June 19 stated, respectively, that the central Chinese government was unwilling to be involved in a "mess" caused by others, and that the Hong Kong government should follow public opinion and not concern itself with Sino-US relations.[110] A Tsinghua University communications studies specialist, Liu Jianming, interpreted the two articles as suggesting that the mainland government did not want further involvement in the case and that the Hong Kong government should handle it independently.[110]

After Snowden left Hong Kong, Chinese-language newspapers such as the Ming Pao and the Oriental Daily expressed relief that Hong Kong no longer had to shoulder the burden of the Snowden situation.[111] Mainland experts said that, although the Central Government did not want to appear to be intervening in the matter, it was inconceivable that the Hong Kong government had acted independently in a matter that could have far-reaching consequences for Sino-US relations. One expert suggested that, by doing so, China had "returned the favor" for the United States not having accepted the asylum plea from Wang Lijun in February 2012.[112] The official Chinese Communist Party mouthpiece, the People's Daily, denied the US government accusation that the PRC central government had allowed Snowden to escape, and said that Snowden helped in "tearing off Washington's sanctimonious mask."[113]

On November 2, 2013, the Malaysian Foreign Minister Anifah Aman summoned the ambassadors of Australia and the United States to protest an alleged American-led spying network in Asia.[114]

Early in July 2013, the European Commissioner for Home Affairs, Cecilia Malmström, wrote to two U.S. officials that "mutual trust and confidence have been seriously eroded and I expect the U.S.
to do all that it can to restore them".[115]

On October 20, 2013, a committee of the European Parliament backed a measure that, if enacted, would require American companies to seek clearance from European officials before complying with United States warrants seeking private data. The legislation had been under consideration for two years. The vote was part of efforts in Europe to shield citizens from online surveillance in the wake of revelations about a far-reaching spying program by the NSA.[116]

The European Council, meeting at the end of October 2013, in a statement signed by all 28 EU leaders, stressed that "intelligence gathering is a vital element in the fight against terrorism" and noted "the close relationship between Europe and the USA and the value of that partnership", but said that this must "be based on respect and trust," a lack of which "could prejudice the necessary cooperation in the field of intelligence gathering".[117][118]

On December 23, 2013, the European Parliament released the results[119] of its inquiry into the NSA's activities.[120] "The European Parliament's committee inquiry into the spying scandal," Deutsche Welle reported, "was the first of this scale. No individual EU country has looked into the scandal this thoroughly and no EU government has been as explicit in its criticism of the US government."[121] The draft report covered the preceding six months and was, said Deutsche Welle, "hard on all sides—including governments and companies in the EU."
Presented by Claude Moraes, a British Member of the European Parliament from the Progressive Alliance of Socialists and Democrats, the report found what it called "compelling evidence of the existence of far-reaching, complex and highly technologically advanced systems designed by US and some Member States' intelligence services to collect, store and analyze communication and location data and metadata of all citizens around the world on an unprecedented scale and in an indiscriminate and non-suspicion-based manner." The fight against terrorism, said the report, can "never in itself be a justification for untargeted, secret and sometimes even illegal mass surveillance programs."[121] Moraes and his fellow rapporteurs considered it "very doubtful that data collection of such magnitude is only guided by the fight against terrorism, as it involves the collection of all possible data of all citizens; points therefore to the possible existence of other power motives such as political and economic espionage."[119]

On October 21, 2013, France summoned Charles Rivkin, the U.S. Ambassador to France, to clarify and explain the NSA's surveillance of French citizens.[122] Speaking to journalists, President François Hollande said, "We cannot accept this kind of behaviour between partners and allies. We ask that this immediately stop."[123] According to The Wall Street Journal, data allegedly collected by the NSA in France was actually collected by French intelligence agencies outside France and then shared with the United States.[124]

According to The Wall Street Journal, "The outcry over NSA eavesdropping has been most pronounced in Germany, a country whose history of dictatorship has left the population particularly sensitive to violations of personal privacy."[126] It was revealed that, beginning in 2002, German Chancellor Angela Merkel's phone had been on an NSA target list.[127]

On July 1, 2013, the German Foreign Ministry summoned Philip D. Murphy, the U.S.
Ambassador to Germany, over allegations that the NSA had spied on institutions of the European Union.[128]

In early August 2013, Germany canceled largely symbolic Cold War-era administrative agreements with Britain, the United States and France, which had granted those Western countries, as powers with troops stationed in West Germany, the right to request surveillance operations to protect those forces.[129] At the end of August, under the orders of the German domestic intelligence agency, a federal police helicopter conducted a low-altitude flyover of the United States Consulate in Frankfurt, apparently in search of suspected clandestine eavesdropping facilities. A German official called it a symbolic "shot across the bow."[130]

On October 24, 2013, EU heads of state met to discuss a proposed data protection law. The representatives of Italy, Poland and France wanted the law to be passed before the May 2014 European Parliament elections. Germany, represented by Angela Merkel, and the UK, represented by David Cameron, favored a slower implementation; their wishes prevailed. About the "Five Eyes" espionage alliance, Merkel remarked, "Unlike David, we are unfortunately not part of this group."[131] Also on October 24, the Foreign Ministry summoned John B. Emerson, the U.S.
Ambassador to Germany, to clarify allegations that the NSA had tapped into Chancellor Angela Merkel's mobile phone.[132][133]

While the German government had hoped for a "no spy" agreement with the U.S., by January 2014 it was reported that Germany had "given up hope" of securing such a treaty.[134] The Foreign Office's Philipp Mißfelder declared that "the current situation in transatlantic relations is worse than it was at the low-point in 2003 during the Iraq War".[135]

The German Parliamentary Committee investigating the NSA spying scandal was established on March 20, 2014, by the German Parliament in order to investigate the extent and background of foreign secret services' spying in Germany and to search for strategies to protect German telecommunications by technical means.[136] It was later revealed that Germany's BND intelligence service had covertly monitored European defence interests and politicians inside Germany at the request of the NSA.[137]

Italy's Prime Minister Enrico Letta asked John Kerry, the U.S. Secretary of State, to clarify whether the NSA had illegally intercepted telecommunications in Italy.[138] On October 23, 2013, the Italian Interior Minister Angelino Alfano told reporters, "We have a duty to [provide] clarity to Italian citizens—we must obtain the whole truth and tell the whole truth, without regard for anyone."[139]

On October 25, 2013, the Spanish Prime Minister Mariano Rajoy summoned James Costos, the U.S. Ambassador to Spain, to clarify reports about the NSA's surveillance of the Spanish government.[140] Spanish EU Minister Íñigo Méndez de Vigo said such practices, if true, were "inappropriate and unacceptable".
An EU delegation was to meet officials in Washington to convey their concerns.[141] According to The Wall Street Journal, data allegedly collected by the NSA in Spain was actually collected by Spanish intelligence agencies outside Spain and then shared with the United States.[124] On October 29, The Washington Post reported that an anonymous "senior Obama administration official" had also described such an arrangement with Spain.[142]

British Foreign Minister William Hague admitted that Britain's GCHQ was also spying and collaborating with the NSA, and defended the two agencies' actions as "indispensable."[143][144][145] British Prime Minister David Cameron issued a veiled threat to resort to prior restraint, through high court injunctions and DA-Notices, if The Guardian did not obey his demands to stop reporting its revelations on spying by GCHQ and the NSA,[146] a development that "alarmed" the Committee to Protect Journalists[147] and spurred 70 of the world's leading human rights organizations to write an open letter to the newspaper expressing their concern about press and other freedoms in the UK.[148][149]

In 2014, the Director of GCHQ authored an article in the Financial Times on the topic of internet surveillance, stating that "however much [large US technology companies] may dislike it, they have become the command and control networks of choice for terrorists and criminals" and that GCHQ and its sister agencies "cannot tackle these challenges at scale without greater support from the private sector", arguing that most internet users "would be comfortable with a better and more sustainable relationship between the [intelligence] agencies and the tech companies".
Since the 2013 surveillance disclosures, large US technology companies have improved security and become less cooperative with foreign intelligence agencies, including those of the UK, generally requiring a US court order before disclosing data.[150][151]

On October 24, 2013, the Mexican Foreign Minister José Antonio Meade Kuribreña met with U.S. Ambassador Earl Anthony Wayne to discuss allegations reported by Der Spiegel that the NSA had hacked the emails of former president Felipe Calderón while in office.[152]

Former Foreign Minister Bob Carr remarked that the U.S. would be critical of any other nation that failed to prevent the release of such sensitive documents. "Certainly if it had gone the other way," said Carr, "if there'd been some official in Canberra, some contractor in Canberra, who allowed a slew of material as sensitive as this to be plastered over the world's media, America would be saying very stern things to someone they'd be regarding as a woefully immature ally and partner."[153]

On November 1, 2013, the Foreign Ministry of Indonesia summoned Australia's Ambassador Greg Moriarty to explain his country's surveillance of President Susilo Bambang Yudhoyono and other Indonesian political leaders.[154] On November 18, the Australian ambassador was summoned again by Indonesian government officials, who pledged to review all types of cooperation with Australia. The Indonesian Foreign Minister Marty Natalegawa called the spying "unacceptable", adding, "This is an unfriendly, unbecoming act between strategic partners." The Indonesian ambassador to Australia was also recalled in response to the incident.[155]

The Brazilian government expressed outrage at the revelations that the National Security Agency had directly targeted the communications of president Dilma Rousseff and her top aides.[156] It called the incident an "unacceptable violation of sovereignty" and requested an immediate explanation from the U.S.
government.[157] Brazil's government signaled it would consider cancelling Rousseff's state visit to Washington—the only state visit by a foreign leader scheduled that year.[158] A senior Brazilian official stated the country would downgrade commercial ties unless Rousseff received a public apology.[158] That would include ruling out the $4 billion purchase of Boeing F-18 Super Hornet fighters and cooperation on oil and biofuels technology, as well as other commercial agreements.[158] Petrobras announced that it was investing R$21 billion over five years to improve its data security.[159]

Ecuador responded by renouncing U.S. trade benefits and offering to pay a similar amount, $23 million per year, to finance human rights training in America to help avoid what Ecuador's Foreign Minister Ricardo Patiño called "violations of privacy, torture and other actions that are denigrating to humanity."[160][161]

Russia, South Africa, and Turkey reacted angrily after it was revealed that their diplomats had been spied on during the 2009 G-20 London summit.[162]

London-based Index on Censorship called upon the U.S. government to uphold the First Amendment, saying, "The mass surveillance of citizens' private communications is unacceptable—it both invades privacy and threatens freedom of expression. The US government cannot use the excuse of national security to justify either surveillance on this scale or the extradition of Snowden for revealing it."[163]

In July 2013, speaking to the foreign affairs committee of the Icelandic Parliament in Reykjavík, UN Secretary-General Ban Ki-moon said that in his personal opinion, Edward Snowden had misused his right to digital access and created problems that outweighed the benefits of public disclosure. Birgitta Jónsdóttir, an Icelandic legislator who in 2010 assisted WikiLeaks in publishing U.S. state secrets leaked by Chelsea Manning, expressed alarm at Ban's remarks.
She said that he "seemed entirely unconcerned about the invasion of privacy by governments around the world, and only concerned about how whistleblowers are misusing the system."[164]

In The Blacklist episode "The Alchemist (No. 101)" (season 1, episode 12, minutes 00:22:00–00:22:55), one of the technical experts Red tasked with reconstituting documents shredded by American government agencies reports: "We actually reached out to the Germans for help. They're the ones who designed the software." Red replies: "Ah, the Germans. Despite recent headlines, they're still the best at keeping an eye on their fellow man."[165][166][167]
https://en.wikipedia.org/wiki/Reactions_to_global_surveillance_disclosures
In cryptography, subliminal channels are covert channels that can be used to communicate secretly in normal-looking communication over an insecure channel.[1] Subliminal channels in digital signature cryptosystems were found in 1984 by Gustavus Simmons. Simmons describes how the "Prisoners' Problem" can be solved through parameter substitution in digital signature algorithms.[2][a]

Signature algorithms like ElGamal and DSA have parameters which must be set with random information. He shows how one can make use of these parameters to send a message subliminally. Because the algorithm's signature creation procedure is unchanged, the signature remains verifiable and indistinguishable from a normal signature. Therefore, it is hard to detect whether the subliminal channel is being used.

The broadband and the narrow-band channels can use different algorithm parameters. A narrow-band channel cannot transport maximal information, but it can be used to send an authentication key or a data stream. Research is ongoing: further developments can enhance the subliminal channel, e.g., by allowing a broadband channel to be established without the need to agree on an authentication key in advance. Other developments try to avoid the subliminal channel entirely.

An easy example of a narrow-band subliminal channel for normal human-language text would be to define that an even word count in a sentence is associated with the bit "0" and an odd word count with the bit "1". The question "Hello, how do you do?" would therefore send the subliminal message "1".

The Digital Signature Algorithm has one subliminal broadband channel[3] and three subliminal narrow-band channels.[4] At signing, the parameter k has to be set randomly. For the broadband channel, this parameter is instead set to a subliminal message m′. The formula for message extraction is derived by transposing the calculation formula for the signature value s.
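A minimal sketch of the DSA broadband channel may make the transposition concrete. The parameters below (p = 607, q = 101, g = 64, and the keys) are illustrative assumptions chosen for readability, far too small for real use; the source only specifies the mechanism, not these values. The signer sets the nonce k to the subliminal message; a receiver who shares the private key x recovers it from s = k⁻¹(H(m) + xr) mod q.

```python
# Toy DSA with a Simmons-style broadband subliminal channel.
# Tiny, insecure parameters: q divides p - 1, and g has order q mod p.
p, q, g = 607, 101, 64           # 607 - 1 = 2 * 3 * 101; g = 2**6 mod 607
x = 57                           # signing key, shared with the subliminal receiver
y = pow(g, x, p)                 # public verification key

def sign(h, m_sub):
    """Sign hash h, smuggling m_sub (1 <= m_sub < q) as the nonce k."""
    k = m_sub                    # subliminal: k carries the message, not randomness
    r = pow(g, k, p) % q
    s = pow(k, -1, q) * (h + x * r) % q
    return r, s

def verify(h, r, s):
    """Standard DSA verification -- unchanged, so the channel is undetectable."""
    w = pow(s, -1, q)
    v = pow(g, h * w % q, p) * pow(y, r * w % q, p) % p % q
    return v == r

def extract(h, r, s):
    """Receiver knows x: transpose s = k^-1 (h + x*r) mod q to recover k."""
    return pow(s, -1, q) * (h + x * r) % q

h = 42                           # stand-in for the hash of the cover message
r, s = sign(h, m_sub=77)
print(verify(h, r, s))           # True  -- looks like an ordinary signature
print(extract(h, r, s))          # 77    -- the hidden message
```

Note that the signature passes ordinary verification; only a party holding x can tell the nonce from random, which is exactly why Simmons' channel is hard to detect.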
In this example, an RSA modulus purporting to be of the form n = pq is actually of the form n = pqr, for primes p, q, and r. Calculation shows that exactly one extra bit can be hidden in the digitally signed message. The cure for this was found by cryptologists at the Centrum Wiskunde & Informatica in Amsterdam, who developed a zero-knowledge proof that n is of the form n = pq.[citation needed] This example was motivated in part by The Empty Silo Proposal.

Here is a (real, working) PGP public key (using the RSA algorithm), which was generated to include two subliminal channels: the first is the "key ID", which should normally be random hex but below is "covertly" modified to read "C0DED00D"; the second is the base64 representation of the public key—again supposed to be random gibberish, but the English-readable message "//This+is+Christopher+Drakes+PGP+public+key//Who/What+is+watcHIng+you//" has been inserted. Adding both these subliminal messages was accomplished by tampering with the random number generation during the RSA key generation phase.

A modification to the Brickell and DeLaurentis signature scheme provides a broadband channel without the necessity of sharing the authentication key.[5] The Newton channel is not a subliminal channel, but it can be viewed as an enhancement.[6]

With the help of a zero-knowledge proof and a commitment scheme it is possible to prevent the usage of the subliminal channel.[7][8] It should be mentioned that this countermeasure still has a 1-bit subliminal channel, because a proof can succeed or purposely fail.[9] Another countermeasure can detect, but not prevent, the subliminal usage of the randomness.[10]
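The key-ID trick above can be imitated in miniature: keep re-rolling key material until a fingerprint-derived "key ID" spells a chosen hex string. This sketch is an illustrative assumption, not the actual PGP procedure — it uses a SHA-256 prefix as a stand-in for a real key ID and random bytes as a stand-in for RSA key generation, but the rejection-sampling principle is the same one the tampered generator exploits.

```python
import hashlib
import os

def forge_key_id(target_prefix: str) -> bytes:
    """Re-roll 'key material' until its fingerprint starts with target_prefix.

    A real attack would re-run RSA key generation on each iteration;
    here random bytes stand in for the generated key.
    """
    while True:
        material = os.urandom(32)
        if hashlib.sha256(material).hexdigest().startswith(target_prefix):
            return material

key = forge_key_id("c0de")                   # ~16**4 = 65536 tries on average
print(hashlib.sha256(key).hexdigest()[:4])   # c0de
```

The cost grows by a factor of 16 per forced hex digit, which is why short vanity IDs like "C0DED00D" are feasible while long ones are not.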
https://en.wikipedia.org/wiki/Subliminal_channel
Action at a distance is an anti-pattern in computer science in which behavior in one part of a program varies wildly based on difficult- or impossible-to-identify operations in another part of the program. The way to avoid the problems associated with action at a distance is proper design, which avoids global variables and alters data only in a controlled and local manner, or the use of a pure functional programming style with referential transparency.

The term is based on the concept of action at a distance in physics, which may refer to a process that allows objects to interact without a mediating particle such as the gluon. In particular, Albert Einstein referred to quantum nonlocality as "spooky action at a distance".

Software bugs due to action at a distance may arise because a program component is doing something at the wrong time, or affecting something it should not, and it is very difficult to track down which component is responsible. Side effects from innocent-looking actions can put the program in an unknown state, so local data is not necessarily local. The solution in this scenario is to define which components should be interacting with which others. A proper design that accurately defines the interface between parts of a program, and that avoids shared state, can largely eliminate problems caused by action at a distance.

This example, from the Perl programming language, demonstrates an especially serious case of action at a distance (note that the $[ variable was deprecated in later versions of Perl[1]): Array indices normally begin at 0 because the value of $[ is normally 0; if you set $[ to 1, then arrays start at 1, which makes Fortran and Lua programmers happy, and so we see examples like this in the perl(3) man page. And of course you could set $[ to 17 to have arrays start at some arbitrary number instead of at 0 or 1. This was a great way to sabotage module authors. Fortunately, sanity prevailed. These features are now recognized to have been mistakes.
The perl5-porters mailing list now has a catchphrase for such features: they're called "action at a distance". The principle is that a declaration in one part of the program shouldn't drastically and invisibly alter the behavior of some other part of the program.

Proper object-oriented programming involves design principles that avoid action at a distance. The Law of Demeter states that an object should only interact with other objects near itself; should action in a distant part of the system be required, it should be implemented by propagating a message. Proper design severely limits occurrences of action at a distance, contributing to maintainable programs. Pressure to create an object orgy results from poor interface design, perhaps taking the form of a God object, not implementing true objects, or failing to heed the Law of Demeter.

One of the advantages of functional programming is that action at a distance is de-emphasised, sometimes to the point of being impossible to express at all in the source language. Being aware of the danger of allowing action at a distance into a design, and being able to recognize its presence, is useful in developing programs that are correct, reliable and maintainable. Given that the majority of the expense of a program may lie in the maintenance phase, and that action at a distance makes maintenance difficult, expensive and error-prone, it is worth the effort during design to avoid it.
https://en.wikipedia.org/wiki/Action_at_a_distance_(computer_programming)
In digital logic, a don't-care term[1][2] (abbreviated DC, historically also known as redundancies,[2] irrelevancies,[2] optional entries,[3][4] invalid combinations,[5][4][6] vacuous combinations,[7][4] forbidden combinations,[8][2] unused states or logical remainders[9]) for a function is an input sequence (a series of bits) for which the function output does not matter. An input that is known never to occur is a can't-happen term.[10][11][12][13] Both these types of conditions are treated the same way in logic design and may be referred to collectively as don't-care conditions for brevity.[14] The designer of a logic circuit to implement the function need not care about such inputs, but can choose the circuit's output arbitrarily, usually such that the simplest, smallest, fastest or cheapest circuit results (minimization), or such that power consumption is minimized.[15][16]

Don't-care terms are important to consider in minimizing logic circuit design, including graphical methods like Karnaugh–Veitch maps and algebraic methods such as the Quine–McCluskey algorithm. In 1958, Seymour Ginsburg proved that minimization of states of a finite-state machine with don't-care conditions does not necessarily yield a minimization of logic elements. Direct minimization of logic elements in such circuits was computationally impractical (for large systems) with the computing power available to Ginsburg in 1958.[17]

Examples of don't-care terms are the binary values 1010 through 1111 (10 through 15 in decimal) for a function that takes a binary-coded decimal (BCD) value, because a BCD value never takes on such values (so-called pseudo-tetrades); in the pictures, the circuit computing the lower-left bar of a 7-segment display can be minimized to a′b + a′c′ (writing primes for the complements) by an appropriate choice of circuit outputs for dcba = 1010…1111.
Write-only registers, as frequently found in older hardware, are often a consequence of don't-care optimizations in the trade-off between functionality and the number of necessary logic gates.[18] Don't-care states can also occur in encoding schemes and communication protocols.[nb 1]

"Don't care" may also refer to an unknown value in a multi-valued logic system, in which case it may also be called an X value or don't know.[19] In the Verilog hardware description language such values are denoted by the letter "X". In the VHDL hardware description language such values are denoted (in the standard logic package) by the letter "X" (forced unknown) or the letter "W" (weak unknown).[20] An X value does not exist in hardware. In simulation, an X value can result from two or more sources driving a signal simultaneously, or from the stable output of a flip-flop not having been reached. In synthesized hardware, however, the actual value of such a signal will be either 0 or 1, but will not be determinable from the circuit's inputs.[20]

Further considerations are needed for logic circuits that involve some feedback, that is, circuits that depend on the previous output(s) of the circuit as well as its current external inputs. Such circuits can be represented by a state machine. It is sometimes possible that some states that are nominally can't-happen conditions can accidentally be generated during power-up of the circuit or by random interference (like cosmic radiation, electrical noise or heat). This is also called forbidden input.[21] In some cases, there is no combination of inputs that can move the state machine back into a normal operational state: the machine remains stuck in the power-up state, or can be moved only between other can't-happen states in a walled garden of states. This is also called a hardware lockup or soft error.
Such states, while nominally can't-happen, are not don't-care, and designers take steps either to ensure that they really are made can't-happen, or else, if they do happen, that they create a don't-care alarm indicating an emergency state[21] for error detection, or that they are transitory and lead to a normal operational state.[22][23][24]
https://en.wikipedia.org/wiki/Don%27t-care_term
In C and C++, a sequence point defines any point in a computer program's execution at which it is guaranteed that all side effects of previous evaluations will have been performed, and no side effects from subsequent evaluations have yet been performed. They are a core concept for determining the validity of, and if valid the possible results of, expressions. Adding more sequence points is sometimes necessary to make an expression defined and to ensure a single valid order of evaluation. With C11 and C++11, usage of the term sequence point was replaced by sequencing. There are three possibilities.[1][2][3]

The execution of unsequenced evaluations can overlap, leading to potentially catastrophic undefined behavior if they share state. This situation can arise in parallel computations, causing race conditions, but undefined behavior can also result in single-threaded situations. For example, a[i] = i++; (where a is an array and i is an integer) has undefined behavior.

Consider two functions f() and g(). In C and C++, the + operator is not associated with a sequence point, and therefore in the expression f()+g() it is possible that either f() or g() will be executed first. The comma operator introduces a sequence point, and therefore in the code f(),g() the order of evaluation is defined: first f() is called, and then g() is called.

Sequence points also come into play when the same variable is modified more than once within a single expression. An often-cited example is the C expression i=i++, which apparently both assigns i its previous value and increments i. The final value of i is ambiguous, because, depending on the order of expression evaluation, the increment may occur before, after, or interleaved with the assignment. The definition of a particular language might specify one of the possible behaviors or simply say the behavior is undefined.
In C and C++, evaluating such an expression yields undefined behavior.[4] Other languages, such as C#, define the precedence of the assignment and increment operators in such a way that the result of the expression i=i++ is guaranteed.

In C[5] and C++,[6] sequence points occur in the following places. (In C++, overloaded operators act like functions, and thus operators that have been overloaded introduce sequence points in the same way as function calls.)

Partially because of the introduction of language support for threads, C11 and C++11 introduced new terminology for evaluation order. An operation may be "sequenced before" another, or the two can be "indeterminately sequenced" (one must complete before the other) or "unsequenced" (the operations in each expression may be interleaved).

C++17 restricted several aspects of evaluation order. The new expression will always perform the memory allocation before evaluating the constructor arguments. The operators <<, >>, ., .*, ->*, and the subscript and function-call operators are guaranteed to be evaluated left to right (whether they are overloaded or not). For example, code chaining calls to a, b and c with these operators is newly guaranteed to call a, b and c in that order. The right-hand side of any assignment-like operator is evaluated before the left-hand side, so that b() *= a(); is guaranteed to evaluate a first. Finally, although the order in which function parameters are evaluated remains implementation-defined, the compiler is no longer allowed to interleave sub-expressions across multiple parameters.[9]
https://en.wikipedia.org/wiki/Sequence_point
In computer programming, a program exhibits undefined behavior (UB) when it contains, or is executing, code for which its programming language specification does not mandate any specific requirements.[1] This is different from unspecified behavior, for which the language specification does not prescribe a result, and from implementation-defined behavior, which defers to the documentation of another component of the platform (such as the ABI or the translator documentation). In the C programming community, undefined behavior may be humorously referred to as "nasal demons", after a comp.std.c post that explained undefined behavior as allowing the compiler to do anything it chooses, even "to make demons fly out of your nose".[2]

Some programming languages allow a program to operate differently, or even have a different control flow, from the source code, as long as it exhibits the same user-visible side effects, provided undefined behavior never happens during program execution. Undefined behavior is, in effect, the name of a list of conditions that the program must not meet.

In the early versions of C, undefined behavior's primary advantage was the production of performant compilers for a wide variety of machines: a specific construct could be mapped to a machine-specific feature, and the compiler did not have to generate additional code for the runtime to adapt the side effects to match the semantics imposed by the language. The program source code was written with prior knowledge of the specific compiler and of the platforms that it would support. However, progressive standardization of platforms has made this less of an advantage, especially in newer versions of C. Now, the cases of undefined behavior typically represent unambiguous bugs in the code, for example indexing an array outside of its bounds. By definition, the runtime can assume that undefined behavior never happens; therefore, some invalid conditions do not need to be checked against.
For a compiler, this also means that various program transformations become valid, or their proofs of correctness are simplified; this allows for various kinds of optimizations whose correctness depends on the assumption that the program state never meets any such condition. The compiler can also remove explicit checks that may have been in the source code, without notifying the programmer; for example, detecting undefined behavior by testing whether it happened is, by definition, not guaranteed to work. This makes it hard or impossible to program a portable fail-safe option (non-portable solutions are possible for some constructs).

Current compiler development usually evaluates and compares compiler performance with benchmarks designed around micro-optimizations, even on platforms that are mostly used in the general-purpose desktop and laptop market (such as amd64). Therefore, undefined behavior provides ample room for compiler performance improvement, as a specific source code statement is allowed to be mapped to anything at runtime.

For C and C++, the compiler is allowed to give a compile-time diagnostic in these cases, but is not required to: the implementation will be considered correct whatever it does in such cases, analogous to don't-care terms in digital logic. It is the responsibility of the programmer to write code that never invokes undefined behavior, although compiler implementations are allowed to issue diagnostics when this happens. Compilers nowadays have flags that enable such diagnostics; for example, -fsanitize=undefined enables the "undefined behavior sanitizer" (UBSan) in gcc 4.9[3] and in clang. However, this flag is not the default, and enabling it is a choice of the person who builds the code.

Under some circumstances there can be specific restrictions on undefined behavior.
For example, the instruction set specifications of a CPU might leave the behavior of some forms of an instruction undefined, but if the CPU supports memory protection then the specification will probably include a blanket rule stating that no user-accessible instruction may cause a hole in the operating system's security; so an actual CPU would be permitted to corrupt user registers in response to such an instruction, but would not be allowed to, for example, switch into supervisor mode.

The runtime platform can also provide some restrictions or guarantees on undefined behavior, if the toolchain or the runtime explicitly documents that specific constructs found in the source code are mapped to specific well-defined mechanisms available at runtime. For example, an interpreter may document a particular behavior for some operations that are undefined in the language specification, while other interpreters or compilers for the same language may not. A compiler produces executable code for a specific ABI, filling the semantic gap in ways that depend on the compiler version: the documentation for that compiler version and the ABI specification can provide restrictions on undefined behavior. Relying on these implementation details makes the software non-portable, but portability may not be a concern if the software is not supposed to be used outside of a specific runtime.

Undefined behavior can result in a program crash or even in failures that are harder to detect and that make the program look like it is working normally, such as silent loss of data and production of incorrect results. Documenting an operation as undefined behavior allows compilers to assume that this operation will never happen in a conforming program. This gives the compiler more information about the code, and this information can lead to more optimization opportunities.
An example for the C language: the value of x cannot be negative and, given that signed integer overflow is undefined behavior in C, the compiler can assume that value < 2147483600 will always be false. Thus the if statement, including the call to the function bar, can be ignored by the compiler, since the test expression in the if has no side effects and its condition will never be satisfied. The code is therefore semantically equivalent to the same function with the if statement removed. Had the compiler been forced to assume that signed integer overflow has wraparound behavior, then the transformation above would not have been legal.

Such optimizations become hard to spot by humans when the code is more complex and other optimizations, like inlining, take place. For example, another function may call the above function. The compiler is then free to optimize away a while-loop in the caller by applying value range analysis: by inspecting foo(), it knows that the initial value pointed to by ptrx cannot possibly exceed 47 (as any more would trigger undefined behavior in foo()); therefore, the initial check of *ptrx > 60 will always be false in a conforming program. Going further, since the result z is now never used and foo() has no side effects, the compiler can optimize run_tasks() to be an empty function that returns immediately. The disappearance of the while-loop may be especially surprising if foo() is defined in a separately compiled object file.

Another benefit of allowing signed integer overflow to be undefined is that it makes it possible to store and manipulate a variable's value in a processor register that is larger than the size of the variable in the source code. For example, if the type of a variable as specified in the source code is narrower than the native register width (such as int on a 64-bit machine, a common scenario), then the compiler can safely use a signed 64-bit integer for the variable in the machine code it produces, without changing the defined behavior of the code.
If a program depended on the behavior of 32-bit integer overflow, then a compiler would have to insert additional logic when compiling for a 64-bit machine, because the overflow behavior of most machine instructions depends on the register width.[4]

Undefined behavior also allows more compile-time checks by both compilers and static program analysis.[citation needed]

The C and C++ standards have several forms of undefined behavior throughout, which offer increased liberty in compiler implementations and compile-time checks at the expense of undefined run-time behavior if present. In particular, the ISO standard for C has an appendix listing common sources of undefined behavior.[5] Moreover, compilers are not required to diagnose code that relies on undefined behavior. Hence, it is common for programmers, even experienced ones, to rely on undefined behavior, either by mistake or simply because they are not well-versed in the rules of the language, which can span hundreds of pages. This can result in bugs that are exposed when a different compiler, or different settings, are used. Testing or fuzzing with dynamic undefined-behavior checks enabled, e.g. the Clang sanitizers, can help to catch undefined behavior not diagnosed by the compiler or static analyzers.[6]

Undefined behavior can lead to security vulnerabilities in software. For example, a number of buffer overflows and other security vulnerabilities in the major web browsers have been due to undefined behavior.
When GCC's developers changed their compiler in 2008 such that it omitted certain overflow checks that relied on undefined behavior, CERT issued a warning against the newer versions of the compiler.[7] Linux Weekly News pointed out that the same behavior was observed in PathScale C, Microsoft Visual C++ 2005 and several other compilers;[8] the warning was later amended to warn about various compilers.[9]

The major forms of undefined behavior in C can be broadly classified as:[10] spatial memory-safety violations, temporal memory-safety violations, integer overflow, strict aliasing violations, alignment violations, unsequenced modifications, data races, and loops that neither perform I/O nor terminate. In C, the use of any automatic variable before it has been initialized yields undefined behavior, as does integer division by zero, signed integer overflow, indexing an array outside of its defined bounds (see buffer overflow), or null pointer dereferencing. In general, any instance of undefined behavior leaves the abstract execution machine in an unknown state and causes the behavior of the entire program to be undefined.
Attempting to modify a string literal causes undefined behavior.[11] Integer division by zero results in undefined behavior.[12] Certain pointer operations may result in undefined behavior.[13] In C and C++, the relational comparison of pointers to objects (for less-than or greater-than comparison) is only strictly defined if the pointers point to members of the same object or elements of the same array.[14] Reaching the end of a value-returning function (other than main()) without a return statement results in undefined behavior if the value of the function call is used by the caller.[15]

Modifying an object between two sequence points more than once produces undefined behavior.[16] There are considerable changes in what causes undefined behavior in relation to sequence points as of C++11.[17] Modern compilers can emit warnings when they encounter multiple unsequenced modifications to the same object, in both C and C++.[18][19] When modifying an object between two sequence points, reading the value of the object for any purpose other than determining the value to be stored is also undefined behavior.[20]

In C and C++, bitwise-shifting a value by a number of bits which is either negative or greater than or equal to the total number of bits in the value results in undefined behavior. The safest way (regardless of compiler vendor) is to always keep the number of bits to shift (the right operand of the << and >> bitwise operators) within the range [0, sizeof(value) * CHAR_BIT − 1], where value is the left operand.

While undefined behavior is never present in safe Rust, it is possible to invoke undefined behavior in unsafe Rust in many ways.[21] For example, creating an invalid reference (a reference which does not refer to a valid value) invokes immediate undefined behavior; it is not necessary to use the reference, as undefined behavior is invoked merely by the creation of such a reference.
https://en.wikipedia.org/wiki/Undefined_behaviour
In computer programming, unspecified behavior is behavior that may vary between different implementations of a programming language.[clarification needed] A program can be said to contain unspecified behavior when its source code may produce an executable that exhibits different behavior when compiled on a different compiler, or on the same compiler with different settings, or indeed in different parts of the same executable. While the respective language standards or specifications may impose a range of possible behaviors, the exact behavior depends on the implementation and may not be completely determined upon examination of the program's source code.[1] Unspecified behavior will often not manifest itself in the resulting program's external behavior, but it may sometimes lead to differing outputs or results, potentially causing portability problems.

To enable compilers to produce optimal code for their respective target platforms, programming language standards do not always impose a specific behavior for a given source code construct.[2] Failing to explicitly define the exact behavior of every possible program is not considered an error or weakness in the language specification, and doing so would be infeasible.[1] In the C and C++ languages, such non-portable constructs are generally grouped into three categories: implementation-defined, unspecified, and undefined behavior.[3]

The exact definition of unspecified behavior varies.
In C++, it is defined as "behavior, for a well-formed program construct and correct data, that depends on the implementation."[4] The C++ Standard also notes that the range of possible behaviors is usually provided.[4] Unlike implementation-defined behavior, there is no requirement for the implementation to document its behavior.[4] Similarly, the C Standard defines it as behavior for which the standard "provides two or more possibilities and imposes no further requirements on which is chosen in any instance".[5] Unspecified behavior is different from undefined behavior: the latter is typically the result of an erroneous program construct or data, and no requirements are placed on the translation or execution of such constructs.[6]

C and C++ distinguish implementation-defined behavior from unspecified behavior. For implementation-defined behavior, the implementation must choose a particular behavior and document it; an example in C/C++ is the size of the integer data types. The choice of behavior must be consistent with the documented behavior within a given execution of the program.

Many programming languages do not specify the order of evaluation of the sub-expressions of a complete expression. This non-determinism can allow optimal implementations for specific platforms, e.g. to exploit parallelism. If one or more of the sub-expressions has side effects, then the result of evaluating the full expression may differ depending on the order of evaluation of the sub-expressions.[1] For example, given an assignment such as a = f(b) + g(b), where f and g both modify b, the result stored in a may be different depending on whether f(b) or g(b) is evaluated first.[1] In the C and C++ languages, this also applies to function arguments: a program that passes two side-effecting calls as arguments will, for instance, write their two lines of output in an unspecified order.[2] In some other languages, such as Java, the order of evaluation of operands and function arguments is explicitly defined.[7]
https://en.wikipedia.org/wiki/Unspecified_behaviour
In artificial intelligence, with implications for cognitive science, the frame problem describes an issue with using first-order logic to express facts about a robot in the world. Representing the state of a robot with traditional first-order logic requires the use of many axioms that simply state that things in the environment do not change arbitrarily. For example, Hayes describes a "block world" with rules about stacking blocks together. In a first-order logic system, additional axioms are required to make inferences about the environment (for example, that a block cannot change position unless it is physically moved). The frame problem is the problem of finding adequate collections of axioms for a viable description of a robot environment.[1]

John McCarthy and Patrick J. Hayes defined this problem in their 1969 article, Some Philosophical Problems from the Standpoint of Artificial Intelligence. In this paper, and in many that came after, the formal mathematical problem was a starting point for more general discussions of the difficulty of knowledge representation for artificial intelligence, including issues such as how to provide rational default assumptions and how to capture what humans consider common sense in a virtual environment.[2] In philosophy, the frame problem became more broadly construed in connection with the problem of limiting the beliefs that have to be updated in response to actions. In the logical context, actions are typically specified by what they change, with the implicit assumption that everything else (the frame) remains unchanged.

The frame problem occurs even in very simple domains. A scenario with a door, which can be open or closed, and a light, which can be on or off, is statically represented by two propositions, open and on.
If these conditions can change, they are better represented by two predicates, open(t) and on(t), that depend on time; such predicates are called fluents. A domain in which the door is closed and the light off at time 0, and the door opened at time 1, can be directly represented in logic[clarification needed] by formulae in which the first two represent the initial situation and the third represents the effect of executing the action of opening the door at time 1. If such an action had preconditions, such as the door being unlocked, it would have been represented by ¬locked(0) ⟹ open(1). In practice, one would have a predicate executeopen(t) for specifying when the action is executed, and a rule ∀t. executeopen(t) ⟹ open(t+1) for specifying the effects of actions. The article on the situation calculus gives more details.

While the three formulae above are a direct expression in logic of what is known, they do not suffice to correctly draw consequences. The conditions representing the expected situation are consistent with the three formulae above, but they are not the only ones that are consistent with them. The frame problem is that specifying only which conditions are changed by the actions does not entail that all other conditions are not changed. The problem can be solved by adding so-called "frame axioms", which explicitly specify that all conditions not affected by actions do not change while the action is executed.
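The formulae themselves did not survive extraction; based on the surrounding description (door closed and light off at time 0, door opened at time 1), they are presumably of the following form, together with one unintended set of conditions that is equally consistent with them:

```latex
% Initial situation and effect of the action:
\neg \mathrm{open}(0) \qquad \neg \mathrm{on}(0) \qquad \mathrm{open}(1)

% Intended (expected) situation:
\neg \mathrm{open}(0),\ \neg \mathrm{on}(0),\ \mathrm{open}(1),\ \neg \mathrm{on}(1)

% Unintended but consistent situation: the light spontaneously turns on
\neg \mathrm{open}(0),\ \neg \mathrm{on}(0),\ \mathrm{open}(1),\ \mathrm{on}(1)
```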
For example, since the action executed at time 0 is that of opening the door, a frame axiom would state that the status of the light does not change from time 0 to time 1. The frame problem is that one such frame axiom is necessary for every pair of action and condition such that the action does not affect the condition.[clarification needed] In other words, the problem is that of formalizing a dynamical domain without explicitly specifying the frame axioms.

The solution proposed by McCarthy involves assuming that a minimal number of condition changes have occurred; this solution is formalized using the framework of circumscription. The Yale shooting problem, however, shows that this solution is not always correct. Alternative solutions were then proposed, involving predicate completion, fluent occlusion, successor state axioms, etc.; they are explained below. By the end of the 1980s, the frame problem as defined by McCarthy and Hayes was solved[clarification needed]. Even after that, however, the term "frame problem" was still used, in part to refer to the same problem under different settings (e.g., concurrent actions), and in part to refer to the general problem of representing and reasoning about dynamical domains.

The following solutions depict how the frame problem is solved in various formalisms. The formalisms themselves are not presented in full: what is presented are simplified versions that are sufficient to explain the full solution. The fluent-occlusion solution was proposed by Erik Sandewall, who also defined a formal language for the specification of dynamical domains; such a domain can first be expressed in this language and then automatically translated into logic. In this article, only the expression in logic is shown, and only in the simplified language with no action names. The rationale of this solution is to represent not only the value of conditions over time, but also whether they can be affected by the last executed action.
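The frame axiom referred to above (lost in extraction) states that the light's status is the same at times 0 and 1:

```latex
\mathrm{on}(0) \iff \mathrm{on}(1)
```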
The latter is represented by another condition, called occlusion. A condition is said to be occluded at a given time point if an action has just been executed that makes the condition true or false as an effect. Occlusion can be viewed as "permission to change": if a condition is occluded, it is relieved from obeying the constraint of inertia. In the simplified example of the door and the light, occlusion can be formalized by two predicates occludeopen(t) and occludeon(t):

¬open(0)
¬on(0)
open(1) ∧ occludeopen(1)
∀t. ¬occludeopen(t) ⟹ (open(t−1) ⟺ open(t))
∀t. ¬occludeon(t) ⟹ (on(t−1) ⟺ on(t))

The rationale is that a condition can change value only if the corresponding occlusion predicate is true at the next time point. In turn, the occlusion predicate is true only when an action affecting the condition is executed. In general, every action making a condition true or false also makes the corresponding occlusion predicate true. In this case, occludeopen(1) is true, making the antecedent of the fourth formula above false for t = 1; therefore, the constraint open(t−1) ⟺ open(t) does not hold for t = 1. Therefore, open can change value, which is also what is enforced by the third formula. In order for this solution to work, occlusion predicates have to be true only when they are made true as an effect of an action. This can be achieved either by circumscription or by predicate completion. It is worth noticing that occlusion does not necessarily imply a change: for example, executing the action of opening the door when it was already open (in the formalization above) makes the predicate occludeopen true and makes open true; however, open has not changed value, as it was already true.
This encoding is similar to the fluent occlusion solution, but the additional predicates denote change, not permission to change. For example, changeopen(t) represents the fact that the predicate open changes from time t to t+1. As a result, a predicate changes if and only if the corresponding change predicate is true. An action results in a change if and only if it makes true a condition that was previously false, or vice versa. The third formula is a different way of saying that opening the door causes the door to be opened: precisely, it states that opening the door changes the state of the door if the door had been previously closed. The last two conditions state that a condition changes value at time t if and only if the corresponding change predicate is true at time t. To complete the solution, the time points at which the change predicates are true have to be as few as possible, and this can be done by applying predicate completion to the rules specifying the effects of actions.

The value of a condition after the execution of an action can be determined by the fact that the condition is true if and only if either the action made it true, or it was previously true and the action did not make it false. A successor state axiom is a formalization in logic of these two facts. For example, if opendoor(t) and closedoor(t) are two conditions used to denote that the action executed at time t was to open or close the door, respectively, the running example is encoded by the successor state axiom

∀t. open(t+1) ⟺ opendoor(t) ∨ (open(t) ∧ ¬closedoor(t))

This solution is centered on the value of conditions, rather than on the effects of actions. In other words, there is an axiom for every condition, rather than a formula for every action. Preconditions of actions (which are not present in this example) are formalized by other formulae.
The successor state axioms are used in the variant of the situation calculus proposed by Ray Reiter. The fluent calculus is a variant of the situation calculus. It solves the frame problem by using first-order logic terms, rather than predicates, to represent the states. Converting predicates into terms in first-order logic is called reification; the fluent calculus can be seen as a logic in which predicates representing the state of conditions are reified. The difference between a predicate and a term in first-order logic is that a term is a representation of an object (possibly a complex object composed of other objects), while a predicate represents a condition that can be true or false when evaluated over a given set of terms. In the fluent calculus, each possible state is represented by a term obtained by composing other terms, each one representing a condition that is true in that state. For example, the state in which the door is open and the light is on is represented by the term open ∘ on. It is important to notice that a term is not true or false by itself, as it is an object and not a condition. In other words, the term open ∘ on represents a possible state; it does not by itself mean that this is the current state. A separate condition can be stated to specify that this is actually the state at a given time, e.g., state(open ∘ on, 10) means that this is the state at time 10. The solution to the frame problem given in the fluent calculus is to specify the effects of actions by stating how a term representing the state changes when the action is executed.
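The appeal of reified states can be sketched in a few lines. Below, states are modeled as frozensets of fluent names, with set union standing in for the ∘ composition (an illustrative analogy, not the fluent calculus proper):

```python
# A state is reified as a term: here, a frozenset of the fluents that hold.
# The "o" composition becomes set union, and the axiom that no condition
# appears twice in a state is automatic for sets.
def do_open(state):
    # Opening the door adds "open" to the state term; everything else
    # (e.g. "on") is carried over untouched -- no frame axioms needed.
    return state | {"open"}

def do_close(state):
    # Closing removes "open" from the state term.
    return state - {"open"}

s0 = frozenset({"on"})   # door closed, light on
s1 = do_open(s0)         # door open, light still on
```

Because the whole state is a single object, every fluent not mentioned by an action is carried over for free, which is exactly the point of the fluent calculus encoding.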
For example, the action of opening the door at time 0 is represented by a formula stating that the state at time 1 is the state at time 0 with the term open composed into it. The action of closing the door, which makes a condition false instead of true, is represented in a slightly different way, by removing the term open from the state. These formulae work provided that suitable axioms are given about state and ∘; e.g., a term containing the same condition twice is not a valid state (for example, state(open ∘ s ∘ open, t) is always false for every s and t).

The event calculus uses terms for representing fluents, like the fluent calculus, but also has one or more axioms constraining the value of fluents, like the successor state axioms. There are many variants of the event calculus, but one of the simplest and most useful employs a single axiom to represent the law of inertia:

holdsAt(F, T2) ⟸ happens(E1, T1) ∧ initiates(E1, F, T1) ∧ T1 < T2 ∧ ¬∃E2, T [happens(E2, T) ∧ terminates(E2, F, T) ∧ T1 ≤ T < T2]

The axiom states that a fluent F holds at a time T2 if an event E1 happens and initiates F at an earlier time T1, and there is no event E2 that happens and terminates F after or at the same time as T1 and before T2. To apply the event calculus to a particular problem domain, it is necessary to define the initiates and terminates predicates for that domain; for example, that the action of opening the door initiates the fluent open, and that the action of closing it terminates open. To apply the event calculus to a particular problem in the domain, it is necessary to specify the events that happen in the context of the problem.
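The inertia axiom lends itself to a direct, if naive, interpreter. The sketch below (hypothetical helper names, propositional fluents only) checks whether a fluent holds at a time given a narrative of initiating and terminating events:

```python
# A minimal propositional event-calculus check, illustrative only.
# Events are triples (time, "initiate" | "terminate", fluent).
def holds_at(narrative, fluent, t2):
    # F holds at T2 iff some event initiates F at T1 < T2 and no event
    # terminates F at any time T with T1 <= T < T2 (law of inertia).
    for (t1, kind, f) in narrative:
        if kind == "initiate" and f == fluent and t1 < t2:
            if not any(k == "terminate" and g == fluent and t1 <= t < t2
                       for (t, k, g) in narrative):
                return True
    return False

narrative = [(1, "initiate", "open"), (3, "terminate", "open")]
```

Here `holds_at(narrative, "open", 2)` succeeds, while the same query at time 5 fails, because the terminating event at time 3 blocks inertia.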
For example, given a narrative of events that happen at particular times, a problem such as which fluents hold at time 5? is posed as a goal, and solving the goal yields the fluents that hold at that time. The event calculus solves the frame problem, eliminating undesired solutions, by using a non-monotonic logic, such as first-order logic with circumscription,[3] or by treating the event calculus as a logic program using negation as failure. The frame problem can be thought of as the problem of formalizing the principle that, by default, "everything is presumed to remain in the state in which it is" (Leibniz, "An Introduction to a Secret Encyclopædia", c. 1679). This default, sometimes called the commonsense law of inertia, was expressed by Raymond Reiter in default logic:

R(x, s) : R(x, do(a, s)) / R(x, do(a, s))

(if R(x) is true in situation s, and it can be assumed[4] that R(x) remains true after executing action a, then we can conclude that R(x) remains true). Steve Hanks and Drew McDermott argued, on the basis of their Yale shooting example, that this solution to the frame problem is unsatisfactory. Hudson Turner showed, however, that it works correctly in the presence of appropriate additional postulates. The counterpart of the default logic solution in the language of answer set programming is a rule with strong negation:

r(X, T+1) ← r(X, T), not ¬r(X, T+1)

(if r(X) is true at time T, and it can be assumed that r(X) remains true at time T+1, then we can conclude that r(X) remains true). Separation logic is a formalism for reasoning about computer programs using pre/post specifications of the form {precondition} code {postcondition}.
Separation logic is an extension of Hoare logic oriented to reasoning about mutable data structures in computer memory and other dynamic resources, and it has a special connective ∗, pronounced "and separately", to support independent reasoning about disjoint memory regions.[5][6] Separation logic employs a tight interpretation of pre/post specs, which say that the code can only access memory locations guaranteed to exist by the precondition.[7] This leads to the soundness of the most important inference rule of the logic, the frame rule:

{precondition} code {postcondition}
―――――――――――――――――――――――――――――――――――――――――――
{precondition ∗ frame} code {postcondition ∗ frame}

The frame rule allows descriptions of arbitrary memory outside the footprint (the memory accessed) of the code to be added to a specification: this enables the initial specification to concentrate only on the footprint. For example, the inference

{list(x)} code {sortedlist(x)}
―――――――――――――――――――――――――――――――――――――――――――
{list(x) ∗ sortedlist(y)} code {sortedlist(x) ∗ sortedlist(y)}

captures that code which sorts a list x does not unsort a separate list y, and it does this without mentioning y at all in the initial spec above the line.
Automation of the frame rule has led to significant increases in the scalability of automated reasoning techniques for code,[8] eventually deployed industrially to codebases with tens of millions of lines.[9] There appears to be some similarity between the separation logic solution to the frame problem and that of the fluent calculus mentioned above.

Action description languages elude the frame problem rather than solving it. An action description language is a formal language with a syntax specific to describing situations and actions. For example, that the action opendoor makes the door open, if it is not locked, is expressed by:

opendoor causes open if ¬locked

The semantics of an action description language depends on what the language can express (concurrent actions, delayed effects, etc.) and is usually based on transition systems. Since domains are expressed in these languages rather than directly in logic, the frame problem only arises when a specification given in an action description language is to be translated into logic. Typically, however, a translation is given from these languages to answer set programming rather than first-order logic.
https://en.wikipedia.org/wiki/Frame_problem
In mathematics and computing, universal hashing (in a randomized algorithm or data structure) refers to selecting a hash function at random from a family of hash functions with a certain mathematical property (see definition below). This guarantees a low number of collisions in expectation, even if the data is chosen by an adversary. Many universal families are known (for hashing integers, vectors, strings), and their evaluation is often very efficient. Universal hashing has numerous uses in computer science, for example in implementations of hash tables, randomized algorithms, and cryptography. Assume we want to map keys from some universe U into m bins (labelled [m] = {0, …, m−1}). The algorithm will have to handle some data set S ⊆ U of |S| = n keys, which is not known in advance. Usually, the goal of hashing is to obtain a low number of collisions (keys from S that land in the same bin). A deterministic hash function cannot offer any guarantee in an adversarial setting if |U| > m·n, since the adversary may choose S to be precisely the preimage of a bin. This means that all data keys land in the same bin, making hashing useless. Furthermore, a deterministic hash function does not allow for rehashing: sometimes the input data turns out to be bad for the hash function (e.g. there are too many collisions), so one would like to change the hash function. The solution to these problems is to pick a function randomly from a family of hash functions. A family of functions H = {h : U → [m]} is called a universal family if

∀x, y ∈ U, x ≠ y:  |{h ∈ H : h(x) = h(y)}| ≤ |H| / m.
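The definition can be checked mechanically for small families. The sketch below counts, for every key pair, how many functions collide on it and compares the count against |H|/m, using the classic ((ax + b) mod p) mod m construction (discussed later in this article) with tiny parameters as the example family:

```python
# Brute-force check of the universality definition: for every pair
# x != y, at most |H|/m functions in H may collide on it.
def is_universal(H, universe, m):
    return all(sum(1 for h in H if h(x) == h(y)) <= len(H) / m
               for x in universe for y in universe if x < y)

# Example family: ((a*x + b) mod p) mod m with p = 5, m = 2.
p, m = 5, 2
H = [(lambda x, a=a, b=b: ((a * x + b) % p) % m)
     for a in range(1, p) for b in range(p)]
```

Running `is_universal(H, range(p), m)` confirms the bound for this family; dropping the `b` parameter or allowing a = 0 would make the check fail for some pairs.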
In other words, any two different keys of the universe collide with probability at most 1/m when the hash function h is drawn uniformly at random from H. This is exactly the probability of collision we would expect if the hash function assigned truly random hash codes to every key. Sometimes, the definition is relaxed by a constant factor, only requiring collision probability O(1/m) rather than ≤ 1/m. This concept was introduced by Carter and Wegman[1] in 1977, and has found numerous applications in computer science (see, for example,[2]). If we have an upper bound of ε < 1 on the collision probability, we say that we have ε-almost universality. So, for example, a universal family has 1/m-almost universality. Many, but not all, universal families have the following stronger uniform difference property: for all x, y ∈ U with x ≠ y, when h is drawn uniformly at random from H, the difference h(x) − h(y) mod m is uniformly distributed in [m]. Note that the definition of universality is only concerned with whether h(x) − h(y) = 0, which counts collisions; the uniform difference property is stronger. (Similarly, a family is XOR universal if, for all x, y ∈ U with x ≠ y, the value h(x) ⊕ h(y) mod m is uniformly distributed in [m], where ⊕ is the bitwise exclusive-or operation. This is only possible if m is a power of two.) An even stronger condition is pairwise independence: we have this property when, for all x, y ∈ U with x ≠ y, the probability that x, y hash to any given pair of hash values z1, z2 is as if they were perfectly random: P(h(x) = z1 ∧ h(y) = z2) = 1/m². Pairwise independence is sometimes called strong universality. Another property is uniformity.
We say that a family is uniform if all hash values are equally likely: P(h(x) = z) = 1/m for any hash value z. Universality does not imply uniformity; however, strong universality does imply uniformity. Given a family with the uniform difference property, one can produce a pairwise independent, or strongly universal, hash family by adding a uniformly distributed random constant with values in [m] to the hash functions. (Similarly, if m is a power of two, we can achieve pairwise independence from an XOR universal hash family by taking an exclusive or with a uniformly distributed random constant.) Since a shift by a constant is sometimes irrelevant in applications (e.g. hash tables), a careful distinction between the uniform difference property and pairwise independence is sometimes not made.[3] For some applications (such as hash tables), it is important for the least significant bits of the hash values to also be universal. When a family is strongly universal, this is guaranteed: if H is a strongly universal family with m = 2^L, then the family made of the functions h mod 2^{L′} for all h ∈ H is also strongly universal for L′ ≤ L. Unfortunately, the same is not true of (merely) universal families. For example, the family made of the identity function h(x) = x is clearly universal, but the family made of the functions h(x) = x mod 2^{L′} fails to be universal. UMAC and Poly1305-AES and several other message authentication code algorithms are based on universal hashing.[4][5] In such applications, the software chooses a new hash function for every message, based on a unique nonce for that message. Several hash table implementations are based on universal hashing.
In such applications, the software typically chooses a new hash function only after it notices that "too many" keys have collided; until then, the same hash function continues to be used. (Some collision resolution schemes, such as dynamic perfect hashing, pick a new hash function every time there is a collision. Other collision resolution schemes, such as cuckoo hashing and 2-choice hashing, allow a number of collisions before picking a new hash function.) A survey of the fastest known universal and strongly universal hash functions for integers, vectors, and strings is given in.[6] For any fixed set S of n keys, using a universal family guarantees several collision properties; for instance, the expected number of keys in S colliding with any fixed key x is at most n/m. As these guarantees hold for any fixed set S, they hold if the data set is chosen by an adversary. However, the adversary has to make this choice before (or independently of) the algorithm's random choice of a hash function. If the adversary can observe the random choice of the algorithm, randomness serves no purpose, and the situation is the same as for deterministic hashing. Such guarantees are typically used in conjunction with rehashing. For instance, a randomized algorithm may be prepared to handle some O(n) number of collisions. If it observes too many collisions, it chooses another random h from the family and repeats. Universality guarantees that the number of repetitions is a geometric random variable. Since any computer data can be represented as one or more machine words, one generally needs hash functions for three types of domains: machine words ("integers"); fixed-length vectors of machine words; and variable-length vectors ("strings"). This section refers to the case of hashing integers that fit in machine words; thus, operations like multiplication, addition, division, etc. are cheap machine-level instructions.
Let the universe to be hashed be {0, …, |U|−1}. The original proposal of Carter and Wegman[1] was to pick a prime p ≥ |U| and define

h_{a,b}(x) = ((ax + b) mod p) mod m

where a, b are randomly chosen integers modulo p with a ≠ 0. (This is a single iteration of a linear congruential generator.) To see that H = {h_{a,b}} is a universal family, note that h(x) = h(y) only holds when

ax + b ≡ ay + b + i·m (mod p)

for some integer i between 0 and (p−1)/m. Since p ≥ |U|, if x ≠ y their difference x − y is nonzero and has an inverse modulo p. Solving for a yields

a ≡ i·m·(x − y)^{−1} (mod p).

There are p − 1 possible choices for a (since a = 0 is excluded) and, varying i in the allowed range, ⌊(p−1)/m⌋ possible non-zero values for the right-hand side. Thus the collision probability is at most ⌊(p−1)/m⌋/(p−1) ≤ 1/m. Another way to see that H is a universal family is via the notion of statistical distance. Write the difference

h(x) − h(y) ≡ (a(x − y) mod p) (mod m).

Since x − y is nonzero and a is uniformly distributed in {1, …, p−1}, it follows that a(x − y) modulo p is also uniformly distributed in {1, …, p−1}. The distribution of (h(x) − h(y)) mod m is thus almost uniform, up to a difference in probability of ±1/p between the samples. As a result, the statistical distance to a uniform family is O(m/p), which becomes negligible when p ≫ m.
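The counting argument can be verified exhaustively for small parameters. This sketch tallies, for one fixed pair x ≠ y, the number of (a, b) pairs of the family that collide on it, and checks the result against the bound (p−1)·⌊(p−1)/m⌋ from the argument above, which corresponds to collision probability at most 1/m:

```python
# Exact collision count for one fixed pair under the (a*x + b) family,
# compared with the bound derived from counting choices of a and i.
p, m = 17, 4
x, y = 3, 11

collisions = sum(1 for a in range(1, p) for b in range(p)
                 if ((a * x + b) % p) % m == ((a * y + b) % p) % m)

bound = (p - 1) * ((p - 1) // m)   # (p-1) * floor((p-1)/m)
```

The total number of functions is (p−1)·p, so `collisions * m <= (p-1) * p` is exactly the statement that the collision probability is at most 1/m.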
The family of simpler hash functions

h_a(x) = (ax mod p) mod m

is only approximately universal: Pr{h_a(x) = h_a(y)} ≤ 2/m for all x ≠ y.[1] Moreover, this analysis is nearly tight; Carter and Wegman[1] show that Pr{h_a(1) = h_a(m+1)} ≥ 2/(m+1) whenever (p − 1) mod m = 1. The state of the art for hashing integers is the multiply-shift scheme described by Dietzfelbinger et al. in 1997.[8] By avoiding modular arithmetic, this method is much easier to implement and also runs significantly faster in practice (usually by at least a factor of four[9]). The scheme assumes the number of bins is a power of two, m = 2^M. Let w be the number of bits in a machine word. Then the hash functions are parametrised over odd positive integers a < 2^w (that fit in a word of w bits). To evaluate h_a(x), multiply x by a modulo 2^w and then keep the high-order M bits as the hash code. In mathematical notation, this is

h_a(x) = (a·x mod 2^w) div 2^{w−M}.

This scheme does not satisfy the uniform difference property and is only 2/m-almost-universal: for any x ≠ y, Pr{h_a(x) = h_a(y)} ≤ 2/m. To understand the behavior of the hash function, notice that if ax mod 2^w and ay mod 2^w have the same highest-order M bits, then a(x − y) mod 2^w has either all 1's or all 0's as its highest-order M bits (depending on whether ax mod 2^w or ay mod 2^w is larger). Assume that the least significant set bit of x − y appears at position w − c.
Since a is a random odd integer and odd integers have inverses in the ring Z_{2^w}, it follows that a(x − y) mod 2^w will be uniformly distributed among the w-bit integers with the least significant set bit at position w − c. If c > M, the probability that the highest-order M bits are all 0's or all 1's is therefore at most 2/2^M = 2/m. On the other hand, if c < M, the highest-order M bits of a(x − y) mod 2^w contain both 0's and 1's, so it is certain that h(x) ≠ h(y). Finally, if c = M, then bit w − M of a(x − y) mod 2^w is 1, and h_a(x) = h_a(y) if and only if bits w − 1, …, w − M + 1 are also 1, which happens with probability 1/2^{M−1} = 2/m. This analysis is tight, as can be shown with the example x = 2^{w−M−2} and y = 3x. To obtain a truly 'universal' hash function, one can use the multiply-add-shift scheme that picks higher-order bits:

h_{a,b}(x) = ((ax + b) mod 2^{2w}) div 2^{2w−M}

where a is a random positive integer with a < 2^{2w} and b is a random non-negative integer with b < 2^{2w}. This requires doing arithmetic on 2w-bit unsigned integers. This version of multiply-shift is due to Dietzfelbinger, and was later analyzed more precisely by Woelfel.[10] This section is concerned with hashing a fixed-length vector of machine words. Interpret the input as a vector x̄ = (x_0, …, x_{k−1}) of k machine words (integers of w bits each).
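Returning to the scalar case for a moment, the multiply-shift evaluation described above is essentially a one-liner; in the sketch below, Python's unbounded integers make the mod-2^w wraparound explicit, where native unsigned overflow would do the job in C:

```python
import random

w, M = 64, 10            # word size and log2 of the number of bins

def multiply_shift(a, x):
    # Multiply by the odd constant a modulo 2^w, then keep the top M bits.
    return ((a * x) & ((1 << w) - 1)) >> (w - M)

a = random.randrange(1, 1 << w, 2)    # a random odd multiplier below 2^w
bucket = multiply_shift(a, 123456789)
```

The result is always in the range [0, 2^M), so it can index a table of m = 2^M bins directly, with no modulo operation anywhere.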
If H is a universal family with the uniform difference property, the following family (dating back to Carter and Wegman[1]) also has the uniform difference property (and hence is universal): draw one function h_i from H independently for each component, and set

h(x̄) = (Σ_{i=0}^{k−1} h_i(x_i)) mod m.

If m is a power of two, one may replace summation by exclusive or.[11] In practice, if double-precision arithmetic is available, this is instantiated with the multiply-shift family of hash functions.[12] Initialize the hash function with a vector ā = (a_0, …, a_{k−1}) of random odd integers of 2w bits each. Then, if the number of bins is m = 2^M for M ≤ w:

h_ā(x̄) = ((Σ_{i=0}^{k−1} a_i·x_i) mod 2^{2w}) div 2^{2w−M}.

It is possible to halve the number of multiplications, which roughly translates to a two-fold speed-up in practice.[11] Initialize the hash function with a vector ā = (a_0, …, a_{k−1}) of random odd integers of 2w bits each; the resulting hash family, which multiplies the components together in pairs, is universal.[13] If double-precision operations are not available, one can interpret the input as a vector of half-words (w/2-bit integers). The algorithm will then use ⌈k/2⌉ multiplications, where k was the number of half-words in the vector. Thus, the algorithm runs at a "rate" of one multiplication per word of input. The same scheme can also be used for hashing integers, by interpreting their bits as vectors of bytes. In this variant, the vector technique is known as tabulation hashing and it provides a practical alternative to multiplication-based universal hashing schemes.[14] Strong universality at high speed is also possible.[15] Initialize the hash function with a vector ā = (a_0, …, a_k) of random integers of 2w bits each; a suitable combination of these with the input words (see[15]) is strongly universal on w bits.
Experimentally, it was found to run at 0.2 CPU cycles per byte on recent Intel processors for w = 32. This section refers to hashing a variable-sized vector of machine words. If the length of the string can be bounded by a small number, it is best to use the vector solution from above (conceptually padding the vector with zeros up to the upper bound). The space required is the maximal length of the string, but the time to evaluate h(s) is just the length of s. As long as zeroes are forbidden in the string, the zero-padding can be ignored when evaluating the hash function without affecting universality.[11] Note that if zeroes are allowed in the string, then it might be best to append a fictitious non-zero character (e.g., 1) to all strings prior to padding: this will ensure that universality is not affected.[15] Now assume we want to hash x̄ = (x_0, …, x_ℓ), where a good bound on ℓ is not known a priori. A universal family proposed by[12] treats the string x̄ as the coefficients of a polynomial modulo a large prime. If x_i ∈ [u], let p ≥ max{u, m} be a prime, pick a uniformly at random from [p], and define

h_a(x̄) = h_int((x_0 + x_1·a + ⋯ + x_ℓ·a^ℓ) mod p)

where h_int is a universal hash function mapping integers to [m]. Using properties of modular arithmetic, the above can be computed without producing large numbers for large strings by reducing modulo p after every multiply-add step (Horner's rule).[16] This Rabin–Karp rolling hash is based on a linear congruential generator.[17] The above algorithm is also known as a multiplicative hash function.[18] In practice, the mod operator and the parameter p can be avoided altogether by simply allowing the integer to overflow, because this is equivalent to mod (Max-Int-Value + 1) in many programming languages. The table below shows values chosen to initialize h and a for some of the popular implementations.
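The mod-at-each-step evaluation keeps every intermediate value below p; a minimal sketch (character codes as coefficients, with illustrative parameters):

```python
# Polynomial string hashing evaluated by Horner's rule: one multiply-add
# and one reduction mod p per character, so intermediates stay below p^2.
def poly_hash(s, a, p):
    h = 0
    for c in s:
        h = (h * a + ord(c)) % p
    return h

v = poly_hash("hello", a=31, p=1_000_000_007)
```

For the universality guarantee, a would be drawn at random from [p] per the construction above, and the result would then be fed through a universal integer hash; fixed constants like a = 31 (familiar from several standard libraries) trade that guarantee for reproducibility.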
Consider two strings x̄, ȳ and let ℓ be the length of the longer one; for the analysis, the shorter string is conceptually padded with zeros up to length ℓ. A collision before applying h_int implies that a is a root of the polynomial with coefficients x̄ − ȳ. This polynomial has at most ℓ roots modulo p, so the collision probability is at most ℓ/p. The probability of collision through the random h_int brings the total collision probability to 1/m + ℓ/p. Thus, if the prime p is sufficiently large compared to the length of the strings hashed, the family is very close to universal (in statistical distance). Other universal families of hash functions used to hash unknown-length strings to fixed-length hash values include the Rabin fingerprint and the Buzhash. To mitigate the computational penalty of modular arithmetic, three tricks are used in practice:[11]
https://en.wikipedia.org/wiki/Universal_hashing
The min-entropy, in information theory, is the smallest of the Rényi family of entropies, corresponding to the most conservative way of measuring the unpredictability of a set of outcomes, as the negative logarithm of the probability of the most likely outcome. The various Rényi entropies are all equal for a uniform distribution, but measure the unpredictability of a nonuniform distribution in different ways. The min-entropy is never greater than the ordinary or Shannon entropy (which measures the average unpredictability of the outcomes), and that in turn is never greater than the Hartley or max-entropy, defined as the logarithm of the number of outcomes with nonzero probability. As with the classical Shannon entropy and its quantum generalization, the von Neumann entropy, one can define a conditional version of min-entropy. The conditional quantum min-entropy is a one-shot, or conservative, analog of the conditional quantum entropy. To interpret a conditional information measure, suppose Alice and Bob share a bipartite quantum state ρ_AB. Alice has access to system A and Bob to system B. The conditional entropy measures the average uncertainty Bob has about Alice's state upon sampling from his own system. The min-entropy can be interpreted as the distance of a state from a maximally entangled state. This concept is useful in quantum cryptography, in the context of privacy amplification (see, for example,[1]).
If P = (p_1, …, p_n) is a classical finite probability distribution, its min-entropy can be defined as[2]

H_min(P) = log(1/P_max),  where P_max ≡ max_i p_i.

One way to justify the name of the quantity is to compare it with the more standard definition of entropy, which reads H(P) = Σ_i p_i log(1/p_i) and can thus be written concisely as the expectation value of log(1/p_i) over the distribution. If instead of taking the expectation value of this quantity we take its minimum value, we get precisely the above definition of H_min(P). From an operational perspective, the min-entropy equals the negative logarithm of the probability of successfully guessing the outcome of a random draw from P. This is because it is optimal to guess the element with the largest probability, and the chance of success equals the probability of that element. A natural way to generalize min-entropy from classical to quantum states is to leverage the simple observation that quantum states define classical probability distributions when measured in some basis. There is, however, the added difficulty that a single quantum state can result in infinitely many possible probability distributions, depending on how it is measured. A natural path is then, given a quantum state ρ, to still define H_min(ρ) as log(1/P_max), but this time defining P_max as the maximum probability that can be obtained by measuring ρ, maximizing over all possible projective measurements.
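The classical definition is a one-liner. The sketch below uses base-2 logarithms, so the result is in bits:

```python
import math

def min_entropy(p):
    # H_min = log2(1 / max_i p_i) = -log2 of the most likely outcome.
    return -math.log2(max(p))

uniform = min_entropy([0.25, 0.25, 0.25, 0.25])  # 2.0 bits, equal to Shannon
skewed = min_entropy([0.5, 0.25, 0.25])          # 1.0 bit, below Shannon's 1.5
```

The skewed example illustrates the "most conservative" reading: an optimal guesser succeeds with probability 0.5, so the distribution is only 1 bit unpredictable even though its Shannon entropy is 1.5 bits.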
Using this, one gets the operational definition that the min-entropy of $\rho$ equals the negative logarithm of the probability of successfully guessing the outcome of any measurement of $\rho$. Formally, this leads to the definition
$$H_{\min}(\rho) = \max_\Pi \log \frac{1}{\max_i \operatorname{tr}(\Pi_i \rho)} = -\max_\Pi \log \max_i \operatorname{tr}(\Pi_i \rho),$$
where we maximize over the set of all projective measurements $\Pi = (\Pi_i)_i$; the $\Pi_i$ represent the measurement outcomes in the POVM formalism, and $\operatorname{tr}(\Pi_i \rho)$ is therefore the probability of observing the $i$-th outcome when the measurement is $\Pi$. A more concise way to write the double maximization is to observe that any element of any POVM is a Hermitian operator satisfying $0 \le \Pi \le I$, so we can equivalently maximize directly over these to get
$$H_{\min}(\rho) = -\max_{0 \le \Pi \le I} \log \operatorname{tr}(\Pi \rho).$$
In fact, this maximization can be performed explicitly: the maximum is obtained when $\Pi$ is the projection onto (any of) the largest eigenvalue(s) of $\rho$. We thus get yet another expression for the min-entropy:
$$H_{\min}(\rho) = -\log \|\rho\|_{\rm op},$$
recalling that the operator norm of a Hermitian positive semidefinite operator equals its largest eigenvalue. Let $\rho_{AB}$ be a bipartite density operator on the space $\mathcal{H}_A \otimes \mathcal{H}_B$.
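The closed form $H_{\min}(\rho) = -\log \|\rho\|_{\rm op}$ makes the quantum min-entropy a one-line eigenvalue computation. A minimal NumPy sketch (the states are illustrative examples, not from the text):

```python
import numpy as np

def quantum_min_entropy(rho):
    """H_min(rho) = -log2 ||rho||_op. For a Hermitian PSD density matrix
    the operator norm is the largest eigenvalue."""
    lam_max = np.linalg.eigvalsh(rho)[-1]  # eigvalsh returns eigenvalues in ascending order
    return -np.log2(lam_max)

# A qubit state diagonal in the computational basis: the best projective
# measurement guesses |0> and succeeds with probability 0.8.
rho = np.diag([0.8, 0.2])
print(quantum_min_entropy(rho))          # -log2(0.8) ≈ 0.322

# A pure state has operator norm 1, hence (approximately) zero min-entropy:
psi = np.array([[1.0], [1.0]]) / np.sqrt(2)
print(quantum_min_entropy(psi @ psi.T))  # ≈ 0: a pure state can be guessed perfectly
```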
The min-entropy of $A$ conditioned on $B$ is defined to be
$$H_{\min}(A|B)_\rho \equiv -\inf_{\sigma_B} D_{\max}(\rho_{AB} \,\|\, I_A \otimes \sigma_B),$$
where the infimum ranges over all density operators $\sigma_B$ on the space $\mathcal{H}_B$. The measure $D_{\max}$ is the maximum relative entropy, defined as
$$D_{\max}(\rho\|\sigma) = \inf_\lambda \{\lambda : \rho \le 2^\lambda \sigma\}.$$
The smooth min-entropy is defined in terms of the min-entropy:
$$H_{\min}^\epsilon(A|B)_\rho = \sup_{\rho'} H_{\min}(A|B)_{\rho'},$$
where the supremum ranges over density operators $\rho'_{AB}$ that are $\epsilon$-close to $\rho_{AB}$. This notion of $\epsilon$-closeness is defined in terms of the purified distance
$$P(\rho,\sigma) = \sqrt{1 - F(\rho,\sigma)^2},$$
where $F(\rho,\sigma)$ is the fidelity. These quantities can be seen as generalizations of the von Neumann entropy. Indeed, the von Neumann entropy can be expressed as
$$S(A|B)_\rho = \lim_{\epsilon\to 0} \lim_{n\to\infty} \frac{1}{n} H_{\min}^\epsilon(A^n|B^n)_{\rho^{\otimes n}}.$$
This is called the fully quantum asymptotic equipartition theorem.[3] The smoothed entropies share many interesting properties with the von Neumann entropy. For example, the smooth min-entropy satisfies a data-processing inequality:[4]
$$H_{\min}^\epsilon(A|B)_\rho \ge H_{\min}^\epsilon(A|BC)_\rho.$$
Henceforth, we shall drop the subscript $\rho$ from the min-entropy when it is obvious from the context on what state it is evaluated.
Suppose an agent has access to a quantum system $B$ whose state $\rho_B^x$ depends on some classical variable $X$. Furthermore, suppose that each of its values $x$ is distributed according to some distribution $P_X(x)$. This can be described by the following state over the system $XB$:
$$\rho_{XB} = \sum_x P_X(x)\, |x\rangle\langle x| \otimes \rho_B^x,$$
where $\{|x\rangle\}$ form an orthonormal basis. We would like to know what the agent can learn about the classical variable $x$. Let $p_g(X|B)$ be the probability that the agent guesses $X$ when using an optimal measurement strategy:
$$p_g(X|B) = \sum_x P_X(x) \operatorname{tr}(E_x \rho_B^x),$$
where $E_x$ is the POVM that maximizes this expression. It can be shown[5] that this optimum can be expressed in terms of the min-entropy as
$$p_g(X|B) = 2^{-H_{\min}(X|B)}.$$
If the state $\rho_{XB}$ is a product state, i.e. $\rho_{XB} = \sigma_X \otimes \tau_B$ for some density operators $\sigma_X$ and $\tau_B$, then there is no correlation between the systems $X$ and $B$.
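For a binary classical variable, the optimal guessing probability over all POVMs has a well-known closed form, the Helstrom bound, $p_g = \tfrac{1}{2}\left(1 + \|p_0\rho_0 - p_1\rho_1\|_1\right)$, which gives a concrete way to evaluate $p_g(X|B)$ and hence $H_{\min}(X|B)$ numerically. A minimal NumPy sketch, with an illustrative two-state ensemble chosen for this example (not taken from the text):

```python
import numpy as np

def trace_norm(M):
    # For a Hermitian matrix the trace norm is the sum of |eigenvalues|.
    return np.abs(np.linalg.eigvalsh(M)).sum()

# Two equiprobable signal states of system B: rho0 = |0><0| and
# rho1 = |+><+|, which are not perfectly distinguishable.
rho0 = np.array([[1.0, 0.0], [0.0, 0.0]])
rho1 = np.array([[0.5, 0.5], [0.5, 0.5]])
p0 = p1 = 0.5

# Helstrom bound: the optimal guessing probability for two states.
p_guess = 0.5 * (1 + trace_norm(p0 * rho0 - p1 * rho1))
print(p_guess)            # 1/2 (1 + 1/sqrt(2)) ≈ 0.854

# The conditional min-entropy then follows from p_g = 2^{-H_min(X|B)}:
H_min_XB = -np.log2(p_guess)
print(H_min_XB)           # ≈ 0.228 bits
```

For more than two states the optimum is in general found by the semidefinite program described later in the text.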
In this case, it turns out that $2^{-H_{\min}(X|B)} = \max_x P_X(x)$. Since the conditional min-entropy is never larger than the conditional von Neumann entropy, it follows that
$$p_g(X|B) \ge 2^{-S(A|B)_\rho}.$$
The maximally entangled state $|\phi^+\rangle$ on a bipartite system $\mathcal{H}_A \otimes \mathcal{H}_B$, with both spaces of dimension $d$, is defined as
$$|\phi^+\rangle_{AB} = \frac{1}{\sqrt{d}} \sum_{x} |x\rangle_A |x\rangle_B,$$
where $\{|x\rangle_A\}$ and $\{|x\rangle_B\}$ form orthonormal bases for the spaces $A$ and $B$, respectively. For a bipartite quantum state $\rho_{AB}$, we define the maximum overlap with the maximally entangled state as
$$q_c(A|B) = d_A \max_{\mathcal{E}} F\left((I_A \otimes \mathcal{E})\rho_{AB},\, |\phi^+\rangle\langle\phi^+|\right)^2,$$
where the maximum is over all CPTP operations $\mathcal{E}$ and $d_A$ is the dimension of subsystem $A$. This is a measure of how correlated the state $\rho_{AB}$ is. It can be shown that $q_c(A|B) = 2^{-H_{\min}(A|B)}$. If the information contained in $A$ is classical, this reduces to the expression above for the guessing probability. The following proof is from a 2008 paper by König, Schaffner, and Renner.[6] It involves the machinery of semidefinite programs.[7] Suppose we are given some bipartite density operator $\rho_{AB}$.
From the definition of the min-entropy, we have
$$H_{\min}(A|B) = -\inf_{\sigma_B} \inf_\lambda \{\lambda \,|\, \rho_{AB} \le 2^\lambda (I_A \otimes \sigma_B)\}.$$
This can be rewritten as
$$-\log \inf_{\sigma_B} \operatorname{Tr}(\sigma_B)$$
subject to the conditions
$$\sigma_B \ge 0, \qquad I_A \otimes \sigma_B \ge \rho_{AB}.$$
We notice that the infimum is taken over compact sets and hence can be replaced by a minimum. This can then be expressed succinctly as a semidefinite program. Consider the primal problem
$$\begin{cases} \text{min:} \ \operatorname{Tr}(\sigma_B) \\ \text{subject to:} \ I_A \otimes \sigma_B \ge \rho_{AB} \\ \phantom{\text{subject to:}} \ \sigma_B \ge 0. \end{cases}$$
This primal problem can also be fully specified by the matrices $(\rho_{AB}, I_B, \operatorname{Tr}^*)$, where $\operatorname{Tr}^*$ is the adjoint of the partial trace over $A$. The action of $\operatorname{Tr}^*$ on operators on $B$ can be written as
$$\operatorname{Tr}^*(X) = I_A \otimes X.$$
We can express the dual problem as a maximization over operators $E_{AB}$ on the space $AB$:
$$\begin{cases} \text{max:} \ \operatorname{Tr}(\rho_{AB} E_{AB}) \\ \text{subject to:} \ \operatorname{Tr}_A(E_{AB}) = I_B \\ \phantom{\text{subject to:}} \ E_{AB} \ge 0. \end{cases}$$
Using the Choi–Jamiołkowski isomorphism, we can define the channel $\mathcal{E}$ such that $d_A\, I_A \otimes \mathcal{E}^\dagger(|\phi^+\rangle\langle\phi^+|) = E_{AB}$, where the Bell state is defined over the space $AA'$.
This means that we can express the objective function of the dual problem as
$$\langle \rho_{AB}, E_{AB} \rangle = d_A \langle \rho_{AB},\, I_A \otimes \mathcal{E}^\dagger(|\phi^+\rangle\langle\phi^+|)\rangle = d_A \langle I_A \otimes \mathcal{E}(\rho_{AB}),\, |\phi^+\rangle\langle\phi^+| \rangle,$$
as desired. Notice that in the event that the system $A$ is a partly classical state as above, the quantity that we are after reduces to
$$\max P_X(x) \langle x | \mathcal{E}(\rho_B^x) | x \rangle.$$
We can interpret $\mathcal{E}$ as a guessing strategy, and this then reduces to the interpretation given above, where an adversary wants to find the string $x$ given access to quantum information via system $B$.
https://en.wikipedia.org/wiki/Min-entropy
In information theory, the Rényi entropy is a quantity that generalizes various notions of entropy, including Hartley entropy, Shannon entropy, collision entropy, and min-entropy. The Rényi entropy is named after Alfréd Rényi, who looked for the most general way to quantify information while preserving additivity for independent events.[1][2] In the context of fractal dimension estimation, the Rényi entropy forms the basis of the concept of generalized dimensions.[3] The Rényi entropy is important in ecology and statistics as an index of diversity. It is also important in quantum information, where it can be used as a measure of entanglement. In the Heisenberg XY spin chain model, the Rényi entropy as a function of α can be calculated explicitly because it is an automorphic function with respect to a particular subgroup of the modular group.[4][5] In theoretical computer science, the min-entropy is used in the context of randomness extractors. The Rényi entropy of order $\alpha$, where $0 < \alpha < \infty$ and $\alpha \neq 1$, is defined as[1]
$$\mathrm{H}_\alpha(X) = \frac{1}{1-\alpha} \log\left(\sum_{i=1}^n p_i^\alpha\right).$$
It is further defined at $\alpha = 0, 1, \infty$ as
$$\mathrm{H}_\alpha(X) = \lim_{\gamma\to\alpha} \mathrm{H}_\gamma(X).$$
Here, $X$ is a discrete random variable with possible outcomes in the set $\mathcal{A} = \{x_1, x_2, \dots, x_n\}$ and corresponding probabilities $p_i \doteq \Pr(X = x_i)$ for $i = 1, \dots, n$. The resulting unit of information is determined by the base of the logarithm, e.g. shannon for base 2, or nat for base e.
If the probabilities are $p_i = 1/n$ for all $i = 1, \dots, n$, then all the Rényi entropies of the distribution are equal: $\mathrm{H}_\alpha(X) = \log n$. In general, for all discrete random variables $X$, $\mathrm{H}_\alpha(X)$ is a non-increasing function of $\alpha$. Applications often exploit the following relation between the Rényi entropy and the α-norm of the vector of probabilities:
$$\mathrm{H}_\alpha(X) = \frac{\alpha}{1-\alpha} \log\left(\|P\|_\alpha\right).$$
Here, the discrete probability distribution $P = (p_1, \dots, p_n)$ is interpreted as a vector in $\mathbb{R}^n$ with $p_i \ge 0$ and $\sum_{i=1}^n p_i = 1$. The Rényi entropy for any $\alpha \ge 0$ is Schur concave, as can be proven by the Schur–Ostrowski criterion. As $\alpha$ approaches zero, the Rényi entropy increasingly weighs all events with nonzero probability more equally, regardless of their probabilities. In the limit $\alpha \to 0$, the Rényi entropy is just the logarithm of the size of the support of X. The limit for $\alpha \to 1$ is the Shannon entropy. As $\alpha$ approaches infinity, the Rényi entropy is increasingly determined by the events of highest probability.
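Both the α-norm identity and the monotonicity in α are easy to verify numerically. A small sketch (the example distribution and orders are illustrative):

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Rényi entropy of order alpha (alpha != 1), base-2 logarithm."""
    p = np.asarray(p, dtype=float)
    return np.log2((p ** alpha).sum()) / (1 - alpha)

P = np.array([0.5, 0.25, 0.125, 0.125])

# Identity H_alpha = alpha/(1-alpha) * log ||P||_alpha:
alpha = 3.0
lhs = renyi_entropy(P, alpha)
rhs = alpha / (1 - alpha) * np.log2(np.linalg.norm(P, ord=alpha))
print(lhs, rhs)  # the two agree

# H_alpha is non-increasing in alpha for a nonuniform distribution:
orders = [0.5, 0.9, 1.1, 2.0, 5.0, 50.0]
values = [renyi_entropy(P, a) for a in orders]
print(values == sorted(values, reverse=True))  # True
```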
$\mathrm{H}_0(X)$ is $\log n$, where $n$ is the number of non-zero probabilities.[6] If the probabilities are all nonzero, it is simply the logarithm of the cardinality of the alphabet $\mathcal{A}$ of $X$, sometimes called the Hartley entropy of $X$:
$$\mathrm{H}_0(X) = \log n = \log |\mathcal{A}|.$$
The limiting value of $\mathrm{H}_\alpha$ as $\alpha \to 1$ is the Shannon entropy:[7]
$$\mathrm{H}_1(X) \equiv \lim_{\alpha\to 1} \mathrm{H}_\alpha(X) = -\sum_{i=1}^n p_i \log p_i.$$
Collision entropy, sometimes just called "Rényi entropy", refers to the case $\alpha = 2$:
$$\mathrm{H}_2(X) = -\log \sum_{i=1}^n p_i^2 = -\log P(X = Y),$$
where $X$ and $Y$ are independent and identically distributed. The collision entropy is related to the index of coincidence; it is the negative logarithm of the Simpson diversity index. In the limit $\alpha \to \infty$, the Rényi entropy $\mathrm{H}_\alpha$ converges to the min-entropy $\mathrm{H}_\infty$:
$$\mathrm{H}_\infty(X) \doteq \min_i(-\log p_i) = -\max_i \log p_i = -\log \max_i p_i.$$
Equivalently, the min-entropy $\mathrm{H}_\infty(X)$ is the largest real number b such that all events occur with probability at most $2^{-b}$. The name min-entropy stems from the fact that it is the smallest entropy measure in the family of Rényi entropies. In this sense, it is the strongest way to measure the information content of a discrete random variable. In particular, the min-entropy is never larger than the Shannon entropy.
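The four named special cases can be computed side by side; the example distribution below is chosen for illustration:

```python
import math

P = [0.5, 0.25, 0.125, 0.125]
n = len(P)

H0   = math.log2(n)                       # Hartley (max-) entropy
H1   = -sum(p * math.log2(p) for p in P)  # Shannon entropy
H2   = -math.log2(sum(p * p for p in P))  # collision entropy
Hinf = -math.log2(max(P))                 # min-entropy

print(H0, H1, H2, Hinf)        # 2.0  1.75  ≈1.54  1.0
# The ordering log n = H0 >= H1 >= H2 >= Hinf holds:
print(H0 >= H1 >= H2 >= Hinf)  # True
```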
The min-entropy has important applications for randomness extractors in theoretical computer science: extractors are able to extract randomness from random sources that have a large min-entropy; merely having a large Shannon entropy does not suffice for this task. The fact that $\mathrm{H}_\alpha$ is non-increasing in $\alpha$ for any given distribution of probabilities $p_i$ can be proven by differentiation:[8]
$$-\frac{d\mathrm{H}_\alpha}{d\alpha} = \frac{1}{(1-\alpha)^2} \sum_{i=1}^n z_i \log(z_i/p_i) = \frac{1}{(1-\alpha)^2} D_{KL}(z\|p),$$
which is proportional to the Kullback–Leibler divergence (which is always non-negative), where $z_i = p_i^\alpha / \sum_{j=1}^n p_j^\alpha$. In particular, $-d\mathrm{H}_\alpha/d\alpha$ is strictly positive except when the distribution is uniform. In the $\alpha \to 1$ limit, we have $-\frac{d\mathrm{H}_\alpha}{d\alpha} \to \frac{1}{2}\sum_i p_i \left(\ln p_i + H(p)\right)^2$. In particular cases inequalities can also be proven by Jensen's inequality:[9][10]
$$\log n = \mathrm{H}_0 \ge \mathrm{H}_1 \ge \mathrm{H}_2 \ge \mathrm{H}_\infty.$$
For values of $\alpha > 1$, inequalities in the other direction also hold. In particular, we have[11][12]
$$\mathrm{H}_2 \le 2\mathrm{H}_\infty.$$
On the other hand, the Shannon entropy $\mathrm{H}_1$ can be arbitrarily high for a random variable $X$ with a given min-entropy.
An example of this is given by the sequence of random variables $X_n \sim \{0, \dots, n\}$ for $n \ge 1$ such that $P(X_n = 0) = 1/2$ and $P(X_n = x) = 1/(2n)$ for $x \in \{1, \dots, n\}$: then $\mathrm{H}_\infty(X_n) = \log 2$ but $\mathrm{H}_1(X_n) = (\log 2 + \log 2n)/2$, which grows without bound in $n$. As well as the absolute Rényi entropies, Rényi also defined a spectrum of divergence measures generalising the Kullback–Leibler divergence.[13] The Rényi divergence of order $\alpha$, or alpha-divergence, of a distribution P from a distribution Q is defined to be
$$D_\alpha(P\|Q) = \frac{1}{\alpha-1} \log\left(\sum_{i=1}^n \frac{p_i^\alpha}{q_i^{\alpha-1}}\right) = \frac{1}{\alpha-1} \log \mathbb{E}_{i\sim p}\left[(p_i/q_i)^{\alpha-1}\right]$$
when $0 < \alpha < \infty$ and $\alpha \neq 1$. The Rényi divergence for the special values α = 0, 1, ∞ is defined by taking a limit; in particular, the limit α → 1 gives the Kullback–Leibler divergence. The Rényi divergence is indeed a divergence, meaning simply that $D_\alpha(P\|Q)$ is greater than or equal to zero, and zero only when P = Q. For any fixed distributions P and Q, the Rényi divergence is nondecreasing as a function of its order α, and it is continuous on the set of α for which it is finite.[13] It is also called, for the sake of brevity, the information of order α obtained if the distribution P is replaced by the distribution Q.[1] A pair of probability distributions can be viewed as a game of chance in which one of the distributions defines official odds and the other contains the actual probabilities. Knowledge of the actual probabilities allows a player to profit from the game.
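The divergence definition and its α → 1 limit can be checked numerically. A small sketch (the distributions below are illustrative examples):

```python
import math

def renyi_divergence(p, q, alpha):
    """D_alpha(P||Q) in bits, for 0 < alpha < inf, alpha != 1."""
    s = sum(pi ** alpha / qi ** (alpha - 1) for pi, qi in zip(p, q))
    return math.log2(s) / (alpha - 1)

def kl_divergence(p, q):
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

P = [0.7, 0.2, 0.1]
Q = [1 / 3] * 3

# Nonnegative, and nondecreasing in the order alpha:
print(renyi_divergence(P, Q, 0.5), renyi_divergence(P, Q, 2.0))

# The limit alpha -> 1 recovers the Kullback–Leibler divergence:
print(renyi_divergence(P, Q, 1.000001))  # ≈ KL(P||Q)
print(kl_divergence(P, Q))
```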
The expected profit rate is connected to the Rényi divergence as follows:[14]
$$\text{ExpectedRate} = \frac{1}{R}\, D_1(b\|m) + \frac{R-1}{R}\, D_{1/R}(b\|m),$$
where $m$ is the distribution defining the official odds (i.e. the "market") for the game, $b$ is the investor-believed distribution, and $R$ is the investor's risk aversion (the Arrow–Pratt relative risk aversion). If the true distribution is $p$ (not necessarily coinciding with the investor's belief $b$), the long-term realized rate converges to the true expectation, which has a similar mathematical structure:[14]
$$\text{RealizedRate} = \frac{1}{R}\left(D_1(p\|m) - D_1(p\|b)\right) + \frac{R-1}{R}\, D_{1/R}(b\|m).$$
The value $\alpha = 1$, which gives the Shannon entropy and the Kullback–Leibler divergence, is the only value at which the chain rule of conditional probability holds exactly:
$$\mathrm{H}(A,X) = \mathrm{H}(A) + \mathbb{E}_{a\sim A}\left[\mathrm{H}(X|A=a)\right]$$
for the absolute entropies, and
$$D_{\mathrm{KL}}(p(x|a)p(a)\,\|\,m(x,a)) = D_{\mathrm{KL}}(p(a)\|m(a)) + \mathbb{E}_{p(a)}\{D_{\mathrm{KL}}(p(x|a)\|m(x|a))\}$$
for the relative entropies. The latter in particular means that if we seek a distribution p(x, a) which minimizes the divergence from some underlying prior measure m(x, a), and we acquire new information which only affects the distribution of a, then the distribution of p(x|a) remains m(x|a), unchanged.
The other Rényi divergences satisfy the criteria of being positive and continuous, being invariant under 1-to-1 coordinate transformations, and of combining additively when A and X are independent, so that if p(A, X) = p(A)p(X), then
$$\mathrm{H}_\alpha(A,X) = \mathrm{H}_\alpha(A) + \mathrm{H}_\alpha(X)$$
and
$$D_\alpha(P(A)P(X)\,\|\,Q(A)Q(X)) = D_\alpha(P(A)\|Q(A)) + D_\alpha(P(X)\|Q(X)).$$
The stronger properties of the $\alpha = 1$ quantities allow the definition of conditional information and mutual information from communication theory. The Rényi entropies and divergences for an exponential family admit simple expressions:[15]
$$\mathrm{H}_\alpha(p_F(x;\theta)) = \frac{1}{1-\alpha}\left(F(\alpha\theta) - \alpha F(\theta) + \log E_p\left[e^{(\alpha-1)k(x)}\right]\right)$$
and
$$D_\alpha(p:q) = \frac{J_{F,\alpha}(\theta:\theta')}{1-\alpha},$$
where $J_{F,\alpha}(\theta:\theta') = \alpha F(\theta) + (1-\alpha)F(\theta') - F(\alpha\theta + (1-\alpha)\theta')$ is a Jensen difference divergence. The Rényi entropy in quantum physics is not considered to be an observable, due to its nonlinear dependence on the density matrix. (This nonlinear dependence applies even in the special case of the Shannon entropy.) It can, however, be given an operational meaning through two-time measurements (also known as full counting statistics) of energy transfers[citation needed]. The limit of the quantum mechanical Rényi entropy as $\alpha \to 1$ is the von Neumann entropy.
https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
A cryptosystem is considered to have information-theoretic security (also called unconditional security[1]) if the system is secure against adversaries with unlimited computing resources and time. In contrast, a system which depends on the computational cost of cryptanalysis to be secure (and thus can be broken by an attack with unlimited computation) is called computationally secure or conditionally secure.[2] An encryption protocol with information-theoretic security is impossible to break even with infinite computational power. Protocols proven to be information-theoretically secure are resistant to future developments in computing. The concept of information-theoretically secure communication was introduced in 1949 by American mathematician Claude Shannon, one of the founders of classical information theory, who used it to prove that the one-time pad system was secure.[3] Information-theoretically secure cryptosystems have been used for the most sensitive governmental communications, such as diplomatic cables and high-level military communications.[citation needed] There are a variety of cryptographic tasks for which information-theoretic security is a meaningful and useful requirement. Algorithms which are computationally or conditionally secure (i.e., not information-theoretically secure) depend on resource limits. For example, RSA relies on the assertion that factoring large numbers is hard. A weaker notion of security, defined by Aaron D. Wyner, established a now-flourishing area of research known as physical layer encryption.[4] It exploits the physical wireless channel for its security by communications, signal processing, and coding techniques. The security is provable, unbreakable, and quantifiable (in bits/second/hertz). Wyner's initial physical layer encryption work in the 1970s posed the Alice–Bob–Eve problem, in which Alice wants to send a message to Bob without Eve decoding it.
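The one-time pad is the canonical information-theoretically secure scheme, and its core property is easy to demonstrate. A minimal sketch (messages and names chosen for illustration): the ciphertext is consistent with every plaintext of the same length, so an adversary with unbounded computation learns nothing from it.

```python
import secrets

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    """One-time pad: XOR with a uniformly random key of the same length.
    The key must be truly random, as long as the message, and never reused."""
    assert len(key) == len(plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, key))

otp_decrypt = otp_encrypt  # XOR is its own inverse

message = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(message))
ciphertext = otp_encrypt(message, key)
print(otp_decrypt(ciphertext, key))  # b'ATTACK AT DAWN'

# Perfect secrecy: for ANY candidate plaintext of the same length there is
# a key mapping it to the observed ciphertext, so the ciphertext alone
# reveals nothing about which message was sent.
other = b"RETREAT AT TEN"
fake_key = otp_encrypt(other, ciphertext)  # a key that would "explain" `other`
print(otp_decrypt(ciphertext, fake_key))   # b'RETREAT AT TEN'
```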
If the channel from Alice to Bob is statistically better than the channel from Alice to Eve, it has been shown that secure communication is possible.[5] That is intuitive, but Wyner measured the secrecy in information-theoretic terms, defining the secrecy capacity, which essentially is the rate at which Alice can transmit secret information to Bob. Shortly afterward, Imre Csiszár and Körner showed that secret communication was possible even if Eve had a statistically better channel to Alice than Bob did.[6] The basic idea of the information-theoretic approach to securely transmitting confidential messages (without using an encryption key) to a legitimate receiver is to use the inherent randomness of the physical medium (including noise and channel fluctuations due to fading) and exploit the difference between the channel to a legitimate receiver and the channel to an eavesdropper to benefit the legitimate receiver.[7] More recent theoretical results concern determining the secrecy capacity and optimal power allocation in broadcast fading channels.[8][9] There are caveats, as many capacities are not computable unless the assumption is made that Alice knows the channel to Eve. If that were known, Alice could simply place a null in Eve's direction. Secrecy capacity for MIMO and multiple colluding eavesdroppers is more recent and ongoing work,[10][11] and such results still make the non-useful assumption about eavesdropper channel state information knowledge. Still other work is less theoretical and attempts to compare implementable schemes. One physical layer encryption scheme is to broadcast artificial noise in all directions except that of Bob's channel, which basically jams Eve.
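For the degraded Gaussian wiretap channel, the secrecy capacity has the classical closed form $C_s = [C_{\text{Bob}} - C_{\text{Eve}}]^+$, the nonnegative gap between the two channel capacities. A small sketch with illustrative SNR values (the numbers are examples, not from the text):

```python
import math

def awgn_capacity(snr):
    """Shannon capacity of a real AWGN channel, bits per channel use."""
    return 0.5 * math.log2(1 + snr)

def secrecy_capacity(snr_bob, snr_eve):
    """Degraded Gaussian wiretap channel: secrecy capacity is the
    nonnegative gap between Bob's and Eve's channel capacities."""
    return max(0.0, awgn_capacity(snr_bob) - awgn_capacity(snr_eve))

snr_bob = 10 ** (15 / 10)  # Bob at 15 dB SNR
snr_eve = 10 ** (5 / 10)   # Eve at 5 dB SNR
print(secrecy_capacity(snr_bob, snr_eve))  # ≈ 1.49 bits per channel use

# If Eve's channel is at least as good as Bob's, no positive secrecy rate exists:
print(secrecy_capacity(snr_eve, snr_bob))  # 0.0
```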
One paper by Negi and Goel details its implementation, and Khisti and Wornell computed the secrecy capacity when only statistics about Eve's channel are known.[12][13] Parallel to that work in the information theory community is work in the antenna community, which has been termed near-field direct antenna modulation, or directional modulation.[14] It has been shown that by using a parasitic array, the transmitted modulation in different directions can be controlled independently.[15] Secrecy could be realized by making the modulations in undesired directions difficult to decode. Directional modulation data transmission was experimentally demonstrated using a phased array.[16] Others have demonstrated directional modulation with switched arrays and phase-conjugating lenses.[17][18][19] That type of directional modulation is really a subset of Negi and Goel's additive artificial noise encryption scheme. Another scheme using pattern-reconfigurable transmit antennas for Alice, called reconfigurable multiplicative noise (RMN), complements additive artificial noise.[20] The two work well together in channel simulations in which nothing is assumed known to Alice or Bob about the eavesdroppers. The different works mentioned above employ, in one way or another, the randomness present in the wireless channel to transmit information-theoretically secure messages. Conversely, we could analyze how much secrecy one can extract from the randomness itself in the form of a secret key. That is the goal of secret key agreement. In this line of work, started by Maurer[21] and Ahlswede and Csiszár,[22] the basic system model removes any restriction on the communication schemes and assumes that the legitimate users can communicate over a two-way, public, noiseless, and authenticated channel at no cost. This model has subsequently been extended to account for multiple users[23] and a noisy channel,[24] among others.
https://en.wikipedia.org/wiki/Information-theoretic_security
A secure cryptoprocessor is a dedicated computer-on-a-chip or microprocessor for carrying out cryptographic operations, embedded in a packaging with multiple physical security measures, which give it a degree of tamper resistance. Unlike cryptographic processors that output decrypted data onto a bus in a secure environment, a secure cryptoprocessor does not output decrypted data or decrypted program instructions in an environment where security cannot always be maintained. The purpose of a secure cryptoprocessor is to act as the keystone of a security subsystem, eliminating the need to protect the rest of the subsystem with physical security measures.[1] A hardware security module (HSM) contains one or more secure cryptoprocessor chips.[2][3][4] These devices are high-grade secure cryptoprocessors used with enterprise servers. A hardware security module can have multiple levels of physical security, with a single-chip cryptoprocessor as its most secure component. The cryptoprocessor does not reveal keys or executable instructions on a bus, except in encrypted form, and zeroes keys upon attempts at probing or scanning. The crypto chip(s) may also be potted in the hardware security module together with other processors and memory chips that store and process encrypted data. Any attempt to remove the potting will cause the keys in the crypto chip to be zeroed. A hardware security module may also be part of a computer (for example an ATM) that operates inside a locked safe to deter theft, substitution, and tampering. Modern smartcards are probably the most widely deployed form of secure cryptoprocessor, although more complex and versatile secure cryptoprocessors are widely deployed in systems such as automated teller machines, TV set-top boxes, military applications, and high-security portable communication equipment.[citation needed] Some secure cryptoprocessors can even run general-purpose operating systems such as Linux inside their security boundary.
Cryptoprocessors input program instructions in encrypted form, decrypt the instructions to plain instructions, and then execute them within the same cryptoprocessor chip, where the decrypted instructions are stored inaccessibly. By never revealing the decrypted program instructions, the cryptoprocessor prevents tampering with programs by technicians who may have legitimate access to the sub-system data bus. This is known as bus encryption. Data processed by a cryptoprocessor is also frequently encrypted. The Trusted Platform Module (TPM) is an implementation of a secure cryptoprocessor that brings the notion of trusted computing to ordinary PCs by enabling a secure environment.[citation needed] Present TPM implementations focus on providing a tamper-proof boot environment, and persistent and volatile storage encryption. Security chips for embedded systems are also available that provide the same level of physical protection for keys and other secret material as a smartcard processor or TPM, but in a smaller, less complex, and less expensive package.[citation needed] They are often referred to as cryptographic authentication devices and are used to authenticate peripherals, accessories and/or consumables. Like TPMs, they are usually turnkey integrated circuits intended to be embedded in a system, usually soldered to a PC board. Secure cryptoprocessors, while useful, are not invulnerable to attack, particularly for well-equipped and determined opponents (e.g. a government intelligence agency) who are willing to expend enough resources on the project.[5][6] One attack on a secure cryptoprocessor targeted the IBM 4758.[7] A team at the University of Cambridge reported the successful extraction of secret information from an IBM 4758, using a combination of mathematics and special-purpose codebreaking hardware.
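The bus-encryption idea above can be illustrated with a deliberately simplified toy model: the key and the decrypted instructions never leave the "chip" object, and external memory only ever sees ciphertext. This is purely conceptual, not a real device's design or API, and XOR stands in for a real block cipher:

```python
import secrets

class ToyCryptoprocessor:
    """Toy model of bus encryption; illustrative only."""

    def __init__(self):
        self._key = secrets.token_bytes(16)  # never exposed off-chip

    def _xor(self, blob: bytes) -> bytes:
        # Toy cipher: XOR with the on-chip key (programs up to 16 bytes).
        return bytes(b ^ k for b, k in zip(blob, self._key))

    def load_program(self, plaintext: bytes) -> bytes:
        """Returns the encrypted image that external memory may store."""
        return self._xor(plaintext)  # with XOR, encrypt == decrypt

    def execute(self, encrypted: bytes) -> str:
        # Decryption and "execution" happen entirely inside this method;
        # only the computation's result crosses the chip boundary.
        instructions = self._xor(encrypted)
        return f"ran {len(instructions)}-byte program"

chip = ToyCryptoprocessor()
image = chip.load_program(b"ADD R1,R2; RET")  # what the bus and memory see
print(chip.execute(image))                    # ran 14-byte program
```

A real cryptoprocessor implements this boundary in hardware, so that even probing the data bus reveals only ciphertext.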
However, this attack was not practical in real-world systems because it required the attacker to have full access to all API functions of the device. Normal and recommended practices use the integral access control system to split authority so that no one person could mount the attack.[citation needed] While the vulnerability they exploited was a flaw in the software loaded on the 4758, and not in the architecture of the 4758 itself, their attack serves as a reminder that a security system is only as secure as its weakest link: the strong link of the 4758 hardware was rendered useless by flaws in the design and specification of the software loaded on it. Smartcards are significantly more vulnerable, as they are more open to physical attack. Additionally, hardware backdoors can undermine security in smartcards and other cryptoprocessors unless investment is made in anti-backdoor design methods.[8] In the case of full disk encryption applications, especially when implemented without a boot PIN, a cryptoprocessor would not be secure against a cold boot attack[9] if data remanence could be exploited to dump memory contents after the operating system has retrieved the cryptographic keys from its TPM. However, if all of the sensitive data is stored only in cryptoprocessor memory and not in external storage, and the cryptoprocessor is designed to be unable to reveal keys or decrypted or unencrypted data on chip bonding pads or solder bumps, then such protected data would be accessible only by probing the cryptoprocessor chip after removing any packaging and metal shielding layers from it. This would require both physical possession of the device and skills and equipment beyond those of most technical personnel. Other attack methods involve carefully analyzing the timing of various operations that might vary depending on the secret value, or mapping the current consumption versus time to identify differences in the way that '0' bits are handled internally versus
'1' bits. Alternatively, the attacker may apply temperature extremes, excessively high or low clock frequencies, or a supply voltage that exceeds the specifications in order to induce a fault. The internal design of the cryptoprocessor can be tailored to prevent these attacks. Some secure cryptoprocessors contain dual processor cores and generate inaccessible encryption keys when needed, so that even if the circuitry is reverse engineered, it will not reveal any keys that are necessary to securely decrypt software booted from encrypted flash memory or communicated between cores.[10] The first single-chip cryptoprocessor design was for copy protection of personal computer software (see US Patent 4,168,396, Sept 18, 1979) and was inspired by Bill Gates's Open Letter to Hobbyists. The hardware security module (HSM), a type of secure cryptoprocessor,[3][4] was invented by Egyptian-American engineer Mohamed M. Atalla[11] in 1972.[12] He invented a high-security module dubbed the "Atalla Box", which encrypted PIN and ATM messages, and protected offline devices with an un-guessable PIN-generating key.[13] In 1972, he filed a patent for the device.[14] He founded Atalla Corporation (now Utimaco Atalla) that year,[12] and commercialized the "Atalla Box" the following year,[13] officially as the Identikey system.[15] It was a card reader and customer identification system, consisting of a card reader console, two customer PIN pads, an intelligent controller, and a built-in electronic interface package.[15] It allowed the customer to type in a secret code, which was transformed by the device, using a microprocessor, into another code for the teller.[16] During a transaction, the customer's account number was read by the card reader.[15] It was a success, and led to the wide use of high-security modules.[13] Fearful that Atalla would dominate the market, banks and credit card companies began working on an international standard in the 1970s.[13] The IBM 3624, launched in the late 1970s, adopted a similar PIN verification process to the
earlier Atalla system.[17] Atalla was an early competitor to IBM in the banking security market.[14][18] At the National Association of Mutual Savings Banks (NAMSB) conference in January 1976, Atalla unveiled an upgrade to its Identikey system, called the Interchange Identikey. It added the capabilities of processing online transactions and dealing with network security. Designed with the focus of taking bank transactions online, the Identikey system was extended to shared-facility operations. It was consistent and compatible with various switching networks, and was capable of resetting itself electronically to any one of 64,000 irreversible nonlinear algorithms as directed by card data information. The Interchange Identikey device was released in March 1976.[16] In 1979, Atalla introduced the first network security processor (NSP).[19] Atalla's HSM products protected 250 million card transactions every day as of 2013,[12] and secured the majority of the world's ATM transactions as of 2014.[11]
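The timing-analysis attacks mentioned earlier in this article exploit operations whose run time depends on secret data. A common software-level mitigation is constant-time comparison, which examines every byte regardless of where a mismatch occurs. A minimal Python sketch of the idea (illustrative only; the function names are ours, and real cryptoprocessors implement such countermeasures in hardware):

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    # Early-exit comparison: run time depends on the position of the
    # first mismatching byte, which a timing attacker can measure.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # Examine every byte regardless of mismatches, so the run time
    # does not leak where the first difference occurs.
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y
    return diff == 0

secret = b"correct-mac-value"
assert constant_time_equal(secret, b"correct-mac-value")
assert not constant_time_equal(secret, b"correct-mac-xxxxx")
# Python's standard library provides the same guarantee:
assert hmac.compare_digest(secret, secret)
```

In practice, library routines such as `hmac.compare_digest` should be preferred over hand-rolled loops, since compilers and interpreters can otherwise reintroduce data-dependent behavior.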
https://en.wikipedia.org/wiki/Cryptoprocessor
Steven Levy (born 1951) is an American journalist and editor at large for Wired who has written extensively on computers, technology, cryptography, the internet, cybersecurity, and privacy. He is the author of the 1984 book Hackers: Heroes of the Computer Revolution, which chronicles the early days of the computer underground. Levy has published eight books covering computer hacker culture, artificial intelligence, cryptography, and multi-year exposés of Apple, Google, and Facebook. His most recent book, Facebook: The Inside Story, recounts the history and rise of Facebook, drawing on three years of interviews with employees, including Chamath Palihapitiya, Sheryl Sandberg, and Mark Zuckerberg.[1] Levy was born in Philadelphia in 1951. He graduated from Central High School and received a bachelor's degree in English[2] from Temple University. He earned a master's degree in literature from Pennsylvania State University.[3] In the mid-1970s, Levy worked as a freelance journalist and frequently contributed to The Philadelphia Inquirer's Today magazine.[4][5][6] In 1976, he was a founding co-editor of the Free Times, a weekly guide to happenings in Philadelphia. He became a senior editor of New Jersey Monthly, and rediscovered Albert Einstein's brain floating in a mason jar in the Wichita office of pathologist Thomas Stoltz Harvey while reporting a story in 1978.[7][8] In the 1980s, Levy's work became more focused on technology. In 1981, Rolling Stone assigned him an article on computer hackers,[9] which he expanded into the book Hackers: Heroes of the Computer Revolution, published in 1984. He described the "hacker ethic", the belief that all information should be free and that it ought to change life for the better.[10] Levy was a contributor to Stewart Brand's Whole Earth Software Catalog, first published in 1984.
He was a contributing editor to Popular Computing and wrote a monthly column in the magazine, initially called "Telecomputing"[11] and later named "Micro Journal"[12] and "Computer Journal",[13] from April 1983 to the magazine's closure in December 1985.[14] In December 1986, Levy founded the Macworld Game Hall of Fame,[15] which Macworld published annually until 2009.[16] Levy stepped away from the technology beat in his second book, on the murderous past of hippie and Earth Day co-founder Ira Einhorn, published in 1988 and adapted into an NBC TV miniseries with Naomi Watts in 1999.[17][18][2] Levy's 1992 book about AI, Artificial Life, was a finalist for the Los Angeles Times Book Prize for Science and Technology.[19][20] In 1994, he published the book Insanely Great about the Mac computer.[21] Levy joined Newsweek in 1995 as a technology writer and senior editor.[3] In July 2004, Levy published a cover story for Newsweek (which also featured an interview with Apple CEO Steve Jobs) that unveiled the fourth generation of the iPod to the world before Apple had officially done so.[22] He continued his coverage of the iPod in a book called The Perfect Thing, published in 2006.[23] In 2014, he co-created the tech blog Backchannel, which was integrated into Wired in 2017.[24] Since 2008, Levy has worked as a writer and editor at large for Wired.[25] At various points throughout his career, Levy has written freelance pieces for publications including Harper's, The New York Times Magazine, The New Yorker, and Premiere. He lives in New York City with his wife Teresa Carpenter, a Pulitzer Prize-winning true crime and history writer.[2] They have a son.[3]
https://en.wikipedia.org/wiki/Steven_Levy
Digital Fortress is a techno-thriller novel written by American author Dan Brown and published in 1998 by St. Martin's Press. The book explores the theme of government surveillance of electronically stored information on the private lives of citizens, and the possible civil liberties and ethical implications of using such technology. The story is set in 1996. When the United States National Security Agency's (NSA) code-breaking supercomputer TRANSLTR encounters a revolutionary new code, Digital Fortress, that it cannot break, Commander Trevor Strathmore calls in head cryptographer Susan Fletcher to crack it. She is informed by Strathmore that it was written by Ensei Tankado, a former NSA employee who became displeased with the NSA's intrusion into people's private lives. If the NSA does not reveal TRANSLTR to the public, Tankado intends to auction the code's algorithm on his website and have his partner, "North Dakota", release it for free if he dies, essentially holding the NSA hostage. Strathmore tells Fletcher that Tankado has in fact died in Seville at the age of 32, of what appears to be a heart attack. Strathmore intends to keep Tankado's death a secret, because if Tankado's partner finds out, he will upload the code. The agency is determined to stop Digital Fortress from becoming a threat to national security. Strathmore asks Fletcher's fiancé, David Becker, to travel to Seville and recover a ring that Tankado was wearing when he died. The ring is suspected to contain the passcode that unlocks Digital Fortress. However, Becker soon discovers that Tankado gave the ring away just before his death. Unbeknownst to Becker, a mysterious figure named Hulohot follows him and murders each person he questions in his search for the ring. Hulohot's final target is Becker himself.
Meanwhile, telephone calls between North Dakota and Tokugen Numataka reveal that North Dakota hired Hulohot to kill Tankado in order to gain access to the passcode on his ring and speed up the release of the algorithm. At the NSA, Fletcher's investigation leads her to believe that Greg Hale, a fellow NSA employee, is North Dakota. Phil Chartrukian, an NSA technician who is unaware of the Digital Fortress code-breaking failure and believes Digital Fortress to be a virus, conducts his own investigation into whether Strathmore allowed Digital Fortress to bypass Gauntlet, the NSA's virus/worm filter. To save TRANSLTR, Chartrukian decides to shut it down, but he is murdered when an unknown assailant pushes him off the sub-levels of TRANSLTR. Since Hale and Strathmore were both in the sub-levels, Fletcher assumes that Hale is the killer; however, Hale claims that he witnessed Strathmore killing Chartrukian. Chartrukian's fall also damages TRANSLTR's cooling system. Hale holds Fletcher and Strathmore hostage to prevent himself from being arrested for Chartrukian's murder. Hale then explains to Fletcher that the e-mail he supposedly received from Tankado was also in Strathmore's inbox, as Strathmore was snooping on Tankado. Fletcher discovers through a tracer that North Dakota and Ensei Tankado are the same person, as "NDAKOTA" is an anagram of "Tankado". Strathmore kills Hale and arranges it to appear as a suicide. Fletcher later discovers through Strathmore's pager that he is the one who hired Hulohot. Becker manages to track down the ring, but ends up pursued by Hulohot in a long cat-and-mouse chase across Seville. The two eventually face off in a cathedral, where Becker finally kills Hulohot by tripping him down a spiral staircase, causing him to break his neck. Becker is then intercepted by NSA field agents sent by Leland Fontaine, the director of the NSA. Chapters told from Strathmore's perspective reveal his master plan.
By hiring Hulohot to kill Tankado, having Becker recover the ring, and at the same time arranging for Hulohot to kill Becker, he would facilitate a romantic relationship with Fletcher and regain his lost honor. He has also been working incessantly for many months to unlock Digital Fortress, installing a backdoor inside the program. By making phone calls to Numataka while posing as North Dakota, he planned to partner with Numatech to make a Digital Fortress chip equipped with his own backdoor. Finally, he would reveal to the world the existence of TRANSLTR, boasting that it could crack every code except Digital Fortress, leading everyone to rush to use chips equipped with Digital Fortress so that the NSA could spy on every computer containing them. However, Strathmore was unaware that Digital Fortress is actually a computer worm that, once unlocked, would "eat away" all the NSA databank's security and allow "any third-grader with a modem" to look at government secrets. When TRANSLTR overheats, Strathmore dies by standing next to the machine as it explodes. The worm eventually gets into the database, but Becker figures out the passcode just seconds before the last defenses fall: 3, the difference between the isotope numbers the novel assigns to the Hiroshima bomb (isotope 235) and the Nagasaki bomb (isotope 238), a reference to the nuclear bombs that killed Tankado's mother and left him crippled. Fletcher is able to terminate the worm before hackers can get any significant data. The NSA allows Becker to return to the United States, reuniting him with Fletcher. In the epilogue, it is revealed that Numataka is Ensei Tankado's father, who abandoned Tankado the day he was born because of Tankado's deformity. As Tankado's last living relative, Numataka inherits the rest of Tankado's possessions.
The book was criticized by GCN for portraying facts about the NSA incorrectly and for misunderstanding the technology it describes, especially for the time when it was published.[1] In 2005, the town hall of the Spanish city of Seville invited Dan Brown to visit the city, in order to dispel the inaccuracies about Seville that Brown had represented in the book.[2] Although uranium-235 was used in the bomb dropped on Hiroshima, the nuclear bomb dropped on Nagasaki used plutonium-239 (created from U-238). Uranium-238 is non-fissile. Julius Caesar's cypher was not as simple as the one described in the novel, which is based on square numbers. In The Code Book, Simon Singh describes it as a transposition cypher which was undecipherable until centuries later. The story behind the meaning of "sincere" is based on a false etymology.[3] It is also untrue that in Spain (or in any other Catholic country) Holy Communion takes place at the beginning of Mass; Communion takes place very near the end. In 2020, the book was featured on the podcast 372 Pages We'll Never Get Back, which critiques literature deemed low-quality. Imagine Entertainment announced in 2014 that it was set to produce a television series based on Digital Fortress, to be written by Josh Goldin and Rachel Abramowitz.[4] Digital Fortress has been widely translated.
https://en.wikipedia.org/wiki/Digital_Fortress
A hardware backdoor is a backdoor implemented within the physical components of a computer system, also known as its hardware. Hardware backdoors can be created by introducing malicious code into a component's firmware, or even during the manufacturing process of an integrated circuit.[1][2] They are often used to undermine security in smartcards and cryptoprocessors, unless investment is made in anti-backdoor design methods.[3] They have also been considered for car hacking.[4] Backdoors differ from hardware Trojans in that backdoors are introduced intentionally by the original designer or during the design process, whereas hardware Trojans are inserted later by an external party.[5] The existence of hardware backdoors poses significant security risks for several reasons. They are difficult to detect and are impossible to remove using conventional methods like antivirus software. They can also bypass other security measures, such as disk encryption. Hardware Trojans can be introduced during manufacturing, where the end-user lacks control over the production chain.[1] In 2008, the FBI reported the discovery of approximately 3,500 counterfeit Cisco network components in the United States, some of which had been introduced into military and government infrastructure.[6] A few years later, in 2011, Jonathan Brossard presented "Rakshasa", a proof-of-concept hardware backdoor that could be installed by an individual with physical access to the hardware. It utilized coreboot to re-flash the BIOS with a SeaBIOS and iPXE-based bootkit composed of legitimate, open-source tools, allowing malware to be fetched from the internet during the boot process.[1] The following year, in 2012, Sergei Skorobogatov and Christopher Woods of the University of Cambridge Computer Laboratory reported the discovery of a backdoor in a military-grade FPGA device, which could be exploited to access and modify sensitive information.[7][8][9] It was later said that this was proven to be a software problem and not a deliberate attempt at sabotage.
The case nevertheless brought attention to the need for equipment manufacturers to ensure that microchips operate as intended.[10][11] Later that year, two mobile phones developed by the Chinese company ZTE were found to carry a root access backdoor. According to security researcher Dmitri Alperovitch, the exploit used a hard-coded password in the phones' software.[12] Starting in 2012, the United States alleged that Huawei products might contain backdoors.[13] In 2013, researchers at the University of Massachusetts devised a method of breaking a CPU's internal cryptographic mechanisms by introducing specific impurities into the crystalline structure of transistors to change Intel's random-number generator.[14] Documents revealed from 2013 onwards during the surveillance disclosures initiated by Edward Snowden showed that the Tailored Access Operations (TAO) unit and other NSA employees intercepted servers, routers, and other network gear being shipped to organizations targeted for surveillance, installing covert implant firmware on the equipment before delivery.[15][16] These tools include custom BIOS exploits that survive the reinstallation of operating systems, as well as USB cables with spy hardware and a radio transceiver packed inside.[17] In June 2016, it was reported that the University of Michigan Department of Electrical Engineering and Computer Science had built a hardware backdoor that leveraged "analog circuits to create a hardware attack": once its capacitors store up enough electricity to be fully charged, the backdoor switches on, giving an attacker complete access to whatever system or device, such as a PC, contains the backdoored chip.
In the study, which won the "best paper" award at the IEEE Symposium on Security and Privacy, the researchers also note that a microscopic hardware backdoor would not be caught by practically any modern method of hardware security analysis, and could be planted by a single employee of a chip factory.[18][19] In September 2016, Skorobogatov showed how he had removed a NAND chip from an iPhone 5C, the main memory storage system used on many Apple devices, and cloned it so that he could try out more incorrect passcode combinations than allowed by the attempt counter.[20] In October 2018, Bloomberg reported that an attack by Chinese spies had reached almost 30 U.S. companies, including Amazon and Apple, by compromising America's technology supply chain.[21] Skorobogatov has developed a technique capable of detecting malicious insertions into chips.[11] New York University Tandon School of Engineering researchers have developed a way to corroborate a chip's operation using verifiable computing, whereby "manufactured for sale" chips contain an embedded verification module that proves the chip's calculations are correct, and an associated external module validates the embedded verification module.[10] Another technique, developed by researchers at University College London (UCL), relies on distributing trust between multiple identical chips from disjoint supply chains. Assuming that at least one of those chips remains honest, the security of the device is preserved.[22] Researchers at the University of Southern California Ming Hsieh Department of Electrical and Computer Engineering and the Photonic Science Division at the Paul Scherrer Institute have developed a new technique called ptychographic X-ray laminography.[23] This technique is currently the only method that allows verification of a chip's blueprint and design without destroying or cutting the chip, and it does so in significantly less time than other current methods. Anthony F. J.
Levi, professor of electrical and computer engineering at the University of Southern California, explains: "It's the only approach to non-destructive reverse engineering of electronic chips—[and] not just reverse engineering but assurance that chips are manufactured according to design. You can identify the foundry, aspects of the design, who did the design. It's like a fingerprint."[23] The method is currently able to scan chips in 3D and zoom in on sections, and can accommodate chips up to 12 millimeters by 12 millimeters, easily accommodating an Apple A12 chip, though it is not yet able to scan a full Nvidia Volta GPU.[23] "Future versions of the laminography technique could reach a resolution of just 2 nanometers or reduce the time for a low-resolution inspection of that 300-by-300-micrometer segment to less than an hour, the researchers say."[23]
https://en.wikipedia.org/wiki/Hardware_backdoor
Titanium is a very advanced backdoor malware APT, developed by PLATINUM, a cybercrime collective. The malware was uncovered by Kaspersky Lab and reported on 8 November 2019.[1][2][3][4][5][6][7] According to Global Security Mag, "Titanium APT includes a complex sequence of dropping, downloading and installing stages, with deployment of a Trojan-backdoor at the final stage."[2] Much of the sequence is hidden from detection in a sophisticated manner, including hiding data steganographically in a PNG image.[3] In their announcement report, Kaspersky Lab concluded: "The Titanium APT has a very complicated infiltration scheme. It involves numerous steps and requires good coordination between all of them. In addition, none of the files in the file system can be detected as malicious due to the use of encryption and fileless technologies. One other feature that makes detection harder is the mimicking of well-known software. Regarding campaign activity, we have not detected any current activity [as of 8 November 2019] related to the Titanium APT."[1]
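The steganography mentioned above, hiding payload data inside an innocuous image, is commonly done by replacing the least-significant bit (LSB) of each carrier byte. The following toy Python sketch illustrates the general LSB technique on a plain byte array; it is our own illustration of the concept, not Titanium's actual (undisclosed) encoding, and it operates on raw bytes rather than a real PNG:

```python
def hide(carrier: bytearray, payload: bytes) -> bytearray:
    # Store each payload bit in the least-significant bit of one
    # carrier byte; the carrier's visible content barely changes.
    out = bytearray(carrier)
    for i, byte in enumerate(payload):
        for bit in range(8):
            idx = i * 8 + bit
            out[idx] = (out[idx] & 0xFE) | ((byte >> bit) & 1)
    return out

def extract(carrier: bytes, length: int) -> bytes:
    # Reassemble payload bytes from the collected LSBs.
    payload = bytearray(length)
    for i in range(length):
        for bit in range(8):
            payload[i] |= (carrier[i * 8 + bit] & 1) << bit
    return bytes(payload)

pixels = bytearray(range(256)) * 2   # stand-in for image pixel data
msg = b"cmd"
stego = hide(pixels, msg)
assert extract(stego, len(msg)) == msg
assert len(stego) == len(pixels)     # carrier size is unchanged
```

Because each carrier byte changes by at most 1, the altered image is visually indistinguishable from the original, which is what makes this kind of hiding hard for scanners to flag.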
https://en.wikipedia.org/wiki/Titanium_(malware)
Flash memory is an electronic non-volatile computer memory storage medium that can be electrically erased and reprogrammed. The two main types of flash memory, NOR flash and NAND flash, are named for the NOR and NAND logic gates. Both use the same cell design, consisting of floating-gate MOSFETs. They differ at the circuit level, depending on whether the state of the bit line or word lines is pulled high or low: in NAND flash, the relationship between the bit line and the word lines resembles a NAND gate; in NOR flash, it resembles a NOR gate. Flash memory, a type of floating-gate memory, was invented by Fujio Masuoka at Toshiba in 1980 and is based on EEPROM technology. Toshiba began marketing flash memory in 1987.[1] EPROMs had to be erased completely before they could be rewritten. NAND flash memory, however, may be erased, written, and read in blocks (or pages), which are generally much smaller than the entire device. NOR flash memory allows a single machine word to be written – to an erased location – or read independently. A flash memory device typically consists of one or more flash memory chips (each holding many flash memory cells), along with a separate flash memory controller chip. The NAND type is found mainly in memory cards, USB flash drives, solid-state drives (those produced since 2009), feature phones, smartphones, and similar products, for general storage and transfer of data. NAND or NOR flash memory is also often used to store configuration data in digital products, a task previously made possible by EEPROM or battery-powered static RAM. A key disadvantage of flash memory is that it can endure only a relatively small number of write cycles in a specific block.[2] NOR flash is known for its direct random-access capability, making it apt for executing code directly. Its architecture allows individual byte access, facilitating faster read speeds compared to NAND flash. NAND flash memory operates with a different architecture, relying on a serial access approach.
This makes NAND suitable for high-density data storage, but less efficient for random-access tasks. NAND flash is often employed in scenarios where cost-effective, high-capacity storage is crucial, such as in USB drives, memory cards, and solid-state drives (SSDs). The primary differentiator lies in their use cases and internal structures. NOR flash is optimal for applications requiring quick access to individual bytes, as in embedded systems that execute program code in place. NAND flash, on the other hand, shines in scenarios demanding cost-effective, high-capacity storage with sequential data access. Flash memory[3] is used in computers, PDAs, digital audio players, digital cameras, mobile phones, synthesizers, video games, scientific instrumentation, industrial robotics, and medical electronics. Flash memory has a fast read access time, but it is not as fast as static RAM or ROM. Flash memory is preferred in portable devices because of its mechanical shock resistance, since mechanical drives are more prone to mechanical damage.[4] Because erase cycles are slow, the large block sizes used in flash memory erasing give it a significant speed advantage over non-flash EEPROM when writing large amounts of data. As of 2019, flash memory costs much less than byte-programmable EEPROM and has become the dominant memory type wherever a system requires a significant amount of non-volatile solid-state storage. EEPROMs, however, are still used in applications that require only small amounts of storage, e.g.
in SPD implementations on computer-memory modules.[5][6] Flash memory packages can use die stacking with through-silicon vias and several dozen layers of 3D TLC NAND cells (per die) simultaneously to achieve capacities of up to 1 tebibyte per package, using 16 stacked dies and an integrated flash controller as a separate die inside the package.[7][8][9][10] The origins of flash memory can be traced to the development of the floating-gate MOSFET (FGMOS), also known as the floating-gate transistor.[11][12] The original MOSFET was invented at Bell Labs between 1955 and 1960, after Frosch and Derick discovered surface passivation and used their discovery to create the first planar transistors.[13][14][15][16][17][18] Dawon Kahng went on to develop a variation, the floating-gate MOSFET, with Taiwanese-American engineer Simon Min Sze at Bell Labs in 1967.[19] They proposed that it could be used as floating-gate memory cells for storing a form of programmable read-only memory (PROM) that is both non-volatile and re-programmable.[19] Early types of floating-gate memory included EPROM (erasable PROM) and EEPROM (electrically erasable PROM) in the 1970s.[19] However, early floating-gate memory required engineers to build a memory cell for each bit of data, which proved to be cumbersome,[20] slow,[21] and expensive, restricting floating-gate memory to niche applications in the 1970s, such as military equipment and the earliest experimental mobile phones.[11] Modern EEPROM based on Fowler–Nordheim tunnelling to erase data was invented by Bernward and patented by Siemens in 1974.[22] It was further developed between 1976 and 1978 by Eliyahou Harari at Hughes Aircraft Company, as well as by George Perlegos and others at Intel.[23][24] This led to Masuoka's invention of flash memory at Toshiba in 1980.[20][25][26] The improvement of flash over EEPROM is that flash is programmed in blocks, while EEPROM is programmed in bytes.
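The block-versus-byte distinction above follows from the physics of programming and erasing: a program operation can only move bits toward the programmed state, while only a (slow, whole-block) erase can restore them. A toy Python model of this erase-before-write rule (our own simplified sketch of a NOR-like part where programming clears bits from 1 to 0 and an erase resets a whole block to 0xFF; the class and method names are illustrative):

```python
class ToyFlash:
    """Toy model of flash programming rules, not a real device driver:
    programming can only flip bits from 1 to 0, and erasing works on
    whole blocks, resetting every byte in the block to 0xFF."""

    def __init__(self, blocks: int, block_size: int):
        self.block_size = block_size
        self.mem = bytearray([0xFF]) * (blocks * block_size)

    def erase_block(self, block: int):
        # Erase is block-granular: every byte in the block returns to 0xFF.
        start = block * self.block_size
        self.mem[start:start + self.block_size] = bytes([0xFF] * self.block_size)

    def program(self, addr: int, value: int):
        # AND models the physics: a program operation can only clear bits.
        # Any write needing a 0 -> 1 transition requires an erase first.
        self.mem[addr] &= value

    def read(self, addr: int) -> int:
        return self.mem[addr]

flash = ToyFlash(blocks=4, block_size=16)
flash.program(0, 0x55)
assert flash.read(0) == 0x55
flash.program(0, 0xAA)          # overwrite without erase: bits only clear
assert flash.read(0) == 0x00    # 0x55 & 0xAA == 0x00, data is destroyed
flash.erase_block(0)            # block erase restores all 1s
flash.program(0, 0xAA)
assert flash.read(0) == 0xAA
```

This is why flash controllers and file systems relocate updated data to pre-erased blocks rather than rewriting in place, whereas a byte-programmable EEPROM can update a single byte directly.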
According to Toshiba, the name "flash" was suggested by Masuoka's colleague, Shōji Ariizumi, because the erasure process of the memory contents reminded him of the flash of a camera.[27] Masuoka and colleagues presented the invention of NOR flash in 1984,[28][29] and then NAND flash at the IEEE 1987 International Electron Devices Meeting (IEDM) held in San Francisco.[30] Toshiba commercially launched NAND flash memory in 1987.[1][19] Intel Corporation introduced the first commercial NOR-type flash chip in 1988.[31] NOR-based flash has long erase and write times, but provides full address and data buses, allowing random access to any memory location. This makes it a suitable replacement for older read-only memory (ROM) chips, which are used to store program code that rarely needs to be updated, such as a computer's BIOS or the firmware of set-top boxes. Its endurance may range from as little as 100 erase cycles for an on-chip flash memory,[32] to a more typical 10,000 or 100,000 erase cycles, up to 1,000,000 erase cycles.[33] NOR-based flash was the basis of early flash-based removable media; CompactFlash was originally based on it, although later cards moved to less expensive NAND flash. NAND flash has reduced erase and write times, and requires less chip area per cell, thus allowing greater storage density and lower cost per bit than NOR flash. However, the I/O interface of NAND flash does not provide a random-access external address bus. Rather, data must be read on a block-wise basis, with typical block sizes of hundreds to thousands of bits. This makes NAND flash unsuitable as a drop-in replacement for program ROM, since most microprocessors and microcontrollers require byte-level random access. In this regard, NAND flash is similar to other secondary data storage devices, such as hard disks and optical media, and is thus highly suitable for use in mass-storage devices, such as memory cards and solid-state drives (SSDs). For example, SSDs store data using multiple NAND flash memory chips.
The first NAND-based removable memory card format was SmartMedia, released in 1995. Many others followed, including MultiMediaCard, Secure Digital, Memory Stick, and xD-Picture Card. A new generation of memory card formats, including RS-MMC, miniSD and microSD, feature extremely small form factors. For example, the microSD card has an area of just over 1.5 cm², with a thickness of less than 1 mm. NAND flash has achieved significant levels of memory density as a result of several major technologies that were commercialized during the late 2000s to early 2010s.[34] NOR flash was the most common type of flash memory sold until 2005, when NAND flash overtook NOR flash in sales.[35] Multi-level cell (MLC) technology stores more than one bit in each memory cell. NEC demonstrated multi-level cell (MLC) technology in 1998, with an 80 Mb flash memory chip storing 2 bits per cell.[36] STMicroelectronics also demonstrated MLC in 2000, with a 64 MB NOR flash memory chip.[37] In 2009, Toshiba and SanDisk introduced NAND flash chips with QLC technology, storing 4 bits per cell and holding a capacity of 64 Gb.[38][39] Samsung Electronics introduced triple-level cell (TLC) technology, storing 3 bits per cell, and began mass-producing NAND chips with TLC technology in 2010.[40] Charge trap flash (CTF) technology replaces the polysilicon floating gate, which is sandwiched between a blocking gate oxide above and a tunneling oxide below it, with an electrically insulating silicon nitride layer; the silicon nitride layer traps electrons. In theory, CTF is less prone to electron leakage, providing improved data retention.[41][42][43][44][45][46] Because CTF replaces the polysilicon with an electrically insulating nitride, it allows for smaller cells and higher endurance (lower degradation or wear). However, electrons can become trapped and accumulate in the nitride, leading to degradation. Leakage is exacerbated at high temperatures, since electrons become more excited with increasing temperature.
CTF technology, however, still uses a tunneling oxide and a blocking layer, which are the weak points of the technology, since they can still be damaged in the usual ways (the tunnel oxide can be degraded by extremely high electric fields, and the blocking layer by anode hot hole injection (AHHI)).[47][48] Degradation or wear of the oxides is the reason why flash memory has limited endurance. Data retention goes down (the potential for data loss increases) with increasing degradation, since the oxides lose their electrically insulating characteristics as they degrade. The oxides must insulate against electrons to prevent them from leaking, which would cause data loss. In 1991, NEC researchers, including N. Kodama, K. Oyama and Hiroki Shirai, described a type of flash memory with a charge-trap method.[49] In 1998, Boaz Eitan of Saifun Semiconductors (later acquired by Spansion) patented a flash memory technology named NROM that took advantage of a charge-trapping layer to replace the conventional floating gate used in conventional flash memory designs.[50] In 2000, an Advanced Micro Devices (AMD) research team led by Richard M. Fastow, Egyptian engineer Khaled Z. Ahmed and Jordanian engineer Sameer Haddad (who later joined Spansion) demonstrated a charge-trapping mechanism for NOR flash memory cells.[51] CTF was later commercialized by AMD and Fujitsu in 2002.[52] 3D V-NAND (vertical NAND) technology stacks NAND flash memory cells vertically within a chip using 3D charge trap flash (CTF) technology.
3D V-NAND technology was first announced by Toshiba in 2007,[53] and the first device, with 24 layers, was commercialized by Samsung Electronics in 2013.[54][55] 3D integrated circuit (3D IC) technology stacks integrated circuit (IC) chips vertically into a single 3D IC package.[34] Toshiba introduced 3D IC technology to NAND flash memory in April 2007, when it debuted a 16 GB eMMC-compliant embedded NAND flash memory package (product number THGAM0G7D8DBAI6, often abbreviated THGAM on consumer websites), which was manufactured with eight stacked 2 GB NAND flash chips.[56] In September 2007, Hynix Semiconductor (now SK Hynix) introduced 24-layer 3D IC technology, with a 16 GB flash memory package that was manufactured with 24 stacked NAND flash chips using a wafer bonding process.[57] Toshiba also used an eight-layer 3D IC for its 32 GB THGBM flash package in 2008.[58] In 2010, Toshiba used a 16-layer 3D IC for its 128 GB THGBM2 flash package, which was manufactured with 16 stacked 8 GB chips.[59] In the 2010s, 3D ICs came into widespread commercial use for NAND flash memory in mobile devices.[34] In 2016, Micron and Intel introduced a technology known as CMOS Under the Array (CUA), Core over Periphery (COP), Periphery Under Cell (PUC), or Xtacking,[60] in which the control circuitry for the flash memory is placed under or above the flash memory cell array. This has allowed an increase in the number of planes or sections a flash memory chip has, from two planes to four, without increasing the area dedicated to the control or periphery circuitry.
This increases the number of IO operations per flash chip or die, but it also introduces challenges when building capacitors for the charge pumps used to write to the flash memory.[61][62][63] Some flash dies have as many as 6 planes.[64]

As of August 2017, microSD cards with a capacity up to 400 GB (400 billion bytes) were available.[65][66] Samsung combined 3D IC chip stacking with its 3D V-NAND and TLC technologies to manufacture its 512 GB KLUFG8R1EM flash memory package with eight stacked 64-layer V-NAND chips.[8] In 2019, Samsung produced a 1024 GB flash package, with eight stacked 96-layer V-NAND chips and with QLC technology.[67][68] In 2025, researchers announced experimental success with a device with a 400-picosecond write time.[69]

Flash memory stores information in an array of memory cells made from floating-gate transistors. In single-level cell (SLC) devices, each cell stores only one bit of information. Multi-level cell (MLC) devices, including triple-level cell (TLC) devices, can store more than one bit per cell. The floating gate may be conductive (typically polysilicon in most kinds of flash memory) or non-conductive (as in SONOS flash memory).[70]

In flash memory, each memory cell resembles a standard metal–oxide–semiconductor field-effect transistor (MOSFET) except that the transistor has two gates instead of one. The cells can be seen as an electrical switch in which current flows between two terminals (source and drain) and is controlled by a floating gate (FG) and a control gate (CG). The CG is similar to the gate in other MOS transistors, but below this is the FG, which is insulated all around by an oxide layer. The FG is interposed between the CG and the MOSFET channel. Because the FG is electrically isolated by its insulating layer, electrons placed on it are trapped. When the FG is charged with electrons, this charge screens the electric field from the CG, thus increasing the threshold voltage (VT) of the cell.
This means that the VT of the cell can be changed between the uncharged FG threshold voltage (VT1) and the higher charged FG threshold voltage (VT2) by changing the FG charge. In order to read a value from the cell, an intermediate voltage (VI) between VT1 and VT2 is applied to the CG. If the channel conducts at VI, the FG must be uncharged (if it were charged, there would not be conduction because VI is less than VT2). If the channel does not conduct at VI, it indicates that the FG is charged. The binary value of the cell is sensed by determining whether there is current flowing through the transistor when VI is asserted on the CG. In a multi-level cell device, which stores more than one bit per cell, the amount of current flow is sensed (rather than simply its presence or absence), in order to determine more precisely the level of charge on the FG.

Floating gate MOSFETs are so named because there is an electrically insulating tunnel oxide layer between the floating gate and the silicon, so the gate "floats" above the silicon. The oxide keeps the electrons confined to the floating gate. Degradation or wear (and the limited endurance of floating gate flash memory) occurs due to the extremely high electric field (10 million volts per centimeter) experienced by the oxide. Such high field strengths can break atomic bonds over time in the relatively thin oxide, gradually degrading its electrically insulating properties and allowing electrons to become trapped in the oxide or leak freely out of the floating gate, increasing the likelihood of data loss, since the electrons (whose quantity is used to represent the different charge levels, each assigned to a different combination of bits in MLC flash) are normally held in the floating gate. This is why data retention goes down and the risk of data loss increases with increasing degradation.[71][72][45][73][74] The silicon oxide in a cell degrades with every erase operation.
The degradation increases the amount of negative charge in the cell over time due to electrons trapped in the oxide, and negates some of the control gate voltage. Over time, this also makes erasing the cell slower; to maintain the performance and reliability of the NAND chip, the cell must be retired from use. Endurance also decreases with the number of bits in a cell. With more bits in a cell, the number of possible states (each represented by a different voltage level) increases, and the cell is more sensitive to the voltages used for programming. Voltages may be adjusted to compensate for degradation of the silicon oxide, but as the number of bits increases, the number of possible states also increases, and thus the cell is less tolerant of adjustments to programming voltages, because there is less space between the voltage levels that define each state.[75]

The process of moving electrons from the control gate into the floating gate is called Fowler–Nordheim tunneling, and it fundamentally changes the characteristics of the cell by increasing the MOSFET's threshold voltage. This, in turn, changes the drain–source current that flows through the transistor for a given gate voltage, which is ultimately used to encode a binary value. The Fowler–Nordheim tunneling effect is reversible, so electrons can be added to or removed from the floating gate, processes traditionally known as writing and erasing.[76]

Despite the need for relatively high programming and erasing voltages, virtually all flash chips today require only a single supply voltage and produce the high voltages that are required using on-chip charge pumps. Over half the energy used by a 1.8 V NAND flash chip is lost in the charge pump itself.
Since boost converters are inherently more efficient than charge pumps, researchers developing low-power SSDs have proposed returning to the dual Vcc/Vpp supply voltages used on all early flash chips, driving the high Vpp voltage for all flash chips in an SSD with a single shared external boost converter.[77][78][79][80][81][82][83][84]

In spacecraft and other high-radiation environments, the on-chip charge pump is the first part of the flash chip to fail, although flash memories will continue to work – in read-only mode – at much higher radiation levels.[85]

In NOR flash, each cell has one end connected directly to ground, and the other end connected directly to a bit line. This arrangement is called "NOR flash" because it acts like a NOR gate: when one of the word lines (connected to the cell's CG) is brought high, the corresponding storage transistor acts to pull the output bit line low. NOR flash continues to be the technology of choice for embedded applications requiring a discrete non-volatile memory device.[citation needed] The low read latencies characteristic of NOR devices allow for both direct code execution and data storage in a single memory product.[86]

A single-level NOR flash cell in its default state is logically equivalent to a binary "1" value, because current will flow through the channel under application of an appropriate voltage to the control gate, so that the bitline voltage is pulled down. A NOR flash cell can be programmed, or set to a binary "0" value, by the following procedure:

To erase a NOR flash cell (resetting it to the "1" state), a large voltage of the opposite polarity is applied between the CG and source terminal, pulling the electrons off the FG through Fowler–Nordheim tunneling (FN tunneling).[87] This is known as negative gate source erase.
Newer NOR memories can erase using negative gate channel erase, which biases the wordline on a NOR memory cell block and the P-well of the memory cell block to allow FN tunneling to be carried out, erasing the cell block. Older memories used source erase, in which a high voltage was applied to the source and electrons from the FG were moved to the source.[88][89] Modern NOR flash memory chips are divided into erase segments (often called blocks or sectors). The erase operation can be performed only on a block-wise basis; all the cells in an erase segment must be erased together.[90] Programming of NOR cells, however, generally can be performed one byte or word at a time.

NAND flash also uses floating-gate transistors, but they are connected in a way that resembles a NAND gate: several transistors are connected in series, and the bit line is pulled low only if all the word lines are pulled high (above the transistors' VT). These groups are then connected via some additional transistors to a NOR-style bit line array in the same way that single transistors are linked in NOR flash.

Compared to NOR flash, replacing single transistors with serial-linked groups adds an extra level of addressing. Whereas NOR flash might address memory by page then word, NAND flash might address it by page, word and bit. Bit-level addressing suits bit-serial applications (such as hard disk emulation), which access only one bit at a time. Execute-in-place applications, on the other hand, require every bit in a word to be accessed simultaneously. This requires word-level addressing. In any case, both bit and word addressing modes are possible with either NOR or NAND flash.

To read data, first the desired group is selected (in the same way that a single transistor is selected from a NOR array). Next, most of the word lines are pulled up above VT2, while one of them is pulled up to VI. The series group will conduct (and pull the bit line low) if the selected bit has not been programmed.
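The threshold-voltage read decision described earlier (applying an intermediate voltage VI between VT1 and VT2 and sensing whether the channel conducts) can be sketched in a few lines of Python. The voltage values are hypothetical, chosen only so that VT1 < VI < VT2:

```python
# Sketch of the SLC read decision. The voltages are hypothetical.
VT1 = 1.0   # threshold voltage with an uncharged floating gate (assumed)
VT2 = 5.0   # threshold voltage with a charged floating gate (assumed)
VI = (VT1 + VT2) / 2  # intermediate read voltage applied to the control gate

def read_cell(fg_charged: bool) -> int:
    """Return the stored bit: the channel conducts only when VI exceeds
    the cell's current threshold voltage."""
    threshold = VT2 if fg_charged else VT1
    conducts = VI > threshold
    # A conducting channel means the FG is uncharged -> logical 1 (erased).
    return 1 if conducts else 0

assert read_cell(fg_charged=False) == 1  # erased cell reads as 1
assert read_cell(fg_charged=True) == 0   # programmed cell reads as 0
```

An MLC read would extend this by comparing the sensed current against several reference levels instead of a single conduct/no-conduct decision.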
Despite the additional transistors, the reduction in ground wires and bit lines allows a denser layout and greater storage capacity per chip. (The ground wires and bit lines are actually much wider than the lines in the diagrams.) In addition, NAND flash is typically permitted to contain a certain number of faults (NOR flash, as used for a BIOS ROM, is expected to be fault-free). Manufacturers try to maximize the amount of usable storage by shrinking the size of the transistors or cells; however, the industry can avoid this and achieve higher storage densities per die by using 3D NAND, which stacks cells on top of each other.

NAND flash cells are read by analysing their response to various voltages.[73] NAND flash uses tunnel injection for writing and tunnel release for erasing. NAND flash memory forms the core of the removable USB storage devices known as USB flash drives, as well as most memory card formats and solid-state drives available today.

The hierarchical structure of NAND flash starts at a cell level which establishes strings, then pages, blocks, planes and ultimately a die. A string is a series of connected NAND cells in which the source of one cell is connected to the drain of the next one. Depending on the NAND technology, a string typically consists of 32 to 128 NAND cells. Strings are organised into pages, which are then organised into blocks, in which each string is connected to a separate line called a bitline. All cells with the same position in the string are connected through the control gates by a wordline. A plane contains a certain number of blocks that are connected through the same bitline. A flash die consists of one or more planes, and the peripheral circuitry that is needed to perform all the read, write, and erase operations.

The architecture of NAND flash means that data can be read and programmed (written) in pages, typically between 4 KiB and 16 KiB in size, but can only be erased at the level of entire blocks consisting of multiple pages.
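The page/block/plane/die hierarchy just described lends itself to simple capacity arithmetic. The following sketch uses hypothetical but plausible parameters (they are not taken from any particular datasheet):

```python
# Illustrative arithmetic for the NAND hierarchy: pages make up blocks,
# blocks make up planes, planes make up a die. Parameters are hypothetical.
page_size_bytes = 16 * 1024    # one 16 KiB page (the read/program unit)
pages_per_block = 128          # a block is the smallest erase unit
blocks_per_plane = 1024
planes_per_die = 2

block_size_bytes = page_size_bytes * pages_per_block
die_size_bytes = block_size_bytes * blocks_per_plane * planes_per_die

assert block_size_bytes == 2 * 1024 ** 2   # 2 MiB erase block
assert die_size_bytes == 4 * 1024 ** 3     # 4 GiB die
```

The key asymmetry to notice is that the program unit (one page) is three orders of magnitude smaller than the erase unit (one block), which is what forces the copy-and-remap behaviour described below.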
When a block is erased, all the cells are logically set to 1. Data can only be programmed in one pass to a page in a block that was erased. The programming process sets one or more cells from 1 to 0. Any cells that have been set to 0 by programming can only be reset to 1 by erasing the entire block. This means that before new data can be programmed into a page that already contains data, the current contents of the page plus the new data must be copied to a new, erased page. If a suitable erased page is available, the data can be written to it immediately. If no erased page is available, a block must be erased before copying the data to a page in that block. The old page is then marked as invalid and is available for erasing and reuse.[91] This differs from the operating system's LBA view: for example, if the operating system writes 1100 0011 to the flash storage device (such as an SSD), the data actually written to the flash memory may be 0011 1100.

Vertical NAND (V-NAND) or 3D NAND memory stacks memory cells vertically and uses a charge trap flash architecture. The vertical layers allow larger areal bit densities without requiring smaller individual cells.[92] It is also sold under the trademark BiCS Flash, which is a trademark of Kioxia Corporation (formerly Toshiba Memory Corporation). 3D NAND was first announced by Toshiba in 2007.[53] V-NAND was first commercially manufactured by Samsung Electronics in 2013.[54][55][93][94]

V-NAND uses a charge trap flash geometry (commercially introduced in 2002 by AMD and Fujitsu)[52] that stores charge on an embedded silicon nitride film. Such a film is more robust against point defects and can be made thicker to hold larger numbers of electrons.
V-NAND wraps a planar charge trap cell into a cylindrical form.[92] As of 2020, 3D NAND flash memories by Micron and Intel instead use floating gates; however, Micron 128-layer and above 3D NAND memories use a conventional charge trap structure, due to the dissolution of the partnership between Micron and Intel. Charge trap 3D NAND flash is thinner than floating gate 3D NAND. In floating gate 3D NAND, the memory cells are completely separated from one another, whereas in charge trap 3D NAND, vertical groups of memory cells share the same silicon nitride material.[95]

An individual memory cell is made up of one planar polysilicon layer containing a hole filled by multiple concentric vertical cylinders. The hole's polysilicon surface acts as the gate electrode. The outermost silicon dioxide cylinder acts as the gate dielectric, enclosing a silicon nitride cylinder that stores charge, in turn enclosing a silicon dioxide cylinder as the tunnel dielectric that surrounds a central rod of conducting polysilicon which acts as the conducting channel.[92]

Memory cells in different vertical layers do not interfere with each other, as the charges cannot move vertically through the silicon nitride storage medium, and the electric fields associated with the gates are closely confined within each layer. The vertical collection is electrically identical to the serial-linked groups in which conventional NAND flash memory is configured.[92]

There is also string stacking, which builds several 3D NAND memory arrays or "plugs"[96] separately, but stacked together to create a product with a higher number of 3D NAND layers on a single die. Often, two or three arrays are stacked. The misalignment between plugs is on the order of 10 to 30 nm.[61][97][98]

Growth of a group of V-NAND cells begins with an alternating stack of conducting (doped) polysilicon layers and insulating silicon dioxide layers.[92] The next step is to form a cylindrical hole through these layers.
In practice, a 128 Gbit V-NAND chip with 24 layers of memory cells requires about 2.9 billion such holes. Next, the hole's inner surface receives multiple coatings, first silicon dioxide, then silicon nitride, then a second layer of silicon dioxide. Finally, the hole is filled with conducting (doped) polysilicon.[92]

As of 2013, V-NAND flash architecture allows read and write operations twice as fast as conventional NAND and can last up to 10 times as long, while consuming 50 percent less power. It offers comparable physical bit density using 10 nm lithography but may be able to increase bit density by up to two orders of magnitude, given V-NAND's use of up to several hundred layers.[92] As of 2020, V-NAND chips with 160 layers are under development by Samsung.[99] As the number of layers increases, the capacity and endurance of flash memory may be increased.

The wafer cost of 3D NAND is comparable with that of scaled-down (32 nm or less) planar NAND flash.[100] With planar NAND scaling stopping at 16 nm, the cost-per-bit reduction can continue with 3D NAND, starting with 16 layers. However, due to the non-vertical sidewall of the hole etched through the layers, even a slight deviation leads to a minimum bit cost, i.e., a minimum equivalent design rule (or maximum density), for a given number of layers; this minimum-bit-cost layer number decreases for smaller hole diameters.[101]

One limitation of flash memory is that it can be erased only a block at a time. This generally sets all bits in the block to 1. Starting with a freshly erased block, any location within that block can be programmed. However, once a bit has been set to 0, only by erasing the entire block can it be changed back to 1. In other words, flash memory (specifically NOR flash) offers random-access read and programming operations but does not offer arbitrary random-access rewrite or erase operations.
A location can, however, be rewritten as long as the new value's 0 bits are a superset of the overwritten value's. For example, a nibble value may be erased to 1111, then written as 1110. Successive writes to that nibble can change it to 1010, then 0010, and finally 0000. Essentially, erasure sets all bits to 1, and programming can only clear bits to 0.[102] Some file systems designed for flash devices make use of this rewrite capability, for example YAFFS1, to represent sector metadata. Other flash file systems, such as YAFFS2, never make use of this "rewrite" capability – they do a lot of extra work to meet a "write once rule". Although data structures in flash memory cannot be updated in completely general ways, this allows members to be "removed" by marking them as invalid. This technique may need to be modified for multi-level cell devices, where one memory cell holds more than one bit.

Common flash devices such as USB flash drives and memory cards provide only a block-level interface, or flash translation layer (FTL), which writes to a different cell each time to wear-level the device. This prevents incremental writing within a block; however, it helps keep the device from being prematurely worn out by intensive write patterns.

Data stored on flash cells is steadily lost due to electron detrapping[definition needed]. The rate of loss increases exponentially as the absolute temperature increases. For example, for a 45 nm NOR flash at 1000 hours, the threshold voltage (Vt) loss at 25 °C is about half that at 90 °C.[103]

Another limitation is that flash memory has a finite number of program–erase cycles (typically written as P/E cycles).[104][105] Micron Technology and Sun Microsystems announced an SLC NAND flash memory chip rated for 1,000,000 P/E cycles on 17 December 2008.[106] The guaranteed cycle count may apply only to block zero (as is the case with TSOP NAND devices), or to all blocks (as in NOR).
This effect is mitigated in some chip firmware or file system drivers by counting the writes and dynamically remapping blocks in order to spread write operations between sectors; this technique is called wear leveling. Another approach is to perform write verification and remapping to spare sectors in case of write failure, a technique called bad block management (BBM). For portable consumer devices, these wear-out management techniques typically extend the life of the flash memory beyond the life of the device itself, and some data loss may be acceptable in these applications. For high-reliability data storage, however, it is not advisable to use flash memory that would have to go through a large number of programming cycles. This limitation also applies to "read-only" applications such as thin clients and routers, which are programmed only once or at most a few times during their lifetimes, due to read disturb (see below).

In December 2012, Taiwanese engineers from Macronix revealed their intention to announce at the 2012 IEEE International Electron Devices Meeting that they had figured out how to improve NAND flash storage read/write cycles from 10,000 to 100 million cycles using a "self-healing" process that used a flash chip with "onboard heaters that could anneal small groups of memory cells."[107] The built-in thermal annealing was to replace the usual erase cycle with a local high-temperature process that not only erased the stored charge, but also repaired the electron-induced stress in the chip, giving write cycles of at least 100 million.[108] The result was to be a chip that could be erased and rewritten over and over, even when it should theoretically break down.
As promising as Macronix's breakthrough might have been for the mobile industry, however, there were no plans for a commercial product featuring this capability to be released any time in the near future.[109]

The method used to read NAND flash memory can cause nearby cells in the same memory block to change over time (become programmed). This is known as read disturb. The threshold number of reads is generally in the hundreds of thousands of reads between intervening erase operations. If reading continually from one cell, that cell will not fail, but rather one of the surrounding cells will on a subsequent read. To avoid the read disturb problem, the flash controller will typically count the total number of reads to a block since the last erase. When the count exceeds a target limit, the affected block is copied over to a new block, erased, then released to the block pool. The original block is as good as new after the erase. If the flash controller does not intervene in time, however, a read disturb error will occur, with possible data loss if the errors are too numerous to correct with an error-correcting code.[110][111][112]

Most flash ICs come in ball grid array (BGA) packages, and even the ones that do not are often mounted on a PCB next to other BGA packages. After PCB assembly, boards with BGA packages are often X-rayed to see if the balls are making proper connections to the proper pad, or if the BGA needs rework. These X-rays can erase programmed bits in a flash chip (convert programmed "0" bits into erased "1" bits). Erased bits ("1" bits) are not affected by X-rays.[113][114] Some manufacturers are now making X-ray-proof SD[115] and USB[116] memory devices.

The low-level interface to flash memory chips differs from those of other memory types such as DRAM, ROM, and EEPROM, which support bit-alterability (both zero to one and one to zero) and random access via externally accessible address buses. NOR memory has an external address bus for reading and programming.
For NOR memory, reading and programming are random-access, and unlocking and erasing are block-wise. For NAND memory, reading and programming are page-wise, and unlocking and erasing are block-wise.

Reading from NOR flash is similar to reading from random-access memory, provided the address and data bus are mapped correctly. Because of this, most microprocessors can use NOR flash memory as execute-in-place (XIP) memory,[117] meaning that programs stored in NOR flash can be executed directly from the NOR flash without needing to be copied into RAM first. NOR flash may be programmed in a random-access manner similar to reading. Programming changes bits from a logical one to a zero. Bits that are already zero are left unchanged. Erasure must happen a block at a time, and resets all the bits in the erased block back to one. Typical block sizes are 64, 128, or 256 KiB.

Bad block management is a relatively new feature in NOR chips. In older NOR devices not supporting bad block management, the software or device driver controlling the memory chip must correct for blocks that wear out, or the device will cease to work reliably.

The specific commands used to lock, unlock, program, or erase NOR memories differ for each manufacturer. To avoid needing unique driver software for every device made, special Common Flash Memory Interface (CFI) commands allow the device to identify itself and its critical operating parameters.

Besides its use as random-access ROM, NOR flash can also be used as a storage device, by taking advantage of random-access programming. Some devices offer read-while-write functionality so that code continues to execute even while a program or erase operation is occurring in the background. For sequential data writes, NOR flash chips typically have slow write speeds compared with NAND flash.
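The one-way nature of programming described above (bits can only go from 1 to 0; an erase is required to return them to 1) can be modeled as a bitwise AND. This is a toy sketch of the semantics, not any vendor's command set:

```python
ERASED = 0xFF  # an erased byte: all bits set to 1

def program(current: int, data: int) -> int:
    """Programming can only clear bits, so the cell ends up holding the
    AND of its old contents and the new data; 0 bits never return to 1."""
    return current & data

byte = ERASED
byte = program(byte, 0b11101110)   # first write lands as intended
assert byte == 0b11101110
byte = program(byte, 0b11111111)   # trying to set bits back to 1...
assert byte == 0b11101110          # ...changes nothing: an erase is needed
```

The same AND rule explains the earlier nibble example (1111 → 1110 → 1010 → 0010 → 0000): each successive value's 0 bits are a superset of the previous value's, so no intervening erase is required.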
Typical NOR flash does not need an error correcting code.[118]

NAND flash architecture was introduced by Toshiba in 1989.[119] These memories are accessed much like block devices, such as hard disks. Each block consists of a number of pages. The pages are typically 512,[120] 2,048, or 4,096 bytes in size. Associated with each page are a few bytes (typically 1/32 of the data size) that can be used for storage of an error correcting code (ECC) checksum. Modern NAND flash may have an erase block size between 1 MiB and 128 MiB. While reading and programming are performed on a page basis, erasure can only be performed on a block basis.[123] Because changing a cell from 0 to 1 requires erasing the entire block rather than modifying individual pages, modifying the data of a block may need a read–erase–write process, with the new data actually moved to another block. In addition, an NVM Express Zoned Namespaces SSD usually uses the flash block size as the zone size.

NAND devices also require bad block management by the device driver software or by the flash memory controller chip. Some SD cards, for example, include controller circuitry to perform bad block management and wear leveling. When a logical block is accessed by high-level software, it is mapped to a physical block by the device driver or controller. A number of blocks on the flash chip may be set aside for storing mapping tables to deal with bad blocks, or the system may simply check each block at power-up to create a bad block map in RAM. The overall memory capacity gradually shrinks as more blocks are marked as bad.

NAND relies on ECC to compensate for bits that may spontaneously fail during normal device operation. A typical ECC will correct a one-bit error in each 2048 bits (256 bytes) using 22 bits of ECC, or a one-bit error in each 4096 bits (512 bytes) using 24 bits of ECC.[124] If the ECC cannot correct the error during read, it may still detect the error.
When doing erase or program operations, the device can detect blocks that fail to program or erase and mark them bad. The data is then written to a different, good block, and the bad block map is updated.

Hamming codes are the most commonly used ECC for SLC NAND flash. Reed–Solomon codes and BCH codes (Bose–Chaudhuri–Hocquenghem codes) are commonly used ECC for MLC NAND flash. Some MLC NAND flash chips internally generate the appropriate BCH error correction codes.[118]

Most NAND devices are shipped from the factory with some bad blocks. These are typically marked according to a specified bad block marking strategy. By allowing some bad blocks, manufacturers achieve far higher yields than would be possible if all blocks had to be verified to be good. This significantly reduces NAND flash costs and only slightly decreases the storage capacity of the parts.

When executing software from NAND memories, virtual memory strategies are often used: memory contents must first be paged or copied into memory-mapped RAM and executed there (leading to the common combination of NAND + RAM). A memory management unit (MMU) in the system is helpful, but this can also be accomplished with overlays. For this reason, some systems will use a combination of NOR and NAND memories, where a smaller NOR memory is used as software ROM and a larger NAND memory is partitioned with a file system for use as a non-volatile data storage area.

NAND sacrifices the random-access and execute-in-place advantages of NOR. NAND is best suited to systems requiring high-capacity data storage. It offers higher densities, larger capacities, and lower cost. It has faster erases, sequential writes, and sequential reads.

A group called the Open NAND Flash Interface Working Group (ONFI) has developed a standardized low-level interface for NAND flash chips. This allows interoperability between conforming NAND devices from different vendors. The ONFI specification version 1.0[125] was released on 28 December 2006.
It specifies:

The ONFI group is supported by major NAND flash manufacturers, including Hynix, Intel, Micron Technology, and Numonyx, as well as by major manufacturers of devices incorporating NAND flash chips.[126]

Two major flash device manufacturers, Toshiba and Samsung, have chosen to use an interface of their own design known as Toggle Mode (and now Toggle). This interface is not pin-to-pin compatible with the ONFI specification. The result is that a product designed for one vendor's devices may not be able to use another vendor's devices.[127]

A group of vendors, including Intel, Dell, and Microsoft, formed a Non-Volatile Memory Host Controller Interface (NVMHCI) Working Group.[128] The goal of the group is to provide standard software and hardware programming interfaces for nonvolatile memory subsystems, including the "flash cache" device connected to the PCI Express bus.

NOR and NAND flash differ in two important ways: the connections of the individual memory cells, and the interface provided for reading and writing the memory.

NOR[135] and NAND flash get their names from the structure of the interconnections between memory cells.[136] In NOR flash, cells are connected in parallel to the bit lines, allowing cells to be read and programmed individually.[137] The parallel connection of cells resembles the parallel connection of transistors in a CMOS NOR gate.[138] In NAND flash, cells are connected in series,[137] resembling a CMOS NAND gate. The series connections consume less space than parallel ones, reducing the cost of NAND flash.[137] This does not, by itself, prevent NAND cells from being read and programmed individually.[citation needed]

Each NOR flash cell is larger than a NAND flash cell – 10 F² vs 4 F² –[vague] even when using exactly the same semiconductor device fabrication and so each transistor, contact, etc.
is exactly the same size – because NOR flash cells require a separate metal contact for each cell.[139][140] Because of the series connection and removal of wordline contacts, a large grid of NAND flash memory cells will occupy perhaps only 60% of the area of equivalent NOR cells[141] (assuming the same CMOS process resolution, for example, 130 nm, 90 nm, or 65 nm).

NAND flash's designers realized that the area of a NAND chip, and thus the cost, could be further reduced by removing the external address and data bus circuitry. Instead, external devices could communicate with NAND flash via sequentially accessed command and data registers, which would internally retrieve and output the necessary data. This design choice made random access of NAND flash memory impossible, but the goal of NAND flash was to replace mechanical hard disks, not to replace ROMs.

The first GSM phones and many feature phones had NOR flash memory, from which processor instructions could be executed directly in an execute-in-place architecture, allowing for short boot times. With smartphones, NAND flash memory was adopted as it has larger storage capacities and lower costs, but it causes longer boot times because instructions cannot be executed from it directly and must first be copied to RAM before execution.[142]

The write endurance of SLC floating-gate NOR flash is typically equal to or greater than that of NAND flash, while MLC NOR and NAND flash have similar endurance capabilities. Examples of endurance cycle ratings listed in datasheets for NAND and NOR flash, as well as in storage devices using flash memory, are provided.[144] However, by applying certain algorithms and design paradigms such as wear leveling and memory over-provisioning, the endurance of a storage system can be tuned to serve specific requirements.[175]

In order to compute the longevity of NAND flash, one must account for the size of the memory chip, the type of memory (e.g. SLC/MLC/TLC), and the use pattern.
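A back-of-the-envelope lifetime estimate combining those factors (capacity, P/E rating, and write pattern) might look like the following. All figures are hypothetical, and real devices also depend on write amplification and over-provisioning, which is why an assumed write-amplification factor appears here:

```python
# Rough endurance estimate from capacity, P/E cycle rating and write rate.
# All parameter values are hypothetical examples, not datasheet figures.
def lifetime_years(capacity_gb, pe_cycles, gb_written_per_day,
                   write_amplification=2.0):
    total_writes_gb = capacity_gb * pe_cycles          # raw endurance budget
    effective_daily = gb_written_per_day * write_amplification
    return total_writes_gb / effective_daily / 365

# Example: a 256 GB TLC drive rated for 1,000 P/E cycles,
# with 20 GB of host writes per day:
years = lifetime_years(256, 1000, 20)
print(round(years, 1), "years")  # ≈ 17.5
```

The same arithmetic shows why SLC parts rated for 100,000 cycles can tolerate write-heavy industrial workloads that would wear out a TLC part quickly.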
Industrial NAND and server NAND are in demand due to their capacity, longer endurance, and reliability in sensitive environments. As the number of bits per cell increases, the performance and life of NAND flash may degrade; random read times increase to 100 μs for TLC NAND, four times the time required for SLC NAND and twice that required for MLC NAND.[75] Because of the particular characteristics of flash memory, it is best used either with a controller to perform wear leveling and error correction or with specifically designed flash file systems, which spread writes over the media and deal with the long erase times of NOR flash blocks. The basic concept behind flash file systems is the following: when the flash store is to be updated, the file system writes a new copy of the changed data to a fresh block, remaps the file pointers, and then erases the old block later when it has time. In practice, flash file systems are used only for memory technology devices (MTDs), which are embedded flash memories that do not have a controller. Removable flash memory cards, SSDs, eMMC/eUFS chips, and USB flash drives have built-in controllers to perform wear leveling and error correction, so use of a specific flash file system may add no benefit. Multiple chips are often arrayed or die-stacked to achieve higher capacities[176] for use in consumer electronic devices such as multimedia players or GPS units. The capacity scaling (increase) of flash chips used to follow Moore's law because they are manufactured with many of the same integrated circuit techniques and equipment. Since the introduction of 3D NAND, scaling is no longer necessarily associated with Moore's law, since ever smaller transistors (cells) are no longer used. Consumer flash storage devices typically are advertised with usable sizes expressed as a small integer power of two (2, 4, 8, etc.) and a conventional designation of megabytes (MB) or gigabytes (GB); e.g., 512 MB, 8 GB.
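The copy-on-write idea behind flash file systems can be sketched as a toy model. The class and its structure are invented for this illustration; real flash translation layers and file systems are far more involved:

```python
# Illustrative sketch of the flash file system concept described above:
# updates go to a fresh block, the pointer is remapped, and the old
# block is erased later, "when there is time".

class ToyFlashStore:
    def __init__(self, num_blocks):
        self.blocks = [None] * num_blocks      # physical blocks
        self.free = list(range(num_blocks))    # erased, writable blocks
        self.map = {}                          # logical id -> physical block
        self.to_erase = []                     # stale blocks, erased lazily

    def write(self, logical_id, data):
        new_block = self.free.pop(0)           # write a new copy to a fresh block
        self.blocks[new_block] = data
        old_block = self.map.get(logical_id)
        self.map[logical_id] = new_block       # remap the file pointer
        if old_block is not None:
            self.to_erase.append(old_block)    # queue the old copy for erasure

    def garbage_collect(self):
        for b in self.to_erase:                # erase stale blocks lazily
            self.blocks[b] = None
            self.free.append(b)
        self.to_erase.clear()

    def read(self, logical_id):
        return self.blocks[self.map[logical_id]]

store = ToyFlashStore(4)
store.write("file", b"v1")
store.write("file", b"v2")   # new copy; the v1 block awaits garbage collection
print(store.read("file"))    # b'v2'
store.garbage_collect()
```

Spreading each update onto a fresh block is also what gives wear leveling for free: no single physical block is rewritten repeatedly.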
This includes SSDs marketed as hard drive replacements, which, like traditional hard drives, use decimal prefixes.[177] Thus, an SSD marked as "64 GB" is at least 64 × 1000³ bytes (64 GB). Most users will have slightly less capacity than this available for their files, due to the space taken by file system metadata and because some operating systems report SSD capacity using binary prefixes, which are somewhat larger than conventional decimal prefixes. The flash memory chips inside these devices are sized in strict binary multiples, but their actual total capacity is not usable at the drive interface. It is considerably larger than the advertised capacity in order to allow for distribution of writes (wear leveling), for sparing, for error correction codes, and for other metadata needed by the device's internal firmware. In 2005, Toshiba and SanDisk developed a NAND flash chip capable of storing 1 GB of data using multi-level cell (MLC) technology, which stores two bits of data per cell. In September 2005, Samsung Electronics announced that it had developed the world's first 2 GB chip.[178] In March 2006, Samsung announced flash hard drives with a capacity of 4 GB, essentially the same order of magnitude as smaller laptop hard drives, and in September 2006, Samsung announced an 8 GB chip produced using a 40 nm manufacturing process.[179] In January 2008, SanDisk announced availability of their 16 GB MicroSDHC and 32 GB SDHC Plus cards.[180][181] More recent flash drives (as of 2012) have much greater capacities, holding 64, 128, and 256 GB.[182] A joint development at Intel and Micron allows the production of 32-layer 3.5 TB NAND flash sticks and 10 TB standard-sized SSDs. The device includes five packages of 16 × 48 GB TLC dies, using a floating-gate cell design.[183] Flash chips continue to be manufactured with capacities under or around 1 MB (e.g., for BIOS ROMs and embedded applications).
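The decimal-versus-binary prefix gap described above is easy to work out numerically:

```python
# A "64 GB" label uses the decimal prefix (1 GB = 1000^3 bytes); an OS
# reporting in binary prefixes (1 GiB = 1024^3 bytes) shows a smaller number
# for the same byte count.
advertised_gb = 64
decimal_bytes = advertised_gb * 1000**3      # what the drive label promises
binary_gib = decimal_bytes / 1024**3         # what a binary-prefix OS reports
print(f"{decimal_bytes} bytes = {binary_gib:.2f} GiB")
# 64000000000 bytes = 59.60 GiB
```

The roughly 7% discrepancy here is purely a units convention; it is separate from the capacity consumed by metadata and over-provisioning.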
In July 2016, Samsung announced the 4 TB Samsung 850 EVO, which utilizes their 256 Gbit 48-layer TLC 3D V-NAND.[184] In August 2016, Samsung announced a 32 TB 2.5-inch SAS SSD based on their 512 Gbit 64-layer TLC 3D V-NAND. Further, Samsung expected to unveil SSDs with up to 100 TB of storage by 2020.[185] Flash memory devices are typically much faster at reading than writing.[186] Performance also depends on the quality of storage controllers, which becomes more critical when devices are partially full.[186] Even when the only change to manufacturing is a die shrink, the absence of an appropriate controller can result in degraded speeds.[187] Serial flash is a small, low-power flash memory that provides only serial access to the data: rather than addressing individual bytes, the user reads or writes large contiguous groups of bytes in the address space serially. Serial Peripheral Interface Bus (SPI) is a typical protocol for accessing the device. When incorporated into an embedded system, serial flash requires fewer wires on the PCB than parallel flash memories, since it transmits and receives data one bit at a time. This may permit a reduction in board space, power consumption, and total system cost. A serial device, having fewer external pins than a parallel device, can significantly reduce overall cost for several reasons. There are two major SPI flash types. The first type is characterized by small blocks and one internal SRAM block buffer, allowing a complete block to be read into the buffer, partially modified, and then written back (for example, the Atmel AT45 DataFlash or the Micron Technology Page Erase NOR Flash). The second type has larger sectors; the smallest sectors typically found in this type of SPI flash are 4 KB, but they can be as large as 64 KB. Since this type of SPI flash lacks an internal SRAM buffer, the complete block must be read out and modified before being written back, making it slow to manage.
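The read-modify-write cycle that the second (bufferless) SPI flash type forces on the host can be sketched with an in-memory stand-in for a 4 KB-sector device. This is an invented illustration; real SPI command sets and timing differ per vendor:

```python
# Sketch: to change one byte on a bufferless 4 KB-sector device, the host
# must read out the whole sector, erase it, patch the copy in host RAM,
# and program the sector back. A bytearray stands in for the flash array.

SECTOR = 4096

def modify_byte(flash, addr, value):
    base = (addr // SECTOR) * SECTOR
    sector = bytearray(flash[base:base + SECTOR])  # read out the entire sector
    flash[base:base + SECTOR] = b"\xff" * SECTOR   # sector erase (all bits 1)
    sector[addr - base] = value                    # modify the copy in host RAM
    flash[base:base + SECTOR] = sector             # program the sector back

flash = bytearray(b"\xff" * 2 * SECTOR)
modify_byte(flash, 5000, 0x42)
print(hex(flash[5000]))  # 0x42
```

The overhead is visible in the sketch: a single-byte change costs a full 4 KB read plus a full 4 KB erase-and-program, which is why this type is slow to manage but well suited to mostly-read uses such as code shadowing.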
However, the second type is cheaper than the first and is therefore a good choice when the application is code shadowing. The two types are not easily interchangeable, since they do not have the same pinout and their command sets are incompatible. Most FPGAs are based on SRAM configuration cells and require an external configuration device, often a serial flash chip, to reload the configuration bitstream on every power cycle.[188] With the increasing speed of modern CPUs, parallel flash devices are often much slower than the memory bus of the computer they are connected to. By contrast, modern SRAM offers access times below 10 ns, while DDR2 SDRAM offers access times below 20 ns. Because of this, it is often desirable to shadow code stored in flash into RAM; that is, the code is copied from flash into RAM before execution, so that the CPU may access it at full speed. Device firmware may be stored in a serial flash chip and then copied into SDRAM or SRAM when the device is powered up.[189] Using an external serial flash device rather than on-chip flash removes the need for significant process compromise (a manufacturing process that is good for high-speed logic is generally not good for flash, and vice versa). Once it is decided to read the firmware in as one big block, it is common to add compression to allow a smaller flash chip to be used. Since 2005, many devices have used serial NOR flash in place of parallel NOR flash for firmware storage. Typical applications for serial NOR flash include storing firmware for hard drives, BIOS, option ROMs of expansion cards, DSL modems, etc. Another application for flash memory is as a replacement for hard disks. Flash memory does not have the mechanical limitations and latencies of hard drives, so a solid-state drive (SSD) is attractive in terms of speed, noise, power consumption, and reliability.
Flash drives are gaining traction as mobile device secondary storage; they are also used as substitutes for hard drives in high-performance desktop computers and in some servers with RAID and SAN architectures. Some aspects of flash-based SSDs remain unattractive, however. The cost per gigabyte of flash memory remains significantly higher than that of hard disks.[190] Flash memory also has a finite number of P/E (program/erase) cycles, but this currently seems to be under control, since warranties on flash-based SSDs are approaching those of current hard drives.[191] In addition, deleted files on SSDs can remain for an indefinite period of time before being overwritten by fresh data; erasure and shredding techniques and software that work well on magnetic hard disk drives have no effect on SSDs, compromising both security and forensic examination. However, because of the TRIM command employed by most solid-state drives, which marks the logical block addresses occupied by a deleted file as unused to enable garbage collection, data recovery software is not able to restore files deleted from such drives.
For relational databases or other systems that require ACID transactions, even a modest amount of flash storage can offer vast speedups over arrays of disk drives.[192] In May 2006, Samsung Electronics announced two flash-memory-based PCs, the Q1-SSD and Q30-SSD, which were expected to become available in June 2006; both used 32 GB SSDs and were at least initially available only in South Korea.[193] The Q1-SSD and Q30-SSD launch was delayed, and they finally shipped in late August 2006.[194] The first flash-memory-based PC to become available was the Sony Vaio UX90, announced for pre-order on 27 June 2006, which began shipping in Japan on 3 July 2006 with a 16 GB flash memory hard drive.[195] In late September 2006, Sony upgraded the flash memory in the Vaio UX90 to 32 GB.[196] A solid-state drive was offered as an option with the first MacBook Air, introduced in 2008, and from 2010 onwards all models shipped with an SSD. Starting in late 2011, as part of Intel's Ultrabook initiative, an increasing number of ultra-thin laptops have shipped with SSDs as standard. There are also hybrid techniques, such as hybrid drives and ReadyBoost, that attempt to combine the advantages of both technologies, using flash as a high-speed non-volatile cache for files on the disk that are often referenced but rarely modified, such as application and operating system executable files. On smartphones, NAND flash products such as eMMC and eUFS are used as file storage devices. As of 2012, there have been attempts to use flash memory as a computer's main memory in place of DRAM.[197] Floating-gate transistors in a flash storage device hold charge, which represents data. This charge gradually leaks over time, leading to an accumulation of logical errors, also known as "bit rot" or "bit fading".[198] It is unclear how long data on flash memory will persist under archival conditions (i.e., benign temperature and humidity with infrequent access, with or without prophylactic rewrite).
Datasheets for Atmel's flash-based "ATmega" microcontrollers typically promise retention times of 20 years at 85 °C (185 °F) and 100 years at 25 °C (77 °F).[199] The retention span varies among types and models of flash storage. When supplied with power and idle, the charge of the transistors holding the data is routinely refreshed by the firmware of the flash storage.[198] The ability to retain data varies among flash storage devices due to differences in firmware, data redundancy, and error correction algorithms.[200] An article from CMU in 2015 states, "Today's flash devices, which do not require flash refresh, have a typical retention age of 1 year at room temperature." Retention time decreases exponentially with increasing temperature; the phenomenon can be modeled by the Arrhenius equation.[201][202] Some FPGAs are based on flash configuration cells that are used directly as (programmable) switches to connect internal elements together, using the same kind of floating-gate transistor as the flash data storage cells in data storage devices.[188] One source states that, in 2008, the flash memory industry included about US$9.1 billion in production and sales.
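The Arrhenius scaling mentioned above can be illustrated by computing the acceleration factor between two temperatures. The activation energy used here is an assumed illustrative value, not a measured figure for any particular flash device:

```python
# Hedged sketch of Arrhenius temperature acceleration: how many times
# faster thermally activated charge-loss mechanisms run at a stress
# temperature than at a use temperature. Ea = 1.1 eV is an assumption.
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def acceleration_factor(t_use_c, t_stress_c, ea_ev=1.1):
    """exp(Ea/k * (1/T_use - 1/T_stress)), temperatures converted to kelvin."""
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp((ea_ev / K_B) * (1.0 / t_use - 1.0 / t_stress))

# How much faster retention is lost at 85 degC than at 25 degC
# under this assumed activation energy:
print(f"{acceleration_factor(25, 85):.0f}x")
```

The exponential dependence is the point: a 60 °C rise shortens retention by roughly three orders of magnitude under this assumption, which is consistent with datasheets quoting decades at 25 °C but far less at 85 °C.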
Other sources put the flash memory market at a size of more than US$20 billion in 2006, accounting for more than eight percent of the overall semiconductor market and more than 34 percent of the total semiconductor memory market.[203] In 2012, the market was estimated at $26.8 billion.[204] It can take up to 10 weeks to produce a flash memory chip.[205] The following were the largest NAND flash memory manufacturers as of the second quarter of 2023.[206] Notes: Samsung remains the largest NAND flash memory manufacturer as of Q1 2022.[207] Kioxia was spun out of Toshiba and renamed in 2018–2019.[208] SK Hynix acquired Intel's NAND business at the end of 2021.[209] In addition to individual flash memory chips, flash memory is also embedded in microcontroller (MCU) chips and system-on-chip (SoC) devices.[226] Flash memory is embedded in ARM chips,[226] which had sold 150 billion units worldwide as of 2019,[227] and in programmable system-on-chip (PSoC) devices, which had sold 1.1 billion units as of 2012.[228] This adds up to at least 151.1 billion MCU and SoC chips with embedded flash memory, in addition to the 45.4 billion known individual flash chip sales as of 2015, totalling at least 196.5 billion chips containing flash memory. Due to its relatively simple structure and the high demand for higher capacity, NAND flash memory is the most aggressively scaled technology among electronic devices. The heavy competition among the top few manufacturers only adds to the aggressiveness in shrinking the floating-gate MOSFET design rule or process technology node.[111] While the expected shrink timeline is a factor of two every three years per the original version of Moore's law, this has recently been accelerated in the case of NAND flash to a factor of two every two years. As the MOSFET feature size of flash memory cells reaches the 15–16 nm minimum limit, further flash density increases will be driven by TLC (3 bits/cell) combined with vertical stacking of NAND memory planes.
The decrease in endurance and increase in uncorrectable bit error rates that accompany feature-size shrinking can be compensated for by improved error correction mechanisms.[234] Even with these advances, it may be impossible to economically scale flash to smaller and smaller dimensions as the electron-holding capacity of each cell decreases. Many promising new technologies (such as FeRAM, MRAM, PMC, PCM, ReRAM, and others) are under investigation and development as possibly more scalable replacements for flash.[235]
https://en.wikipedia.org/wiki/Flash_memory
A logic gate is a device that performs a Boolean function, a logical operation performed on one or more binary inputs that produces a single binary output. Depending on the context, the term may refer to an ideal logic gate, one that has, for instance, zero rise time and unlimited fan-out, or it may refer to a non-ideal physical device[1] (see ideal and real op-amps for comparison). The primary way of building logic gates uses diodes or transistors acting as electronic switches. Today, most logic gates are made from MOSFETs (metal–oxide–semiconductor field-effect transistors).[2] They can also be constructed using vacuum tubes, electromagnetic relays with relay logic, fluidic logic, pneumatic logic, optics, molecules, acoustics,[3] or even mechanical or thermal[4] elements. Logic gates can be cascaded in the same way that Boolean functions can be composed, allowing the construction of a physical model of all of Boolean logic and, therefore, all of the algorithms and mathematics that can be described with Boolean logic. Logic circuits include such devices as multiplexers, registers, arithmetic logic units (ALUs), and computer memory, all the way up through complete microprocessors,[5] which may contain more than 100 million logic gates. Compound logic gates AND-OR-Invert (AOI) and OR-AND-Invert (OAI) are often employed in circuit design because their construction using MOSFETs is simpler and more efficient than the sum of the individual gates.[6] The binary number system was refined by Gottfried Wilhelm Leibniz (published in 1705), influenced by the ancient I Ching's binary system.[7][8] Leibniz established that using the binary system combined the principles of arithmetic and logic.
The analytical engine devised by Charles Babbage in 1837 used mechanical logic gates based on gears.[9] In an 1886 letter, Charles Sanders Peirce described how logical operations could be carried out by electrical switching circuits.[10] Early electromechanical computers were constructed from switches and relay logic rather than the later innovations of vacuum tubes (thermionic valves) or transistors (from which later electronic computers were constructed). Ludwig Wittgenstein introduced a version of the 16-row truth table as proposition 5.101 of Tractatus Logico-Philosophicus (1921). Walther Bothe, inventor of the coincidence circuit,[11] shared part of the 1954 Nobel Prize in Physics for the first modern electronic AND gate in 1924. Konrad Zuse designed and built electromechanical logic gates for his computer Z1 (from 1935 to 1938). From 1934 to 1936, NEC engineer Akira Nakashima, Claude Shannon, and Victor Shestakov introduced switching circuit theory in a series of papers showing that two-valued Boolean algebra, which they discovered independently, can describe the operation of switching circuits.[12][13][14][15] Using this property of electrical switches to implement logic is the fundamental concept that underlies all electronic digital computers. Switching circuit theory became the foundation of digital circuit design, as it became widely known in the electrical engineering community during and after World War II, with theoretical rigor superseding the ad hoc methods that had prevailed previously.[15] In 1948, Bardeen and Brattain patented an insulated-gate transistor (IGFET) with an inversion layer.
Their concept forms the basis of CMOS technology today.[16] In 1957, Frosch and Derick were able to manufacture PMOS and NMOS planar gates.[17] Later, a team at Bell Labs demonstrated a working MOS device with PMOS and NMOS gates.[18] Both types were later combined and adapted into complementary MOS (CMOS) logic by Chih-Tang Sah and Frank Wanlass at Fairchild Semiconductor in 1963.[19] There are two sets of symbols for elementary logic gates in common use, both defined in ANSI/IEEE Std 91-1984 and its supplement ANSI/IEEE Std 91a-1991. The "distinctive shape" set, based on traditional schematics, is used for simple drawings and derives from United States Military Standard MIL-STD-806 of the 1950s and 1960s.[20] It is sometimes unofficially described as "military", reflecting its origin. The "rectangular shape" set, based on ANSI Y32.14 and other early industry standards as later refined by IEEE and IEC, has rectangular outlines for all types of gate and allows representation of a much wider range of devices than is possible with the traditional symbols.[21] The IEC standard, IEC 60617-12, has been adopted by other standards, such as EN 60617-12:1999 in Europe, BS EN 60617-12:1999 in the United Kingdom, and DIN EN 60617-12:1998 in Germany. The mutual goal of IEEE Std 91-1984 and IEC 617-12 was to provide a uniform method of describing the complex logic functions of digital circuits with schematic symbols. These functions were more complex than simple AND and OR gates; they could be medium-scale circuits such as a 4-bit counter or large-scale circuits such as a microprocessor. IEC 617-12 and its renumbered successor IEC 60617-12 do not explicitly show the "distinctive shape" symbols, but do not prohibit them.[21] These are, however, shown in ANSI/IEEE Std 91 (and 91a) with this note: "The distinctive-shape symbol is, according to IEC Publication 617, Part 12, not preferred, but is not considered to be in contradiction to that standard."
IEC 60617-12 correspondingly contains the note (Section 2.1): "Although non-preferred, the use of other symbols recognized by official national standards, that is distinctive shapes in place of symbols [list of basic gates], shall not be considered to be in contradiction with this standard. Usage of these other symbols in combination to form complex symbols (for example, use as embedded symbols) is discouraged." This compromise was reached between the respective IEEE and IEC working groups to permit the IEEE and IEC standards to be in mutual compliance with one another. In the 1980s, schematics were the predominant method of designing both circuit boards and custom ICs known as gate arrays. Today, custom ICs and field-programmable gate arrays are typically designed with hardware description languages (HDLs) such as Verilog or VHDL. By use of De Morgan's laws, an AND function is identical to an OR function with negated inputs and outputs. Likewise, an OR function is identical to an AND function with negated inputs and outputs. A NAND gate is equivalent to an OR gate with negated inputs, and a NOR gate is equivalent to an AND gate with negated inputs. This leads to an alternative set of symbols for basic gates that use the opposite core symbol (AND or OR) but with the inputs and outputs negated. Use of these alternative symbols can make logic circuit diagrams much clearer and help to show accidental connection of an active-high output to an active-low input, or vice versa. Any connection that has logic negations at both ends can be replaced by a negationless connection and a suitable change of gate, or vice versa. Any connection that has a negation at one end and no negation at the other can be made easier to interpret by instead using the De Morgan equivalent symbol at either of the two ends.
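The De Morgan identities stated above can be checked exhaustively with a four-row truth table:

```python
# Truth-table verification of the De Morgan identities: a NAND gate is an
# OR gate with negated inputs, and a NOR gate is an AND gate with negated
# inputs.
from itertools import product

for a, b in product([0, 1], repeat=2):
    nand = int(not (a and b))
    nor = int(not (a or b))
    assert nand == int((not a) or (not b))   # NAND = OR of negated inputs
    assert nor == int((not a) and (not b))   # NOR = AND of negated inputs
print("De Morgan equivalences hold for all input combinations")
```

Because each gate has only four input combinations, exhaustive checking is a complete proof of the equivalence, not just a spot check.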
When negation or polarity indicators on both ends of a connection match, there is no logic negation in that path (effectively, bubbles "cancel"), making it easier to follow logic states from one symbol to the next. This is commonly seen in real logic diagrams; thus the reader must not get into the habit of associating the shapes exclusively with OR or AND functions, but must also take into account the bubbles at both inputs and outputs in order to determine the "true" logic function indicated. A De Morgan symbol can show more clearly a gate's primary logical purpose and the polarity of its nodes that are considered in the "signaled" (active, on) state. Consider the simplified case where a two-input NAND gate is used to drive a motor when either of its inputs is brought low by a switch. The "signaled" state (motor on) occurs when either one OR the other switch is on. Unlike a regular NAND symbol, which suggests AND logic, the De Morgan version, a two-negative-input OR gate, correctly shows that OR is of interest. The regular NAND symbol has a bubble at the output and none at the inputs (the opposite of the states that will turn the motor on), but the De Morgan symbol shows both inputs and the output in the polarity that will drive the motor. De Morgan's theorem is most commonly used to implement logic gates as combinations of only NAND gates, or only NOR gates, for economic reasons. Output comparison of various logic gates: Charles Sanders Peirce (during 1880–1881) showed that NOR gates alone (or alternatively NAND gates alone) can be used to reproduce the functions of all the other logic gates, but his work on this remained unpublished until 1933.[24] The first published proof was by Henry M. Sheffer in 1913, so the NAND logical operation is sometimes called the Sheffer stroke; the logical NOR is sometimes called Peirce's arrow.[25] Consequently, these gates are sometimes called universal logic gates.[26] Logic gates can also be used to hold a state, allowing data storage.
A storage element can be constructed by connecting several gates in a "latch" circuit. Latching circuitry is used in static random-access memory. More complicated designs that use clock signals and that change only on a rising or falling edge of the clock are called edge-triggered "flip-flops". Formally, a flip-flop is called a bistable circuit, because it has two stable states which it can maintain indefinitely. The combination of multiple flip-flops in parallel, to store a multiple-bit value, is known as a register. When using any of these gate setups the overall system has memory; it is then called a sequential logic system, since its output can be influenced by its previous state(s), i.e., by the sequence of input states. In contrast, the output from combinational logic is purely a combination of its present inputs, unaffected by the previous input and output states. These logic circuits are used in computer memory. They vary in performance based on factors of speed, complexity, and reliability of storage, and many different types of designs are used depending on the application. A functionally complete logic system may be composed of relays, valves (vacuum tubes), or transistors. Electronic logic gates differ significantly from their relay-and-switch equivalents. They are much faster, consume much less power, and are much smaller (all by a factor of a million or more in most cases). Also, there is a fundamental structural difference. The switch circuit creates a continuous metallic path for current to flow (in either direction) between its input and its output. The semiconductor logic gate, on the other hand, acts as a high-gain voltage amplifier, which sinks a tiny current at its input and produces a low-impedance voltage at its output. It is not possible for current to flow between the output and the input of a semiconductor logic gate.
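The "latch" idea above can be sketched at gate level with a cross-coupled NOR SR latch, iterated until its two outputs settle. This is a logic-only model; real latches also have timing and metastability constraints not captured here:

```python
# Gate-level sketch of a set-reset latch built from two cross-coupled NOR
# gates: Q feeds one NOR, /Q feeds the other, and the pair settles into one
# of two stable states that persists when both inputs return to 0.

def nor(a, b):
    return int(not (a or b))

def sr_latch(s, r, q=0, nq=1):
    """Iterate the cross-coupled NOR pair until it reaches a stable state."""
    for _ in range(4):                 # a few passes are enough to settle
        q, nq = nor(r, nq), nor(s, q)
    return q, nq

q, nq = sr_latch(s=1, r=0)             # set
print(q, nq)                           # 1 0
q, nq = sr_latch(s=0, r=0, q=q, nq=nq) # hold: the stored bit is remembered
print(q, nq)                           # 1 0
q, nq = sr_latch(s=0, r=1, q=q, nq=nq) # reset
print(q, nq)                           # 0 1
```

The hold step is what makes this memory rather than combinational logic: with both inputs at 0, the output depends on the previous state, which is exactly the sequential-logic property described above.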
For small-scale logic, designers now use prefabricated logic gates from families of devices such as the TTL 7400 series by Texas Instruments, the CMOS 4000 series by RCA, and their more recent descendants. Increasingly, these fixed-function logic gates are being replaced by programmable logic devices, which allow designers to pack many mixed logic gates into a single integrated circuit. The field-programmable nature of programmable logic devices such as FPGAs has reduced the 'hard' property of hardware; it is now possible to change the logic design of a hardware system by reprogramming some of its components, thus allowing the features or function of a hardware implementation of a logic system to be changed. An important advantage of standardized integrated circuit logic families, such as the 7400 and 4000 families, is that they can be cascaded. This means that the output of one gate can be wired to the inputs of one or several other gates, and so on. Systems with varying degrees of complexity can be built without great concern of the designer for the internal workings of the gates, provided the limitations of each integrated circuit are considered. The output of one gate can only drive a finite number of inputs to other gates, a number called the 'fan-out limit'. Also, there is always a delay, called the 'propagation delay', from a change in the input of a gate to the corresponding change in its output. When gates are cascaded, the total propagation delay is approximately the sum of the individual delays, an effect which can become a problem in high-speed synchronous circuits. Additional delay can be caused when many inputs are connected to an output, due to the distributed capacitance of all the inputs and wiring and the finite amount of current that each output can provide.
There are several logic families with different characteristics (power consumption, speed, cost, size), such as RDL (resistor–diode logic), RTL (resistor–transistor logic), DTL (diode–transistor logic), TTL (transistor–transistor logic), and CMOS. There are also sub-variants, e.g., standard CMOS logic versus advanced types that still use CMOS technology but with some optimizations to avoid loss of speed due to slower PMOS transistors. The simplest family of logic gates uses bipolar transistors and is called resistor–transistor logic (RTL). Unlike simple diode logic gates (which do not have a gain element), RTL gates can be cascaded indefinitely to produce more complex logic functions. RTL gates were used in early integrated circuits. For higher speed and better density, the resistors used in RTL were replaced by diodes, resulting in diode–transistor logic (DTL). Transistor–transistor logic (TTL) then supplanted DTL. As integrated circuits became more complex, bipolar transistors were replaced with smaller field-effect transistors (MOSFETs); see PMOS and NMOS. To reduce power consumption still further, most contemporary chip implementations of digital systems now use CMOS logic. CMOS uses complementary (both n-channel and p-channel) MOSFET devices to achieve high speed with low power dissipation. Other types of logic gates include, but are not limited to:[27] A three-state logic gate is a type of logic gate that can have three different outputs: high (H), low (L), and high-impedance (Z). The high-impedance state plays no role in the logic, which is strictly binary. These devices are used on buses of the CPU to allow multiple chips to send data. A group of three-state outputs driving a line with a suitable control circuit is basically equivalent to a multiplexer, which may be physically distributed over separate devices or plug-in cards. In electronics, a high output means the output is sourcing current from the positive power terminal (positive voltage).
A low output means the output is sinking current to the negative power terminal (zero voltage). High impedance means that the output is effectively disconnected from the circuit. Non-electronic implementations are varied, though few of them are used in practical applications. Many early electromechanical digital computers, such as the Harvard Mark I, were built from relay logic gates, using electromechanical relays. Logic gates can be made using pneumatic devices, such as the Sorteberg relay, or mechanical logic gates, including on a molecular scale.[29] Various types of fundamental logic gates have been constructed using molecules (molecular logic gates), which are based on chemical inputs and spectroscopic outputs.[30] Logic gates have been made out of DNA (see DNA nanotechnology)[31] and used to create a computer called MAYA (see MAYA-II). Logic gates can be made from quantum mechanical effects; see quantum logic gate. Photonic logic gates use nonlinear optical effects. In principle, any method that leads to a gate that is functionally complete (for example, either a NOR or a NAND gate) can be used to make any kind of digital logic circuit. Note that the use of three-state logic for bus systems is not needed, and can be replaced by digital multiplexers, which can be built using only simple logic gates (such as NAND gates, NOR gates, or AND and OR gates).
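The closing remark above notes that a multiplexer built from simple gates can replace three-state bus logic. A minimal 2-to-1 multiplexer at gate level is out = (a AND NOT sel) OR (b AND sel):

```python
# Gate-level sketch of a 2-to-1 multiplexer from AND, OR, and NOT:
# when sel = 0 the output follows input a; when sel = 1 it follows input b.

def mux2(sel, a, b):
    not_sel = int(not sel)
    return (a & not_sel) | (b & sel)

# Exhaustive check of the selection behavior over all eight input cases.
for sel in (0, 1):
    for a in (0, 1):
        for b in (0, 1):
            assert mux2(sel, a, b) == (b if sel else a)
print("2-to-1 MUX selects input a when sel=0, input b when sel=1")
```

Widening the selector to n bits and the data inputs to 2ⁿ lines gives the general multiplexer that can stand in for a group of three-state outputs sharing a bus.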
https://en.wikipedia.org/wiki/Logic_gate#Symbols
The NAND Boolean function has the property of functional completeness. This means that any Boolean expression can be re-expressed by an equivalent expression utilizing only NAND operations. For example, the function NOT(x) may be equivalently expressed as NAND(x, x). In the field of digital electronic circuits, this implies that it is possible to implement any Boolean function using just NAND gates. The mathematical proof for this was published by Henry M. Sheffer in 1913 in the Transactions of the American Mathematical Society (Sheffer 1913). A similar case applies to the NOR function, and this is referred to as NOR logic. A NAND gate is an inverted AND gate. It has the following truth table: Q = A NAND B. In CMOS logic, if both the A and B inputs are high, then both the NMOS transistors (bottom half of the diagram) will conduct, neither of the PMOS transistors (top half) will conduct, and a conductive path will be established between the output and Vss (ground), bringing the output low. If both the A and B inputs are low, then neither of the NMOS transistors will conduct, while both of the PMOS transistors will conduct, establishing a conductive path between the output and Vdd (voltage source), bringing the output high. If either the A or B input is low, one of the NMOS transistors will not conduct, one of the PMOS transistors will, and a conductive path will be established between the output and Vdd (voltage source), bringing the output high. As the only configuration of the two inputs that results in a low output is when both are high, this circuit implements a NAND (NOT AND) logic gate. A NAND gate is a universal gate, meaning that any other gate can be represented as a combination of NAND gates. A NOT gate is made by joining the inputs of a NAND gate together. Since a NAND gate is equivalent to an AND gate followed by a NOT gate, joining the inputs of a NAND gate leaves only the NOT gate. An AND gate is made by inverting the output of a NAND gate as shown below.
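The constructions described above can be verified exhaustively: NOT, AND, and OR built from nothing but a two-input NAND, checked against their truth tables:

```python
# NOT, AND, and OR reproduced from a single primitive, the two-input NAND,
# following the constructions in the text.

def nand(a, b):
    return int(not (a and b))

def not_(a):           # join the two NAND inputs together: NAND(x, x) = NOT x
    return nand(a, a)

def and_(a, b):        # invert the output of a NAND
    return not_(nand(a, b))

def or_(a, b):         # invert both inputs of a NAND (De Morgan)
    return nand(not_(a), not_(b))

for a in (0, 1):
    for b in (0, 1):
        assert not_(a) == int(not a)
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
print("NOT, AND, OR all reproduced from NAND alone")
```

Since any Boolean function can be written in terms of NOT, AND, and OR, this small set of constructions is the whole of the functional-completeness claim.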
If the truth table for a NAND gate is examined, or by applying De Morgan's laws, it can be seen that if any of the inputs is 0, then the output will be 1. To be an OR gate, however, the output must be 1 if any input is 1. Therefore, if the inputs are inverted, any high input will trigger a high output. A NOR gate is an OR gate with an inverted output: the output is high when neither input A nor input B is high. An XOR gate is made by connecting four NAND gates as shown below. This construction entails a propagation delay three times that of a single NAND gate. Alternatively, an XOR gate is made by considering the disjunctive normal form A·B̄ + Ā·B, noting from De Morgan's law that a NAND gate is an inverted-input OR gate. This construction uses five gates instead of four. An XNOR gate is made by considering the disjunctive normal form A·B + Ā·B̄, noting from De Morgan's law that a NAND gate is an inverted-input OR gate. This construction entails a propagation delay three times that of a single NAND gate and uses five gates. Alternatively, the 4-gate version of the XOR gate can be used with an inverter; this construction has a propagation delay four times (instead of three times) that of a single NAND gate. A multiplexer, or MUX gate, is a three-input gate that uses one of the inputs, called the selector bit, to select one of the other two inputs, called data bits, and outputs only the selected data bit.[1] A demultiplexer performs the opposite function of a multiplexer: it takes a single input and channels it to one of two possible outputs according to a selector bit that specifies which output to choose.[1]
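The four-gate XOR construction, and a common NAND-based 2:1 multiplexer (the MUX wiring is a standard construction we supply for illustration, not one spelled out in the text), can be sketched as:

```python
def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def xor(a: int, b: int) -> int:
    # Classic 4-NAND XOR: t = NAND(a, b),
    # XOR(a, b) = NAND(NAND(a, t), NAND(b, t))
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

def mux(sel: int, d0: int, d1: int) -> int:
    # 2:1 multiplexer from four NAND gates: output d1 if sel else d0
    nsel = nand(sel, sel)                      # NOT sel
    return nand(nand(d0, nsel), nand(d1, sel))

for a in (0, 1):
    for b in (0, 1):
        assert xor(a, b) == (a ^ b)
for sel in (0, 1):
    for d0 in (0, 1):
        for d1 in (0, 1):
            assert mux(sel, d0, d1) == (d1 if sel else d0)
```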
https://en.wikipedia.org/wiki/NAND_logic
In logic and mathematics, the logical biconditional, also known as material biconditional, equivalence, bidirectional implication, biimplication, or bientailment, is the logical connective used to conjoin two statements P and Q to form the statement "P if and only if Q" (often abbreviated as "P iff Q"[1]), where P is known as the antecedent, and Q the consequent.[2][3] Nowadays, notations to represent equivalence include ↔, ⇔, and ≡. P ↔ Q is logically equivalent to both (P → Q) ∧ (Q → P) and (P ∧ Q) ∨ (¬P ∧ ¬Q), and to the XNOR (exclusive NOR) Boolean operator, which means "both or neither". Semantically, the only case where a logical biconditional differs from a material conditional is the case where the hypothesis (antecedent) is false but the conclusion (consequent) is true; in this case, the result is true for the conditional, but false for the biconditional.[2] In the conceptual interpretation, P = Q means "All P's are Q's and all Q's are P's". In other words, the sets P and Q coincide: they are identical. However, this does not mean that P and Q need to have the same meaning (e.g., P could be "equiangular trilateral" and Q could be "equilateral triangle"). When phrased as a sentence, the antecedent is the subject and the consequent is the predicate of a universal affirmative proposition (e.g., in the phrase "all men are mortal", "men" is the subject and "mortal" is the predicate). In the propositional interpretation, P ↔ Q means that P implies Q and Q implies P; in other words, the propositions are logically equivalent, in the sense that both are either jointly true or jointly false.
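The semantic claim above, that the biconditional and the material conditional differ in exactly one row of the truth table, can be checked exhaustively (a quick sketch; the helper names are ours):

```python
def conditional(p: bool, q: bool) -> bool:
    """Material conditional P → Q."""
    return (not p) or q

def biconditional(p: bool, q: bool) -> bool:
    """Logical biconditional P ↔ Q."""
    return p == q

# The two connectives differ precisely when the antecedent is false
# and the consequent is true (conditional true, biconditional false).
for p in (True, False):
    for q in (True, False):
        differs = conditional(p, q) != biconditional(p, q)
        assert differs == ((not p) and q)
```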
Again, this does not mean that they need to have the same meaning, as P could be "the triangle ABC has two equal sides" and Q could be "the triangle ABC has two equal angles". In general, the antecedent is the premise, or the cause, and the consequent is the consequence. When an implication is translated by a hypothetical (or conditional) judgment, the antecedent is called the hypothesis (or the condition) and the consequent is called the thesis. A common way of demonstrating a biconditional of the form P ↔ Q is to demonstrate P → Q and Q → P separately (due to its equivalence to the conjunction of the two converse conditionals[2]). Yet another way of demonstrating the same biconditional is by demonstrating P → Q and ¬P → ¬Q. When both members of the biconditional are propositions, it can be separated into two conditionals, of which one is called a theorem and the other its reciprocal. Thus whenever a theorem and its reciprocal are true, we have a biconditional. A simple theorem gives rise to an implication whose antecedent is the hypothesis and whose consequent is the thesis of the theorem. It is often said that the hypothesis is the sufficient condition of the thesis, and that the thesis is the necessary condition of the hypothesis. That is, it is sufficient that the hypothesis be true for the thesis to be true, while it is necessary that the thesis be true if the hypothesis is true. When a theorem and its reciprocal are true, its hypothesis is said to be the necessary and sufficient condition of the thesis: the hypothesis is both the cause and the consequence of the thesis at the same time. Various other notations have been used historically to represent equivalence.
Some authors also use EQ or EQV occasionally. Logical equality (also known as the biconditional) is an operation on two logical values, typically the values of two propositions, that produces a value of true if and only if both operands are false or both operands are true.[2] A truth table for A ↔ B is given in the source article. When more than two statements are involved, combining them with ↔ may be ambiguous. For example, the statement x1 ↔ x2 ↔ ⋯ ↔ xn may be interpreted as the chained form ((x1 ↔ x2) ↔ x3) ↔ ⋯, or may be interpreted as saying that all xi are jointly true or jointly false. As it turns out, these two statements are the same only when zero or two arguments are involved: the corresponding truth tables show the same bit pattern only in the line with no arguments and in the lines with two arguments. For three arguments, the chained biconditional coincides with the chained exclusive or: A ↔ B ↔ C ⇔ A ⊕ B ⊕ C. [Venn diagrams and truth-table matrices illustrating these operations are omitted in this extract.] Properties of the biconditional: Commutativity: yes. Associativity: yes. Distributivity: the biconditional does not distribute over any binary function (not even itself), but logical disjunction distributes over the biconditional. Idempotency: no. Monotonicity: no. Truth-preserving: yes (when all inputs are true, the output is true). Falsehood-preserving: no (when all inputs are false, the output is not false). Walsh spectrum: (2, 0, 0, 2). Nonlinearity: 0 (the function is linear). Like all connectives in first-order logic, the biconditional has rules of inference that govern its use in formal proofs.
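The ambiguity of chaining ↔, and the claim that the two readings agree only for zero or two arguments, can be verified by brute force (a sketch; the function names are ours):

```python
from itertools import product
from functools import reduce

def iff(a: bool, b: bool) -> bool:
    """Binary biconditional: true iff the operands agree."""
    return a == b

def chained(bits) -> bool:
    # Left-associated chain: ((x1 ↔ x2) ↔ x3) ↔ ...
    return reduce(iff, bits)

def jointly(bits) -> bool:
    # All arguments jointly true or jointly false
    return all(bits) or not any(bits)

# The two readings coincide exactly when two arguments are involved
# (and trivially for zero arguments, not representable with reduce here).
for n in (1, 2, 3, 4):
    agree = all(chained(bits) == jointly(bits)
                for bits in product((True, False), repeat=n))
    assert agree == (n == 2)
```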
Biconditional introduction allows one to infer that if B follows from A and A follows from B, then A if and only if B. For example, from the statements "if I'm breathing, then I'm alive" and "if I'm alive, then I'm breathing", it can be inferred that "I'm breathing if and only if I'm alive", or equivalently, "I'm alive if and only if I'm breathing." Biconditional elimination allows one to infer a conditional from a biconditional: if A ↔ B is true, then one may infer either A → B or B → A. For example, if it is true that I'm breathing if and only if I'm alive, then it's true that if I'm breathing, then I'm alive; likewise, it's true that if I'm alive, then I'm breathing. One unambiguous way of stating a biconditional in plain English is to adopt the form "b if a and a if b" (if the standard form "a if and only if b" is not used). Slightly more formally, one could also say that "b implies a and a implies b", or "a is necessary and sufficient for b". The plain English "if" may sometimes be used as a biconditional (especially in the context of a mathematical definition[15]); in that case, one must take the surrounding context into consideration when interpreting these words. For example, the statement "I'll buy you a new wallet if you need one" may be interpreted as a biconditional, since the speaker doesn't intend a valid outcome to be buying the wallet whether or not the wallet is needed (as in a conditional). However, "it is cloudy if it is raining" is generally not meant as a biconditional, since it can still be cloudy even if it is not raining. This article incorporates material from Biconditional on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
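The two inference rules described above are truth-functionally sound, which can be confirmed over all four truth assignments (a sketch; the helper names are ours):

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    """Material conditional P → Q."""
    return (not p) or q

for a, b in product((True, False), repeat=2):
    biconditional = (a == b)
    # Introduction: (A → B) and (B → A) together are equivalent to A ↔ B
    assert biconditional == (implies(a, b) and implies(b, a))
    # Elimination: from A ↔ B, each directional conditional follows
    if biconditional:
        assert implies(a, b) and implies(b, a)
```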
https://en.wikipedia.org/wiki/Logical_biconditional
↔, ⇔, ≡, ⟺ are logical symbols representing iff. In logic and related fields such as mathematics and philosophy, "if and only if" (often shortened as "iff") is paraphrased by the biconditional, a logical connective[1] between statements. The biconditional is true in two cases: where either both statements are true or both are false. The connective is biconditional (a statement of material equivalence),[2] and can be likened to the standard material conditional ("only if", equal to "if ... then") combined with its reverse ("if"); hence the name. The result is that the truth of either one of the connected statements requires the truth of the other (i.e. either both statements are true, or both are false), though it is controversial whether the connective thus defined is properly rendered by the English "if and only if", with its pre-existing meaning. For example, P if and only if Q means that P is true whenever Q is true, and the only case in which P is true is if Q is also true, whereas in the case of P if Q, there could be other scenarios where P is true and Q is false. In writing, phrases commonly used as alternatives to P "if and only if" Q include: Q is necessary and sufficient for P; for P it is necessary and sufficient that Q; P is equivalent (or materially equivalent) to Q (compare with material implication); P precisely if Q; P precisely (or exactly) when Q; P exactly in case Q; and P just in case Q.[3] Some authors regard "iff" as unsuitable in formal writing;[4] others consider it a "borderline case" and tolerate its use.[5] In logical formulae, logical symbols such as ↔ and ⇔[6] are used instead of these phrases; see § Notation below.
The truth table of P ↔ Q is given in the source article.[7][8] It is equivalent to that produced by the XNOR gate, and opposite to that produced by the XOR gate.[9] The corresponding logical symbols are "↔", "⇔",[6] and "≡",[10] and sometimes "iff". These are usually treated as equivalent. However, some texts of mathematical logic (particularly those on first-order logic, rather than propositional logic) make a distinction between them, in which the first, ↔, is used as a symbol in logic formulas, while ⇔ or ≡ is used in reasoning about those logic formulas (e.g., in metalogic). In Łukasiewicz's Polish notation, it is the prefix symbol E.[11] Another term for the logical connective, i.e., the symbol in logic formulas, is exclusive nor. In TeX, "if and only if" is shown as a long double arrow, ⟺, via the command \iff or \Longleftrightarrow.[12] In most logical systems, one proves a statement of the form "P iff Q" by proving either "if P, then Q" and "if Q, then P", or "if P, then Q" and "if not-P, then not-Q". Proving these pairs of statements sometimes leads to a more natural proof, since there are no obvious conditions in which one would infer a biconditional directly. An alternative is to prove the disjunction "(P and Q) or (not-P and not-Q)", which itself can be inferred directly from either of its disjuncts; that is, because "iff" is truth-functional, "P iff Q" follows if P and Q have been shown to be both true, or both false. Usage of the abbreviation "iff" first appeared in print in John L. Kelley's 1955 book General Topology.[13] Its invention is often credited to Paul Halmos, who wrote "I invented 'iff,' for 'if and only if'—but I could never believe I was really its first inventor."[14] It is somewhat unclear how "iff" was meant to be pronounced.
In current practice, the single 'word' "iff" is almost always read as the four words "if and only if". However, in the preface of General Topology, Kelley suggests that it should be read differently: "In some cases where mathematical content requires 'if and only if' and euphony demands something less I use Halmos' 'iff'". The authors of one discrete mathematics textbook suggest:[15] "Should you need to pronounce iff, really hang on to the 'ff' so that people hear the difference from 'if'", implying that "iff" could be pronounced as [ɪfː]. Conventionally, definitions are "if and only if" statements; some texts, such as Kelley's General Topology, follow this convention and use "if and only if" or iff in definitions of new terms.[16] However, this usage of "if and only if" is relatively uncommon and overlooks the linguistic fact that the "if" of a definition is interpreted as meaning "if and only if". The majority of textbooks, research papers and articles (including English Wikipedia articles) follow the linguistic convention of interpreting "if" as "if and only if" whenever a mathematical definition is involved (as in "a topological space is compact if every open cover has a finite subcover").[17] Moreover, in the case of a recursive definition, the only if half of the definition is interpreted as a sentence in the metalanguage stating that the sentences in the definition of a predicate are the only sentences determining the extension of the predicate. Euler diagrams show logical relationships among events, properties, and so forth. "P only if Q", "if P then Q", and "P → Q" all mean that P is a subset, either proper or improper, of Q. "P if Q", "if Q then P", and "Q → P" all mean that Q is a proper or improper subset of P. "P if and only if Q" and "Q if and only if P" both mean that the sets P and Q are identical to each other. Iff is used outside the field of logic as well.
Wherever logic is applied, especially in mathematical discussions, it has the same meaning as above: it is an abbreviation for if and only if, indicating that one statement is both necessary and sufficient for the other. This is an example of mathematical jargon (although, as noted above, if is more often used than iff in statements of definition). "The elements of X are all and only the elements of Y" means: "For any z in the domain of discourse, z is in X if and only if z is in Y." In their Artificial Intelligence: A Modern Approach, Russell and Norvig note (page 282),[18] in effect, that it is often more natural to express if and only if as if together with a "database (or logic programming) semantics". They give the example of the English sentence "Richard has two brothers, Geoffrey and John". In a database or logic program, this could be represented simply by two sentences. The database semantics interprets the database (or program) as containing all and only the knowledge relevant for problem solving in a given domain. It interprets only if as expressing in the metalanguage that the sentences in the database represent the only knowledge that should be considered when drawing conclusions from the database. In first-order logic (FOL) with the standard semantics, the same English sentence would need to be represented using if and only if, with only if interpreted in the object language. Compared with the standard semantics for FOL, the database semantics has a more efficient implementation: it uses conditional sentences to reason forwards from conditions to conclusions or backwards from conclusions to conditions. The database semantics is analogous to the legal principle expressio unius est exclusio alterius (the express mention of one thing excludes all others). Moreover, it underpins the application of logic programming to the representation of legal texts and legal reasoning.[19]
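A toy sketch of the database semantics described above, using Russell and Norvig's brothers example (the representation is ours; the key point is that "only if" is supplied by the closed-world reading rather than stated in the object language):

```python
# The facts below are taken as *all* the relevant knowledge.
brothers = {("richard", "geoffrey"), ("richard", "john")}

def brother(x: str, y: str) -> bool:
    # Closed-world assumption: anything not listed is false.
    return (x, y) in brothers

assert brother("richard", "john")
assert not brother("richard", "edward")   # not listed, hence false
# Under FOL's standard semantics, the second conclusion would instead
# require an explicit axiom that Geoffrey and John are the only brothers.
```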
https://en.wikipedia.org/wiki/If_and_only_if
The NIMPLY gate is a digital logic gate that implements material nonimplication. A right-facing arrow with a line through it (↛) can be used to denote NIMPLY in algebraic expressions. Logically, it is equivalent to material nonimplication and to the logical expression A ∧ ¬B. The NIMPLY gate is often used in synthetic biology and genetic circuits.[1]
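The expression A ∧ ¬B given above amounts to a one-line truth function (a minimal sketch; the function name is ours):

```python
def nimply(a: bool, b: bool) -> bool:
    """Material nonimplication A ↛ B: true only when A holds and B fails."""
    return a and not b

# Full truth table
assert nimply(True, False) is True
assert nimply(True, True) is False
assert nimply(False, True) is False
assert nimply(False, False) is False
```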
https://en.wikipedia.org/wiki/NIMPLY_gate
A logic gate is a device that performs a Boolean function, a logical operation performed on one or more binary inputs that produces a single binary output. Depending on the context, the term may refer to an ideal logic gate, one that has, for instance, zero rise time and unlimited fan-out, or it may refer to a non-ideal physical device[1] (see ideal and real op-amps for comparison). The primary way of building logic gates uses diodes or transistors acting as electronic switches. Today, most logic gates are made from MOSFETs (metal–oxide–semiconductor field-effect transistors).[2] They can also be constructed using vacuum tubes, electromagnetic relays with relay logic, fluidic logic, pneumatic logic, optics, molecules, acoustics,[3] or even mechanical or thermal[4] elements. Logic gates can be cascaded in the same way that Boolean functions can be composed, allowing the construction of a physical model of all of Boolean logic and, therefore, of all the algorithms and mathematics that can be described with Boolean logic. Logic circuits include such devices as multiplexers, registers, arithmetic logic units (ALUs), and computer memory, all the way up through complete microprocessors,[5] which may contain more than 100 million logic gates. Compound logic gates AND-OR-Invert (AOI) and OR-AND-Invert (OAI) are often employed in circuit design because their construction using MOSFETs is simpler and more efficient than the sum of the individual gates.[6] The binary number system was refined by Gottfried Wilhelm Leibniz (published in 1705), influenced by the ancient I Ching's binary system.[7][8] Leibniz established that using the binary system combined the principles of arithmetic and logic.
The analytical engine devised by Charles Babbage in 1837 used mechanical logic gates based on gears.[9] In an 1886 letter, Charles Sanders Peirce described how logical operations could be carried out by electrical switching circuits.[10] Early electromechanical computers were constructed from switches and relay logic rather than the later innovations of vacuum tubes (thermionic valves) or transistors (from which later electronic computers were constructed). Ludwig Wittgenstein introduced a version of the 16-row truth table as proposition 5.101 of Tractatus Logico-Philosophicus (1921). Walther Bothe, inventor of the coincidence circuit,[11] received part of the 1954 Nobel Prize in Physics for the first modern electronic AND gate in 1924. Konrad Zuse designed and built electromechanical logic gates for his computer Z1 (from 1935 to 1938). From 1934 to 1936, NEC engineer Akira Nakashima, Claude Shannon and Victor Shestakov introduced switching circuit theory in a series of papers showing that two-valued Boolean algebra, which they discovered independently, can describe the operation of switching circuits.[12][13][14][15] Using this property of electrical switches to implement logic is the fundamental concept that underlies all electronic digital computers. Switching circuit theory became the foundation of digital circuit design as it became widely known in the electrical engineering community during and after World War II, with theoretical rigor superseding the ad hoc methods that had prevailed previously.[15] In 1948, Bardeen and Brattain patented an insulated-gate transistor (IGFET) with an inversion layer.
Their concept forms the basis of CMOS technology today.[16] In 1957, Frosch and Derick were able to manufacture PMOS and NMOS planar gates.[17] Later, a team at Bell Labs demonstrated a working MOS device with PMOS and NMOS gates.[18] Both types were later combined and adapted into complementary MOS (CMOS) logic by Chih-Tang Sah and Frank Wanlass at Fairchild Semiconductor in 1963.[19] There are two sets of symbols for elementary logic gates in common use, both defined in ANSI/IEEE Std 91-1984 and its supplement ANSI/IEEE Std 91a-1991. The "distinctive shape" set, based on traditional schematics, is used for simple drawings and derives from United States Military Standard MIL-STD-806 of the 1950s and 1960s.[20] It is sometimes unofficially described as "military", reflecting its origin. The "rectangular shape" set, based on ANSI Y32.14 and other early industry standards as later refined by IEEE and IEC, has rectangular outlines for all types of gate and allows representation of a much wider range of devices than is possible with the traditional symbols.[21] The IEC standard, IEC 60617-12, has been adopted by other standards, such as EN 60617-12:1999 in Europe, BS EN 60617-12:1999 in the United Kingdom, and DIN EN 60617-12:1998 in Germany. The mutual goal of IEEE Std 91-1984 and IEC 617-12 was to provide a uniform method of describing the complex logic functions of digital circuits with schematic symbols. These functions were more complex than simple AND and OR gates; they could range from medium-scale circuits such as a 4-bit counter to large-scale circuits such as a microprocessor. IEC 617-12 and its renumbered successor IEC 60617-12 do not explicitly show the "distinctive shape" symbols, but do not prohibit them.[21] These are, however, shown in ANSI/IEEE Std 91 (and 91a) with this note: "The distinctive-shape symbol is, according to IEC Publication 617, Part 12, not preferred, but is not considered to be in contradiction to that standard."
IEC 60617-12 correspondingly contains the note (Section 2.1): "Although non-preferred, the use of other symbols recognized by official national standards, that is distinctive shapes in place of symbols [list of basic gates], shall not be considered to be in contradiction with this standard. Usage of these other symbols in combination to form complex symbols (for example, use as embedded symbols) is discouraged." This compromise was reached between the respective IEEE and IEC working groups to permit the IEEE and IEC standards to be in mutual compliance with one another. In the 1980s, schematics were the predominant method of designing both circuit boards and custom ICs known as gate arrays. Today, custom ICs and field-programmable gate arrays are typically designed with hardware description languages (HDLs) such as Verilog or VHDL. By use of De Morgan's laws, an AND function is identical to an OR function with negated inputs and outputs. Likewise, an OR function is identical to an AND function with negated inputs and outputs. A NAND gate is equivalent to an OR gate with negated inputs, and a NOR gate is equivalent to an AND gate with negated inputs. This leads to an alternative set of symbols for basic gates that use the opposite core symbol (AND or OR) but with the inputs and outputs negated. Use of these alternative symbols can make logic circuit diagrams much clearer and help to show accidental connection of an active-high output to an active-low input, or vice versa. Any connection that has logic negations at both ends can be replaced by a negationless connection and a suitable change of gate, or vice versa. Any connection that has a negation at one end and no negation at the other can be made easier to interpret by instead using the De Morgan equivalent symbol at either of the two ends.
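The De Morgan equivalences stated above (NAND as an OR with negated inputs, NOR as an AND with negated inputs) can be verified exhaustively (a short sketch; the function names are ours):

```python
from itertools import product

def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def nor(a: bool, b: bool) -> bool:
    return not (a or b)

for a, b in product((True, False), repeat=2):
    # NAND(a, b) == OR(¬a, ¬b)
    assert nand(a, b) == ((not a) or (not b))
    # NOR(a, b) == AND(¬a, ¬b)
    assert nor(a, b) == ((not a) and (not b))
```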
When negation or polarity indicators on both ends of a connection match, there is no logic negation in that path (effectively, bubbles "cancel"), making it easier to follow logic states from one symbol to the next. This is commonly seen in real logic diagrams; thus the reader must not get into the habit of associating the shapes exclusively as OR or AND shapes, but must also take into account the bubbles at both inputs and outputs in order to determine the "true" logic function indicated. A De Morgan symbol can show more clearly a gate's primary logical purpose and the polarity of its nodes that are considered in the "signaled" (active, on) state. Consider the simplified case where a two-input NAND gate is used to drive a motor when either of its inputs is brought low by a switch. The "signaled" state (motor on) occurs when either one OR the other switch is on. Unlike a regular NAND symbol, which suggests AND logic, the De Morgan version, a two-negative-input OR gate, correctly shows that OR is of interest. The regular NAND symbol has a bubble at the output and none at the inputs (the opposite of the states that will turn the motor on), but the De Morgan symbol shows both inputs and output in the polarity that will drive the motor. De Morgan's theorem is most commonly used to implement logic gates as combinations of only NAND gates, or as combinations of only NOR gates, for economic reasons. Charles Sanders Peirce (during 1880–1881) showed that NOR gates alone (or alternatively NAND gates alone) can be used to reproduce the functions of all the other logic gates, but his work on it remained unpublished until 1933.[24] The first published proof was by Henry M. Sheffer in 1913, so the NAND logical operation is sometimes called the Sheffer stroke; the logical NOR is sometimes called Peirce's arrow.[25] Consequently, these gates are sometimes called universal logic gates.[26] Logic gates can also be used to hold a state, allowing data storage.
A storage element can be constructed by connecting several gates in a "latch" circuit. Latching circuitry is used in static random-access memory. More complicated designs that use clock signals and that change only on a rising or falling edge of the clock are called edge-triggered "flip-flops". Formally, a flip-flop is called a bistable circuit, because it has two stable states which it can maintain indefinitely. The combination of multiple flip-flops in parallel, to store a multiple-bit value, is known as a register. When using any of these gate setups the overall system has memory; it is then called a sequential logic system, since its output can be influenced by its previous state(s), i.e. by the sequence of input states. In contrast, the output from combinational logic is purely a combination of its present inputs, unaffected by the previous input and output states. These logic circuits are used in computer memory. They vary in performance based on factors of speed, complexity, and reliability of storage, and many different types of designs are used depending on the application. A functionally complete logic system may be composed of relays, valves (vacuum tubes), or transistors. Electronic logic gates differ significantly from their relay-and-switch equivalents: they are much faster, consume much less power, and are much smaller (all by a factor of a million or more in most cases). Also, there is a fundamental structural difference. The switch circuit creates a continuous metallic path for current to flow (in either direction) between its input and its output. The semiconductor logic gate, on the other hand, acts as a high-gain voltage amplifier, which sinks a tiny current at its input and produces a low-impedance voltage at its output. It is not possible for current to flow between the output and the input of a semiconductor logic gate.
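The "latch" idea above can be sketched by simulating a cross-coupled NOR SR latch, a standard construction (the wiring details are a common textbook choice, not given in the text):

```python
def nor(a: int, b: int) -> int:
    return 0 if (a or b) else 1

def sr_latch(s: int, r: int, q: int, qbar: int):
    """Settle a cross-coupled NOR latch: Q = NOR(R, Q̄), Q̄ = NOR(S, Q).
    Iterates until the outputs stop changing (valid S/R inputs assumed)."""
    while True:
        nq, nqbar = nor(r, qbar), nor(s, q)
        if (nq, nqbar) == (q, qbar):
            return q, qbar
        q, qbar = nq, nqbar

q, qbar = sr_latch(s=1, r=0, q=0, qbar=1)      # set
assert (q, qbar) == (1, 0)
q, qbar = sr_latch(s=0, r=0, q=q, qbar=qbar)   # hold: state is retained
assert (q, qbar) == (1, 0)
q, qbar = sr_latch(s=0, r=1, q=q, qbar=qbar)   # reset
assert (q, qbar) == (0, 1)
```

The hold step is what makes the circuit a memory element: with both inputs inactive, the output depends on the previous state, which is exactly the sequential-logic behavior described above.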
For small-scale logic, designers now use prefabricated logic gates from families of devices such as the TTL 7400 series by Texas Instruments, the CMOS 4000 series by RCA, and their more recent descendants. Increasingly, these fixed-function logic gates are being replaced by programmable logic devices, which allow designers to pack many mixed logic gates into a single integrated circuit. The field-programmable nature of programmable logic devices such as FPGAs has reduced the 'hard' property of hardware; it is now possible to change the logic design of a hardware system by reprogramming some of its components, thus allowing the features or function of a hardware implementation of a logic system to be changed. An important advantage of standardized integrated-circuit logic families, such as the 7400 and 4000 families, is that they can be cascaded: the output of one gate can be wired to the inputs of one or several other gates, and so on. Systems with varying degrees of complexity can be built without great concern of the designer for the internal workings of the gates, provided the limitations of each integrated circuit are considered. The output of one gate can only drive a finite number of inputs to other gates, a number called the 'fan-out limit'. Also, there is always a delay, called the 'propagation delay', from a change in the input of a gate to the corresponding change in its output. When gates are cascaded, the total propagation delay is approximately the sum of the individual delays, an effect which can become a problem in high-speed synchronous circuits. Additional delay can be caused when many inputs are connected to an output, due to the distributed capacitance of all the inputs and wiring and the finite amount of current that each output can provide.
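The delay-summing rule above is simple enough to illustrate directly (the per-gate delay values are hypothetical, chosen only for the example):

```python
# Propagation delays of three cascaded gates along one signal path, in ns.
gate_delays_ns = [1.2, 1.2, 0.9]   # hypothetical values

# Total path delay is approximately the sum of the individual delays.
total_ns = sum(gate_delays_ns)
assert abs(total_ns - 3.3) < 1e-9

# In a synchronous circuit this bounds the usable clock period:
# the clock cannot be shorter than the slowest (critical) path.
```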
There are several logic families with different characteristics (power consumption, speed, cost, size), such as RDL (resistor–diode logic), RTL (resistor–transistor logic), DTL (diode–transistor logic), TTL (transistor–transistor logic) and CMOS. There are also sub-variants, e.g. standard CMOS logic versus advanced types that still use CMOS technology but include optimizations to avoid loss of speed due to slower PMOS transistors. The simplest family of logic gates uses bipolar transistors and is called resistor–transistor logic (RTL). Unlike simple diode logic gates (which do not have a gain element), RTL gates can be cascaded indefinitely to produce more complex logic functions. RTL gates were used in early integrated circuits. For higher speed and better density, the resistors used in RTL were replaced by diodes, resulting in diode–transistor logic (DTL). Transistor–transistor logic (TTL) then supplanted DTL. As integrated circuits became more complex, bipolar transistors were replaced with smaller field-effect transistors (MOSFETs); see PMOS and NMOS. To reduce power consumption still further, most contemporary chip implementations of digital systems now use CMOS logic. CMOS uses complementary (both n-channel and p-channel) MOSFET devices to achieve high speed with low power dissipation. Many other types of logic gates exist beyond these.[27] A three-state logic gate is a type of logic gate that can have three different outputs: high (H), low (L) and high-impedance (Z). The high-impedance state plays no role in the logic, which is strictly binary. These devices are used on buses of the CPU to allow multiple chips to send data. A group of three-state outputs driving a line with a suitable control circuit is basically equivalent to a multiplexer, which may be physically distributed over separate devices or plug-in cards. In electronics, a high output would mean the output is sourcing current from the positive power terminal (positive voltage).
A low output would mean the output is sinking current to the negative power terminal (zero voltage). High impedance would mean that the output is effectively disconnected from the circuit. Non-electronic implementations are varied, though few of them are used in practical applications. Many early electromechanical digital computers, such as the Harvard Mark I, were built from relay logic gates, using electromechanical relays. Logic gates can be made using pneumatic devices, such as the Sorteberg relay, or mechanical logic gates, including on a molecular scale.[29] Various types of fundamental logic gates have been constructed using molecules (molecular logic gates), which are based on chemical inputs and spectroscopic outputs.[30] Logic gates have been made out of DNA (see DNA nanotechnology)[31] and used to create a computer called MAYA (see MAYA-II). Logic gates can be made from quantum mechanical effects; see quantum logic gate. Photonic logic gates use nonlinear optical effects. In principle, any method that leads to a gate that is functionally complete (for example, either a NOR or a NAND gate) can be used to make any kind of digital logic circuit. Note that the use of three-state logic for bus systems is not needed and can be replaced by digital multiplexers, which can be built using only simple logic gates (such as NAND gates, NOR gates, or AND and OR gates).
https://en.wikipedia.org/wiki/Logic_gates
An and-inverter graph (AIG) is a directed, acyclic graph that represents a structural implementation of the logical functionality of a circuit or network. An AIG consists of two-input nodes representing logical conjunction, terminal nodes labeled with variable names, and edges optionally containing markers indicating logical negation. This representation of a logic function is rarely structurally efficient for large circuits, but is an efficient representation for manipulation of Boolean functions. Typically, the abstract graph is represented as a data structure in software.

Conversion from the network of logic gates to AIGs is fast and scalable. It only requires that every gate be expressed in terms of AND gates and inverters. This conversion does not lead to an unpredictable increase in memory use and runtime. This makes the AIG an efficient representation in comparison with either the binary decision diagram (BDD) or the "sum-of-products" (ΣoΠ) form,[citation needed] that is, the canonical form in Boolean algebra known as the disjunctive normal form (DNF). The BDD and DNF may also be viewed as circuits, but they involve formal constraints that deprive them of scalability. For example, ΣoΠs are circuits with at most two levels, while BDDs are canonical, that is, they require that input variables be evaluated in the same order on all paths.

Circuits composed of simple gates, including AIGs, are an "ancient" research topic. The interest in AIGs started with Alan Turing's seminal 1948 paper[1] on neural networks, in which he described a randomized trainable network of NAND gates. Interest continued through the late 1950s[2] and into the 1970s, when various local transformations were developed. These transformations were implemented in several logic synthesis and verification systems, such as those of Darringer et al.[3] and Smith et al.,[4] which reduce circuits to improve area and delay during synthesis, or to speed up formal equivalence checking.
Several important techniques were discovered early at IBM, such as combining and reusing multi-input logic expressions and subexpressions, now known as structural hashing.

Recently there has been a renewed interest in AIGs as a functional representation for a variety of tasks in synthesis and verification. That is because representations popular in the 1990s (such as BDDs) have reached their limits of scalability in many of their applications.[citation needed] Another important development was the recent emergence of much more efficient Boolean satisfiability (SAT) solvers. When coupled with AIGs as the circuit representation, they lead to remarkable speedups in solving a wide variety of Boolean problems.[citation needed]

AIGs have found successful use in diverse EDA applications. A well-tuned combination of AIGs and Boolean satisfiability has made an impact on formal verification, including both model checking and equivalence checking.[5] Another recent work shows that efficient circuit compression techniques can be developed using AIGs.[6] There is a growing understanding that logic and physical synthesis problems can be solved using simulation and Boolean satisfiability to compute functional properties (such as symmetries)[7] and node flexibilities (such as don't-care terms, resubstitutions, and SPFDs).[8][9][10] Mishchenko et al. show that AIGs are a promising unifying representation, which can bridge logic synthesis, technology mapping, physical synthesis, and formal verification. This is, to a large extent, due to the simple and uniform structure of AIGs, which allows rewriting, simulation, mapping, placement, and verification to share the same data structure.

In addition to combinational logic, AIGs have also been applied to sequential logic and sequential transformations.
Specifically, the method of structural hashing was extended to work for AIGs with memory elements (such as D-type flip-flops with an initial state, which, in general, can be unknown), resulting in a data structure that is specifically tailored for applications related to retiming.[11]

Ongoing research includes implementing a modern logic synthesis system completely based on AIGs. The prototype called ABC features an AIG package, several AIG-based synthesis and equivalence-checking techniques, as well as an experimental implementation of sequential synthesis. One such technique combines technology mapping and retiming in a single optimization step. These optimizations can be implemented using networks composed of arbitrary gates, but the use of AIGs makes them more scalable and easier to implement.
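The two central ideas of an AIG package, complemented edges and structural hashing, can be sketched in a short toy implementation. This is illustrative only (the class and method names are invented, and this is not the API of ABC or any real AIG library):

```python
# A minimal AIG sketch: two-input AND nodes, complemented edges encoded as
# (kind, ref, negated) literals, and structural hashing so that structurally
# identical AND nodes are created only once.

class AIG:
    def __init__(self):
        self.inputs = {}   # name -> input literal
        self.strash = {}   # canonical fanin pair -> existing AND literal
        self.nodes = []    # AND nodes as (lit_a, lit_b) fanin pairs

    def make_input(self, name):
        lit = ("in", name, False)
        self.inputs[name] = lit
        return lit

    @staticmethod
    def neg(lit):
        kind, ref, negated = lit
        return (kind, ref, not negated)

    def make_and(self, a, b):
        key = tuple(sorted([a, b], key=repr))   # commutativity -> same key
        if key not in self.strash:              # structural hashing lookup
            self.nodes.append(key)
            self.strash[key] = ("and", len(self.nodes) - 1, False)
        return self.strash[key]

    def make_or(self, a, b):
        # Every gate is expressed via AND and inverters (De Morgan):
        # a OR b = NOT(AND(NOT a, NOT b))
        return self.neg(self.make_and(self.neg(a), self.neg(b)))

    def eval(self, lit, env):
        kind, ref, negated = lit
        if kind == "in":
            v = env[ref]
        else:
            fa, fb = self.nodes[ref]
            v = self.eval(fa, env) and self.eval(fb, env)
        return (not v) if negated else v

if __name__ == "__main__":
    g = AIG()
    a, b = g.make_input("a"), g.make_input("b")
    f1 = g.make_or(a, b)
    f2 = g.make_or(b, a)     # hashed to the same AND node as f1
    assert len(g.nodes) == 1
    assert g.eval(f1, {"a": True, "b": False}) is True
```

Note how `make_or(b, a)` creates no new node: the sorted fanin key makes the hash table recognize the commuted AND, which is the reuse of subexpressions the article attributes to early IBM work.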
https://en.wikipedia.org/wiki/And-inverter_graph
In integrated circuits, depletion-load NMOS is a form of digital logic family that uses only a single power supply voltage, unlike earlier NMOS (n-type metal-oxide semiconductor) logic families that needed multiple power supply voltages. Although manufacturing these integrated circuits required additional processing steps, improved switching speed and the elimination of the extra power supply made this logic family the preferred choice for many microprocessors and other logic elements.

Depletion-mode n-type MOSFETs as load transistors allow single-voltage operation and achieve greater speed than is possible with enhancement-load devices alone. This is partly because the depletion-mode MOSFETs can be a better current source approximation than the simpler enhancement-mode transistor can, especially when no extra voltage is available (one of the reasons early PMOS and NMOS chips demanded several voltages). The inclusion of depletion-mode NMOS transistors in the manufacturing process demanded additional manufacturing steps compared to the simpler enhancement-load circuits; this is because depletion-load devices are formed by increasing the amount of dopant in the load transistors' channel region in order to adjust their threshold voltage. This is normally performed using ion implantation.

Although the CMOS process replaced most NMOS designs during the 1980s, some depletion-load NMOS designs are still produced, typically in parallel with newer CMOS counterparts. One example of this is the Z84015[1] and Z84C15.[2]

The original two types of MOSFET logic gates, PMOS and NMOS, were developed by Frosch and Derick in 1957 at Bell Labs.[3] Following this research, Atalla and Kahng demonstrated a working MOS device with their Bell Labs team in 1960.[4][5] Their team included E. E. LaBate and E. I. Povilonis, who fabricated the device; M. O. Thurston, L. A. D'Asaro, and J. R. Ligenza, who developed the diffusion processes; and H. K. Gummel and R.
Lindner, who characterized the device.[6] However, the NMOS devices were impractical, and only the PMOS type were practical working devices.[7]

In 1965, Chih-Tang Sah, Otto Leistiko and A. S. Grove at Fairchild Semiconductor fabricated several NMOS devices with channel lengths between 8 μm and 65 μm.[8] Dale L. Critchlow and Robert H. Dennard at IBM also fabricated NMOS devices in the 1960s. The first IBM NMOS product was a memory chip with 1 kb of data and 50–100 ns access time, which entered large-scale manufacturing in the early 1970s. This led to MOS semiconductor memory replacing earlier bipolar and ferrite-core memory technologies in the 1970s.[9]

In the late 1960s, bipolar junction transistors were faster than the (p-channel) MOS transistors then used and were more reliable, but they also consumed much more power, required more area, and demanded a more complicated manufacturing process. MOS ICs were considered interesting but inadequate for supplanting the fast bipolar circuits in anything but niche markets, such as low-power applications. One of the reasons for the low speed was that MOS transistors had gates made of aluminum, which led to considerable parasitic capacitances using the manufacturing processes of the time. The introduction of transistors with gates of polycrystalline silicon (which became the de facto standard from the mid-1970s to the early 2000s) was an important first step in reducing this handicap. This new self-aligned silicon-gate transistor was introduced by Federico Faggin at Fairchild Semiconductor in early 1968; it was a refinement (and the first working implementation) of ideas and work by John C. Sarace, Tom Klein and Robert W. Bower (around 1966–67) for a transistor with lower parasitic capacitances that could be manufactured as part of an IC (and not only as a discrete component). This new type of pMOS transistor was 3–5 times as fast (per watt) as the aluminum-gate pMOS transistor, and it needed less area, had much lower leakage and higher reliability.
The same year, Faggin also built the first IC using the new transistor type, the Fairchild 3708 (an 8-bit analog multiplexer with decoder), which demonstrated substantially improved performance over its metal-gate counterpart. In less than 10 years, the silicon-gate MOS transistor replaced bipolar circuits as the main vehicle for complex digital ICs.

There are a couple of drawbacks associated with PMOS: the electron holes that are the charge (current) carriers in PMOS transistors have lower mobility than the electrons that are the charge carriers in NMOS transistors (a ratio of approximately 2.5); furthermore, PMOS circuits do not interface easily with low-voltage positive logic such as DTL logic and TTL logic (the 7400 series). However, PMOS transistors are relatively easy to make and were therefore developed first: ionic contamination of the gate oxide from etching chemicals and other sources can very easily prevent the (electron-based) NMOS transistors from switching off, while the effect in the (electron-hole-based) PMOS transistors is much less severe. Fabrication of NMOS transistors therefore has to be many times cleaner than bipolar processing in order to produce working devices.

Early work on NMOS integrated circuit (IC) technology was presented in a brief IBM paper at ISSCC in 1969. Hewlett-Packard then started to develop NMOS IC technology to get the promising speed and easy interfacing for its calculator business.[10] Tom Haswell at HP eventually solved many problems by using purer raw materials (especially aluminum for interconnects) and by adding a bias voltage to make the gate threshold large enough; this back-gate bias remained a de facto standard solution to (mainly) sodium contaminants in the gates until the development of ion implantation (see below). Already by 1970, HP was making good enough NMOS ICs and had characterized them well enough that Dave Maitland was able to write an article about NMOS in the December 1970 issue of Electronics magazine.
However, NMOS remained uncommon in the rest of the semiconductor industry until 1973.[11]

The production-ready NMOS process enabled HP to develop the industry's first 4-kbit IC ROM. Motorola eventually served as a second source for these products and so became one of the first commercial semiconductor vendors to master the NMOS process, thanks to Hewlett-Packard. A while later, the startup company Intel announced a 1-kbit pMOS DRAM, called the 1102, developed as a custom product for Honeywell (an attempt to replace magnetic core memory in their mainframe computers). HP's calculator engineers, who wanted a similar but more robust product for the 9800 series calculators, contributed IC fabrication experience from their 4-kbit ROM project to help improve the Intel DRAM's reliability, operating voltage, and temperature range. These efforts contributed to the heavily enhanced Intel 1103 1-kbit pMOS DRAM, which was the world's first commercially available DRAM IC. It was formally introduced in October 1970, and became Intel's first really successful product.[12]

Early MOS logic had one transistor type, which is enhancement mode, so that it can act as a logic switch. Since suitable resistors were hard to make, the logic gates used saturated loads; that is, to make the one type of transistor act as a load resistor, the transistor had to be kept always on by tying its gate to the power supply (the more negative rail for PMOS logic, or the more positive rail for NMOS logic). Since the current in a device connected that way goes as the square of the voltage across the load, it provides poor pull-up speed relative to its power consumption when pulled down. A resistor (with the current simply proportional to the voltage) would be better, and a current source (with the current fixed, independent of voltage) better yet. A depletion-mode device with its gate tied to the opposite supply rail is a much better load than an enhancement-mode device, acting somewhere between a resistor and a current source.
The first depletion-load NMOS circuits were pioneered and made by the DRAM manufacturer Mostek, which made depletion-mode transistors available for the design of the original Zilog Z80 in 1975–76.[13] Mostek had the ion implantation equipment needed to create a doping profile more precise than possible with diffusion methods, so that the threshold voltage of the load transistors could be adjusted reliably. At Intel, depletion load was introduced in 1974 by Federico Faggin, an ex-Fairchild engineer and later the founder of Zilog. Depletion load was first employed for a redesign of one of Intel's most important products at the time, a +5 V-only 1-kbit NMOS SRAM called the 2102 (using more than 6000 transistors[14]). The result of this redesign was the significantly faster 2102A, where the highest-performing versions of the chip had access times of less than 100 ns, taking MOS memories close to the speed of bipolar RAMs for the first time.[15]

Depletion-load NMOS processes were also used by several other manufacturers to produce many incarnations of popular 8-bit, 16-bit, and 32-bit CPUs. Similarly to early PMOS and NMOS CPU designs using enhancement-mode MOSFETs as loads, depletion-load NMOS designs typically employed various types of dynamic logic (rather than just static gates) or pass transistors used as dynamic clocked latches. These techniques can enhance the area economy considerably, although the effect on speed is complex. Processors built with depletion-load NMOS circuitry include the 6800 (in later versions[16]), the 6502, Signetics 2650, 8085, 6809, 8086, Z8000, NS32016, and many others (whether or not the HMOS processors below are included, as special cases). A large number of support and peripheral ICs were also implemented using (often static) depletion-load based circuitry.
However, there were never any standardized logic families in NMOS, such as the bipolar 7400 series and the CMOS 4000 series, although designs with several second-source manufacturers often achieved something of a de facto standard component status. One example of this is the NMOS 8255 PIO design, originally intended as an 8085 peripheral chip, that has been used in Z80 and x86 embedded systems and many other contexts for several decades. Modern low-power versions are available as CMOS or BiCMOS implementations, similar to the 7400 series.

Intel's own depletion-load NMOS process was known as HMOS, for High-density, short-channel MOS. The first version was introduced in late 1976 and first used for their static RAM products;[17] it was soon being used for faster and/or less power-hungry versions of the 8085, 8086, and other chips. HMOS continued to be improved and went through four distinct generations. According to Intel, HMOS II (1979) provided twice the density and four times the speed/power product of other typical contemporary depletion-load NMOS processes.[18] This version was widely licensed by third parties, including (among others) Motorola, who used it for their Motorola 68000, and Commodore Semiconductor Group, who used it for their MOS Technology 8502 die-shrunk MOS 6502. The original HMOS process, later referred to as HMOS I, had a channel length of 3 microns, which was reduced to 2 for HMOS II, and 1.5 for HMOS III. By the time HMOS III was introduced in 1982, Intel had begun a switch to their CHMOS process, a CMOS process using design elements of the HMOS lines. One final version of the system was released, HMOS-IV. A significant advantage of the HMOS line was that each generation was deliberately designed to allow existing layouts to die-shrink with no major changes.
Various techniques were introduced to ensure the systems worked as the layout changed.[19][20] HMOS, HMOS II, HMOS III, and HMOS IV were together used for many different kinds of processors: the 8085, 8048, 8051, 8086, 80186, 80286, and many others, but also for several generations of the same basic design; see datasheets.

In the mid-1980s, faster CMOS variants using similar HMOS process technology, such as Intel's CHMOS I, II, III, IV, etc., started to supplant n-channel HMOS for applications such as the Intel 80386 and certain microcontrollers. A few years later, in the late 1980s, BiCMOS was introduced for high-performance microprocessors as well as for high-speed analog circuits. Today, most digital circuits, including the ubiquitous 7400 series, are manufactured using various CMOS processes with a range of different topologies employed. This means that, in order to enhance speed and save die area (transistors and wiring), high-speed CMOS designs often employ other elements than just the complementary static gates and the transmission gates of typical slow low-power CMOS circuits (the only CMOS type during the 1960s and 1970s). These methods use significant amounts of dynamic circuitry in order to construct the larger building blocks on the chip, such as latches, decoders, multiplexers, and so on, and evolved from the various dynamic methodologies developed for NMOS and PMOS circuits during the 1970s.

Compared to static CMOS, all variants of NMOS (and PMOS) are relatively power-hungry in steady state. This is because they rely on load transistors working as resistors, where the quiescent current determines the maximum possible load at the output as well as the speed of the gate (i.e. with other factors constant). This contrasts with the power consumption characteristics of static CMOS circuits, which is due only to the transient power draw when the output state is changed and the p- and n-transistors thereby briefly conduct at the same time.
However, this is a simplified view, and a more complete picture has to also include the fact that even purely static CMOS circuits have significant leakage in modern tiny geometries, as well as the fact that modern CMOS chips often contain dynamic and/or domino logic with a certain amount of pseudo-NMOS circuitry.[21]

Depletion-load processes differ from their predecessors in the way the Vdd voltage source, representing logic 1, connects to each gate. In both technologies, each gate contains one NMOS transistor which is permanently turned on and connected to Vdd. When the transistors connecting to logic 0 turn off, this pull-up transistor determines the output to be 1 by default. In standard NMOS, the pull-up is the same kind of transistor as is used for logic switches. As the output voltage approaches a value less than Vdd, it gradually switches itself off. This slows the 0-to-1 transition, resulting in a slower circuit. Depletion-load processes replace this transistor with a depletion-mode NMOS at a constant gate bias, with the gate tied directly to the source. This alternative type of transistor acts as a current source until the output approaches 1, then acts as a resistor. The result is a faster 0-to-1 transition.

Depletion-load circuits consume less power than enhancement-load circuits at the same speed. In both cases the connection to 1 is always active, even when the connection to 0 is also active. This results in high static power consumption. The amount of waste depends on the strength, or physical size, of the pull-up. Both (enhancement-mode) saturated-load and depletion-mode pull-up transistors use the greatest power when the output is stable at 0, so this loss is considerable. Because the strength of a depletion-mode transistor falls off less on the approach to 1, it may reach 1 faster despite starting slower, i.e. conducting less current at the beginning of the transition and at steady state.
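The contrast between the two pull-up behaviors can be sketched numerically. The following uses a first-order square-law MOSFET model with invented parameter values, so it is purely illustrative of the qualitative point above, not a circuit-accurate simulation:

```python
# First-order comparison of an enhancement-mode saturated load versus a
# depletion-mode load charging a capacitive output from 0 toward Vdd.
# Square-law transistor model; all parameter values are made up.

VDD, C, K = 5.0, 1e-12, 1e-4     # supply [V], load cap [F], transconductance [A/V^2]
VT_ENH, VT_DEP = 1.0, -2.0       # threshold voltages [V]

def i_enh(v_out):
    # Gate tied to Vdd: Vgs = Vdd - Vout. Current collapses quadratically
    # and cuts off entirely at Vout = Vdd - Vt (the output never reaches Vdd).
    vgs_minus_vt = VDD - v_out - VT_ENH
    return 0.5 * K * vgs_minus_vt**2 if vgs_minus_vt > 0 else 0.0

def i_dep(v_out):
    # Gate tied to source: Vgs = 0. Constant (current-source-like) while
    # saturated; resistor-like in the triode region as the output nears Vdd.
    vds, von = VDD - v_out, -VT_DEP
    if vds >= von:                             # saturation
        return 0.5 * K * von**2
    return K * (von * vds - 0.5 * vds**2)      # triode

def charge(i_of_v, t_end=50e-9, dt=1e-11):
    v = 0.0
    for _ in range(int(t_end / dt)):
        v += i_of_v(v) * dt / C                # Euler step of dV/dt = I/C
    return v

if __name__ == "__main__":
    v_e, v_d = charge(i_enh), charge(i_dep)
    # The enhancement load stalls near Vdd - Vt = 4 V; the depletion load
    # reaches essentially the full Vdd despite a smaller initial current.
    print(f"after 50 ns: enhancement {v_e:.2f} V, depletion {v_d:.2f} V")
    assert v_e < 4.0 < v_d
```

This reproduces the behavior described above: the depletion load starts with less current but sustains it, so it completes the 0-to-1 transition, while the saturated enhancement load starts strong and then starves itself.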
https://en.wikipedia.org/wiki/Depletion-load_NMOS_logic
In theoretical computer science, a circuit is a model of computation in which input values proceed through a sequence of gates, each of which computes a function. Circuits of this kind provide a generalization of Boolean circuits and a mathematical model for digital logic circuits. Circuits are defined by the gates they contain and the values the gates can produce. For example, the values in a Boolean circuit are Boolean values, and the circuit includes conjunction, disjunction, and negation gates. The values in an integer circuit are sets of integers and the gates compute set union, set intersection, and set complement, as well as the arithmetic operations addition and multiplication.

A circuit is a triplet (M, L, G), where M is a set of values, L is a set of gate labels (functions on tuples over M), and G is a directed acyclic graph. The vertices of the graph are called gates. For each gate g of in-degree i, the gate g can be labeled by an element ℓ of L if and only if ℓ is defined on M^i.

The gates of in-degree 0 are called inputs or leaves. The gates of out-degree 0 are called outputs. If there is an edge from gate g to gate h in the graph G then h is called a child of g. We suppose there is an order on the vertices of the graph, so we can speak of the k-th child of a gate when k is less than or equal to the out-degree of the gate.

The size of a circuit is the number of nodes of the circuit. The depth of a gate g is the length of the longest path in G beginning at g up to an output gate. In particular, the gates of out-degree 0 are the only gates of depth 1. The depth of a circuit is the maximum depth of any gate. Level i is the set of all gates of depth i.
A levelled circuit is a circuit in which the edges to gates of depth i come only from gates of depth i+1 or from the inputs. In other words, edges only exist between adjacent levels of the circuit. The width of a levelled circuit is the maximum size of any level.

The exact value V(g) of a gate g with in-degree i and label l is defined recursively for all gates g: V(g) = l(V(g_1), …, V(g_i)), where each g_j is a parent of g (for a leaf, i = 0 and V(g) is simply the value of its label). The value of the circuit is the value of each of the output gates.

The labels of the leaves can also be variables which take values in M. If there are n leaves, then the circuit can be seen as a function from M^n to M. It is then usual to consider a family of circuits (C_n), a sequence of circuits indexed by the integers, where the circuit C_n has n variables. Families of circuits can thus be seen as functions from M^* to M. The notions of size, depth and width can be naturally extended to families of functions, becoming functions from ℕ to ℕ; for example, size(n) is the size of the n-th circuit of the family.

Computing the output of a given Boolean circuit on a specific input is a P-complete problem. If the input is an integer circuit, however, it is unknown whether this problem is decidable. Circuit complexity attempts to classify Boolean functions with respect to the size or depth of circuits that can compute them.
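The recursive value definition above can be rendered as a short sketch, specialized to the Boolean case. The class and function names here are invented for illustration:

```python
# A toy version of the (M, L, G) circuit model for Boolean values: a DAG of
# gates, each labeled by a function whose arity equals its in-degree; leaves
# are labeled by variable names and read their values from an environment.

class Gate:
    def __init__(self, label, parents=()):
        self.label = label             # variable name for a leaf, else a function
        self.parents = tuple(parents)  # ordered fan-in (in-degree = len)

    def value(self, env):
        """Exact value V(g): recurse over the parent gates feeding g."""
        if not self.parents:
            return env[self.label]
        return self.label(*(p.value(env) for p in self.parents))

def size(*outputs):
    """Circuit size: number of distinct gates reachable from the outputs."""
    seen, stack = set(), list(outputs)
    while stack:
        g = stack.pop()
        if id(g) not in seen:
            seen.add(id(g))
            stack.extend(g.parents)
    return len(seen)

if __name__ == "__main__":
    x, y = Gate("x"), Gate("y")
    g_and = Gate(lambda a, b: a and b, (x, y))
    g_not = Gate(lambda a: not a, (g_and,))   # NAND as NOT(AND(x, y))
    assert g_not.value({"x": True, "y": True}) is False
    assert g_not.value({"x": True, "y": False}) is True
    assert size(g_not) == 4
```

An integer circuit would look the same, with sets of integers as the values and union, intersection, complement, addition and multiplication as the labels.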
https://en.wikipedia.org/wiki/Digital_circuit
An electronic symbol is a pictogram used to represent various electrical and electronic devices or functions, such as wires, batteries, resistors, and transistors, in a schematic diagram of an electrical or electronic circuit. These symbols are largely standardized internationally today, but may vary from country to country, or engineering discipline, based on traditional conventions.

The graphic symbols used for electrical components in circuit diagrams are covered by national and international standards, in particular: The standards do not all agree, and use of unusual (even if standardized) symbols can lead to confusion and errors.[2] Symbol usage is sometimes idiosyncratic to engineering disciplines, and national or local variations to international standards exist. For example, lighting and power symbols used as part of architectural drawings may be different from symbols for devices used in electronics. Symbols shown are typical examples, not a complete list.[3][4]

The shorthand for ground is GND. Optionally, the triangle in the middle symbol may be filled in. Voltage text should be placed next to each battery symbol, such as "3V". It is very common for potentiometer and rheostat symbols to be used for many types of variable resistors and trimmers. Optionally, the triangle in these symbols may be filled in, or a line may be drawn through the triangle (less desirable). The words anode and cathode aren't part of the diode symbols. For instructional purposes, sometimes two letters (A/C or A/K) are placed next to diode symbols, similar to how the letters C/B/E or D/G/S are placed next to transistor symbols. "K" is often used instead of "C", because the origin of the word cathode is kathodos, and to avoid confusion with "C" for capacitors in the silkscreen of printed circuit boards. Voltage text should be placed next to each Zener and TVS diode symbol, such as "5.1V". There are many ways to draw a single-phase bridge rectifier symbol. Some simplified symbols don't show the internal diodes.
An inductor can be drawn either as a series of loops or a series of half-circles. Voltage text should be placed on both sides of power transformers, such as 120V (input side) and 6.3V (output side). Optionally, transistor symbols may include a circle.[6] Note: the pin letters B/C/E and G/D/S aren't part of the transistor symbols. For multiple-pole switches, a dotted or dashed line can be included to indicate that two or more switches actuate at the same time (see DPST and DPDT examples below). Relay symbols are a combination of an inductor symbol and a switch symbol. Note: the pin letters in these symbols aren't part of the standard relay symbol. The LED is located in the diode section. TVS and Zener diodes are located in the diode section. Speaker symbols sometimes include an internal inductor symbol. Impedance text should be placed next to each speaker symbol, such as "8 ohms". There are numerous connector symbol variations.

For the symbols below: A and B are inputs, Q is output. Note: these letters are not part of the symbols. There are variations of these logic gate symbols. Depending on the IC, the two-input gates below may have: 1) two or more inputs; 2) infrequently, a second inverted Q̅ output too. The above logic symbols may have additional I/O variations too: 1) Schmitt trigger inputs, 2) tri-state outputs, 3) open-collector or open-drain outputs (not shown).

For the symbols below: Q is output, Q̅ is inverted output, E is enable input, the internal triangle shape is the clock input, S is Set, R is Reset (some datasheets use clear (CLR) instead of reset along the bottom). There are variations of these flip-flop symbols. Depending on the IC, a flip-flop may have: 1) one or both outputs (Q only, Q̅ only, both Q & Q̅); 2) one or both forced inputs along top & bottom (R only, S only, both R & S); 3) some inputs may be inverted. Note: the outside text isn't part of these symbols. Frequency text should be placed next to each oscillator symbol, such as "16MHz".
The shape of some electronic symbols has changed over time. The following historical electronic symbols can be found in old electronic books, magazines and schematics, and are now considered obsolete. All of the following are obsolete capacitor symbols.
https://en.wikipedia.org/wiki/Electronic_symbol
The ESPRESSO logic minimizer is a computer program using heuristic and specific algorithms for efficiently reducing the complexity of digital logic gate circuits.[1] ESPRESSO-I was originally developed at IBM by Robert K. Brayton et al. in 1982[2][3] and improved as ESPRESSO-II in 1984.[4][5] Richard L. Rudell later published the variant ESPRESSO-MV in 1986[6] and ESPRESSO-EXACT in 1987.[7][8][5] Espresso has inspired many derivatives.

Electronic devices are composed of numerous blocks of digital circuits, the combination of which performs some required task. The efficient implementation of logic functions in the form of logic gate circuits (such that no more logic gates are used than are necessary) is necessary to minimize production costs and/or maximize a device's performance. All digital systems are composed of two elementary functions: memory elements for storing information, and combinational circuits that transform that information. State machines, like counters, are a combination of memory elements and combinational logic circuits. Since memory elements are standard logic circuits, they are selected out of a limited set of alternative circuits; so designing digital functions comes down to designing the combinational gate circuits and interconnecting them.

In general, the instantiation of logic circuits from high-level abstraction is referred to as logic synthesis, which can be carried out by hand, but usually some formal method by computer is applied. In this article the design methods for combinational logic circuits are briefly summarized.

The starting point for the design of a digital logic circuit is its desired functionality, derived from the analysis of the system as a whole, of which the logic circuit is to form a part. The description can be stated in some algorithmic form or by logic equations, but may be summarized in the form of a table as well.
The below example shows a part of such a table for a 7-segment display driver that translates the binary code for the values of a decimal digit into the signals that cause the respective segments of the display to light up.

The implementation process starts with a logic minimization phase, to be described below, in order to simplify the function table by combining the separate terms into larger ones containing fewer variables. Next, the minimized result may be split up into smaller parts by a factorization procedure and is eventually mapped onto the available basic logic cells of the target technology. This operation is commonly referred to as logic optimization.[9]

Minimizing Boolean functions by hand using the classical Karnaugh maps is a laborious, tedious, and error-prone process. It isn't suited for more than six input variables and is practical only for up to four variables, while product-term sharing for multiple output functions is even harder to carry out.[10] Moreover, this method doesn't lend itself to being automated in the form of a computer program. However, since modern logic functions are generally not constrained to such a small number of variables, while the cost as well as the risk of making errors is prohibitive for manual implementation of logic functions, the use of computers became indispensable.

The first alternative method to become popular was the tabular method developed by Willard Quine and Edward McCluskey. Starting with the truth table for a set of logic functions, by combining the minterms for which the functions are active (the ON-cover) or for which the function value is irrelevant (the Don't-Care-cover or DC-cover), a set of prime implicants is composed.
Finally, a systematic procedure is followed to find the smallest set of prime implicants with which the output functions can be realised.[11][12]

Although this Quine–McCluskey algorithm is very well suited to be implemented in a computer program, the result is still far from efficient in terms of processing time and memory usage. Adding a variable to the function will roughly double both of them, because the truth table length increases exponentially with the number of variables. A similar problem occurs when increasing the number of output functions of a combinational function block. As a result, the Quine–McCluskey method is practical only for functions with a limited number of input variables and output functions.

A different approach to this issue is followed in the ESPRESSO algorithm, developed by Brayton et al. at the University of California, Berkeley.[4][3] It is a resource- and performance-efficient algorithm aimed at solving the heuristic hazard-free two-level logic minimization problem.[13] Rather than expanding a logic function into minterms, the program manipulates "cubes", representing the product terms in the ON-, DC-, and OFF-covers iteratively. Although the minimization result is not guaranteed to be the global minimum, in practice this is very closely approximated, while the solution is always free from redundancy. Compared to the other methods, this one is essentially more efficient, reducing memory usage and computation time by several orders of magnitude. Its name reflects the way of instantly making a cup of fresh coffee.[further explanation needed] There is hardly any restriction to the number of variables, output functions and product terms of a combinational function block. In general, e.g. tens of variables with tens of output functions are readily dealt with.
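For contrast with ESPRESSO's cube-based heuristics, the tabular prime-implicant step of Quine–McCluskey described above can be sketched in a few lines. This is a toy single-output version (function names invented), which makes the exponential blow-up easy to see: the first iteration already touches every minterm string.

```python
# Toy Quine–McCluskey prime-implicant generation for one output function.
# Terms are fixed-width bit strings; '-' marks a variable eliminated by
# combining two terms that differ in exactly one position.

def combine(a, b):
    """Merge two terms differing in exactly one defined bit, else None."""
    diff, merged = 0, []
    for x, y in zip(a, b):
        if x != y:
            if '-' in (x, y):       # dash patterns must line up
                return None
            diff += 1
            merged.append('-')
        else:
            merged.append(x)
    return ''.join(merged) if diff == 1 else None

def prime_implicants(minterms, n_vars):
    terms = {format(m, f'0{n_vars}b') for m in minterms}
    primes = set()
    while terms:
        used, nxt = set(), set()
        for a in terms:
            for b in terms:
                c = combine(a, b)
                if c:
                    used.update({a, b})
                    nxt.add(c)
        primes |= terms - used      # terms that combined no further are prime
        terms = nxt
    return primes

if __name__ == "__main__":
    # f(a, b, c) with ON-set {1, 3, 5, 7} collapses to the single cube
    # '--1', i.e. f = c.
    assert prime_implicants({1, 3, 5, 7}, 3) == {'--1'}
    assert prime_implicants({0, 1, 2, 3}, 3) == {'0--'}
```

A full Quine–McCluskey implementation would follow this with the covering step (the prime-implicant chart); ESPRESSO instead expands, reduces, and makes irredundant a cube cover directly, never enumerating the minterms.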
The input for ESPRESSO is a function table of the desired functionality; the result is a minimized table, describing either the ON-cover or the OFF-cover of the function, depending on the selected options. By default, the product terms will be shared as much as possible by the several output functions, but the program can be instructed to handle each of the output functions separately. This allows for efficient implementation in two-level logic arrays such as a PLA (programmable logic array) or a PAL (programmable array logic). The ESPRESSO algorithm proved so successful that it has been incorporated as a standard logic function minimization step into virtually every contemporary logic synthesis tool. For implementing a function in multi-level logic, the minimization result is optimized by factorization and mapped onto the available basic logic cells in the target technology, whether this concerns a field-programmable gate array (FPGA) or an application-specific integrated circuit (ASIC). The original ESPRESSO program is available as C source code from the University of California, Berkeley website. The last release was version 2.3, dated 1988.[14] The ESPRESSO-AB and EQNTOTT (equation to truth table) programs, an updated version of ESPRESSO for modern POSIX systems, are available in Debian Linux distribution (.deb) file format as well as the C source code. The last release was version 9.0, dated 2008.[15] A Windows- and C++20-compatible version was ported to GitHub in 2020.[16] Logic Friday is a free Windows program that provides a graphical interface to Espresso, as well as to misII, another module in the Berkeley Octtools package. With Logic Friday users can enter a logic function as a truth table, equation, or gate diagram, minimize the function, and then view the results in both of the other two representations. The last release was version 1.1.4, dated 2012.[17] Minilog is a free Windows program that provides logic minimization exploiting this Espresso algorithm.
It is able to generate a two-level gate implementation for a combinational function block with up to 40 inputs and outputs or a synchronous state machine with up to 256 states. It is part of the Publicad educational design package. ESPRESSO-IISOJS is a JavaScript implementation of ESPRESSO-II for single output functions. It employs unit propagation as an additional optimization technique for the various algorithms in ESPRESSO-II that are based on the unate recursive paradigm. Another addition is allowing control over when literals can be raised, which can be exploited to effectively minimize Kleene logic functions.[18]
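For reference, the function table that ESPRESSO accepts is a plain-text PLA description. The fragment below is a hand-written sketch for the 7-segment example discussed earlier; the segment ordering and encodings are this example's assumptions, not taken from any of the programs' documentation, and only two of the ten digit rows are spelled out.

```
.i 4            # four input bits: a BCD digit
.o 7            # seven outputs: segments a..g
0000 1111110    # digit 0 lights segments a-f
0001 0110000    # digit 1 lights segments b and c
# ... rows for digits 2 through 9 ...
101- -------    # input codes 10..15 never occur, so every
11-- -------    # output is a don't-care (the DC-cover)
.e
```

Rows whose outputs are all don't-cares give the minimizer freedom to fold those input combinations into larger product terms.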
https://en.wikipedia.org/wiki/Espresso_heuristic_logic_minimizer
In electronics, emitter-coupled logic (ECL) is a high-speed integrated circuit bipolar transistor logic family. ECL uses a bipolar junction transistor (BJT) differential amplifier with single-ended input and limited emitter current to avoid the saturated (fully on) region of operation and the resulting slow turn-off behavior.[4] As the current is steered between two legs of an emitter-coupled pair, ECL is sometimes called current-steering logic (CSL),[5] current-mode logic (CML)[6] or current-switch emitter-follower (CSEF) logic.[7] In ECL, the transistors are never in saturation, the input and output voltages have a small swing (0.8 V), the input impedance is high and the output impedance is low. As a result, the transistors change states quickly, gate delays are low, and the fanout capability is high.[8] In addition, the essentially constant current draw of the differential amplifiers minimizes delays and glitches due to supply-line inductance and capacitance, and the complementary outputs decrease the propagation time of the whole circuit by reducing inverter count. ECL's major disadvantage is that each gate continuously draws current, which means that it requires (and dissipates) significantly more power than other logic families, especially when quiescent. The equivalent of emitter-coupled logic made from FETs is called source-coupled FET logic (SCFL).[9] A variation of ECL in which all signal paths and gate inputs are differential is known as differential current switch (DCS) logic.[10] ECL was invented in August 1956 at IBM by Hannon S. Yourke.[12][13] Originally called current-steering logic, it was used in the Stretch, IBM 7090, and IBM 7094 computers.[11] The logic was also called a current-mode circuit.[14] It was also used to make the IBM Advanced Solid Logic Technology (ASLT) circuits in the IBM 360/91.[15][16][17] Yourke's current switch was a differential amplifier whose input logic levels were different from the output logic levels.
"In current mode operation, however, the output signal consists of voltage levels which vary about a reference level different from the input reference level."[18] In Yourke's design, the two logic reference levels differed by 3 volts. Consequently, two complementary versions were used: an NPN version and a PNP version. The NPN output could drive PNP inputs, and vice versa. "The disadvantages are that more different power supply voltages are needed, and both pnp and npn transistors are required."[11] Instead of alternating NPN and PNP stages, another coupling method employed Zener diodes and resistors to shift the output logic levels to be the same as the input logic levels.[19] Beginning in the early 1960s, ECL circuits were implemented on monolithic integrated circuits. They consisted of a differential-amplifier input stage to perform logic, followed by an emitter-follower stage to drive outputs and shift the output voltages so they would be compatible with the inputs. The emitter-follower output stages could also be used to perform wired-OR logic. Motorola introduced its first digital monolithic integrated circuit line, MECL I, in 1962.[20] Motorola developed several improved series, with MECL II in 1966, MECL III in 1968 with 1-nanosecond gate propagation time and 300 MHz flip-flop toggle rates, and the 10,000 series (with lower power consumption and controlled edge speeds) in 1971.[21] The MECL 10H family was introduced in 1981.[2] Fairchild introduced the F100K family in 1975.[22][23] The ECLinPS ("ECL in picoseconds") family was introduced in 1987.[24] ECLinPS has a 500 ps single-gate delay and a 1.1 GHz flip-flop toggle frequency.[25] The ECLinPS family parts are available from multiple sources, including Arizona Microtek, Micrel (subsequently acquired by Microchip Technology Inc.), National Semiconductor, and ON Semiconductor.[26] The high power consumption of ECL has meant that it is used mainly when high speed is a vital requirement.
Older high-end mainframe computers, such as the Enterprise System/9000 members of IBM's ESA/390 computer family, used ECL,[27] as did the Cray-1[28] and first-generation Amdahl mainframes. (Current IBM mainframes use CMOS.[29]) Beginning in 1975, Digital Equipment Corporation's highest-performance processors were all based on multi-chip ECL CPUs, from the ECL KL10 through the ECL VAX 8000 and finally the VAX 9000. By 1991, the CMOS NVAX was launched, which offered comparable performance to the VAX 9000 despite costing 1/25 as much and consuming considerably less power.[30] The MIPS R6000 computers also used ECL. Some of these computer designs used ECL gate arrays. ECL is based on an emitter-coupled (long-tailed) pair, shaded red in the figure on the right. The left half of the pair (shaded yellow) consists of two parallel-connected input transistors T1 and T2 (an exemplary two-input gate is considered) implementing NOR logic. The base voltage of the right transistor T3 is held fixed by a reference voltage source, shaded light green: the voltage divider with diode thermal compensation (R1, R2, D1 and D2) and sometimes a buffering emitter follower (not shown in the picture); thus the emitter voltages are kept relatively steady. As a result, the common emitter resistor RE acts nearly as a current source. The output voltages at the collector load resistors RC1 and RC3 are shifted and buffered to the inverting and non-inverting outputs by the emitter followers T4 and T5 (shaded blue). The output emitter resistors RE4 and RE5 do not exist in all versions of ECL. In some cases, 50 Ω line termination resistors connected between the bases of the input transistors and −2 V of a driven gate act as emitter resistors of the driving gate.[31] The ECL circuit operation is considered below with the assumption that the input voltage is applied to T1's base, while T2's input is unused or a logical "0" is applied.
During the transition, the core of the circuit – the emitter-coupled pair (T1 and T3) – acts as a differential amplifier with single-ended input. The long-tail current source (RE) sets the total current flowing through the two legs of the pair. The input voltage controls the current flowing through the transistors by sharing it between the two legs, steering it all to one side when not near the switching point. The gain is higher than at the end states (see below) and the circuit switches quickly. At low input voltage (logical "0") or at high input voltage (logical "1") the differential amplifier is overdriven. One transistor (T1 or T3) is cut off, and the other (T3 or T1) is in the active linear region, acting as a common-emitter stage with emitter degeneration that takes all the current, starving the other, cut-off transistor. The active transistor is loaded with the relatively high emitter resistance RE, which introduces significant negative feedback (emitter degeneration). To prevent saturation of the active transistor, so that the diffusion time that slows the recovery from saturation will not be involved in the logic delay,[4] the emitter and collector resistances are chosen such that at maximum input voltage some voltage is left across the transistor. The residual gain is low (K = RC/RE < 1). The circuit is insensitive to input voltage variations and the transistor stays firmly in the active linear region. The input resistance is high because of the series negative feedback. The cut-off transistor breaks the connection between its input and output, so its input voltage does not affect the output voltage. The input resistance is high again since the base-emitter junction is cut off. Other noteworthy characteristics of the ECL family include the fact that the large current requirement is approximately constant and does not depend significantly on the state of the circuit.
This means that ECL circuits generate relatively little power noise, unlike other logic types which draw more current when switching than when quiescent. In cryptographic applications, ECL circuits are also less susceptible to side-channel attacks such as differential power analysis. The propagation time for this arrangement can be less than a nanosecond, including the signal delay getting on and off the IC package. Some type of ECL has always been the fastest logic family.[32][33] Radiation hardening: while normal commercial-grade chips can withstand 100 gray (10 krad), many ECL devices are operational after 100,000 gray (10 Mrad).[34] ECL circuits usually operate with negative power supplies (the positive end of the supply is connected to ground).[2]: 5 (Other logic families ground the negative end of the power supply.) This is done mainly to minimize the influence of power supply variations on the logic levels. ECL is more sensitive to noise on VCC and is relatively immune to noise on VEE.[35] Because ground should be the most stable voltage in a system, ECL is specified with a positive ground. In this connection, when the supply voltage varies, the voltage drops across the collector resistors change only slightly (in the case of an emitter constant current source, they do not change at all). As the collector resistors are firmly "tied up" to ground, the output voltages "move" slightly (or not at all). If the negative end of the power supply were grounded, the collector resistors would be attached to the positive rail.[2]: 5 As the constant voltage drops across the collector resistors change slightly (or not at all), the output voltages would follow the supply voltage variations and the two circuit parts would act as constant current level shifters. In this case, the voltage divider R1-R2 compensates the voltage variations to some extent.
The positive power supply has another disadvantage: the output voltages would vary slightly (±0.4 V) against the background of a high constant voltage (+3.9 V). Another reason for using a negative power supply is protection of the output transistors from an accidental short circuit developing between output and ground[36] (but the outputs are not protected from a short circuit with the negative rail). The value of the supply voltage is chosen so that sufficient current flows through the compensating diodes D1 and D2 and the voltage drop across the common emitter resistor RE is adequate. ECL circuits available on the open market usually operated with logic levels incompatible with other families. This meant that interoperation between ECL and other logic families, such as the popular TTL family, required additional interface circuits. The fact that the high and low logic levels are relatively close together means that ECL suffers from small noise margins, which can be troublesome. At least one manufacturer, IBM, made ECL circuits for use in the manufacturer's own products; their power supplies were substantially different from those used in the open market.[27] Positive emitter-coupled logic (PECL), also called pseudo-ECL, is a further development of ECL using a positive 5 V supply instead of a negative 5.2 V supply.[38] Low-voltage positive emitter-coupled logic (LVPECL) is a power-optimized version of PECL, using a positive 3.3 V instead of a 5 V supply. PECL and LVPECL are differential-signaling systems and are mainly used in high-speed and clock-distribution circuits. A common misconception is that PECL devices are slightly different from ECL devices; in fact, every ECL device is also a PECL device.[39]
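The resistor arithmetic described earlier (the tail current set by RE, the output swing set by RC, and the residual gain K = RC/RE < 1 that keeps the active transistor out of saturation) can be illustrated numerically. The following Python sketch uses illustrative component values loosely in the style of 10K-series ECL; VBB, VEE, VBE and the resistor values are assumptions of this example, not figures from the text.

```python
# Illustrative ECL gate numbers (all values assumed for this sketch)
V_BE = 0.75      # forward base-emitter drop, volts
V_BB = -1.29     # reference voltage on T3's base, volts
V_EE = -5.2      # negative supply rail, volts
R_E  = 779.0     # common emitter (long-tail) resistor, ohms
R_C  = 220.0     # collector load resistor, ohms

# With one transistor fully conducting, R_E sets the tail current:
V_E = V_BB - V_BE            # emitter node sits near -2.04 V
I_E = (V_E - V_EE) / R_E     # tail current, around 4 mA

# That current through R_C gives the collector (logic) swing:
swing = I_E * R_C            # roughly 0.9 V, same order as the
                             # 0.8 V swing quoted in the text

# Residual gain below 1 means the active transistor never saturates:
K = R_C / R_E
print(f"I_E = {I_E*1000:.2f} mA, swing = {swing:.2f} V, K = {K:.2f}")
```

Because K < 1, even the maximum input voltage leaves voltage across the active transistor, which is exactly the anti-saturation condition the article describes.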
https://en.wikipedia.org/wiki/Emitter-coupled_logic
In digital electronics, the fan-out is the number of gate inputs driven by the output of another single logic gate. In most designs, logic gates are connected to form more complex circuits. While no logic gate input can be fed by more than one output at a time without causing contention, it is common for one output to be connected to several inputs. The technology used to implement logic gates usually allows a certain number of gate inputs to be wired directly together without additional interfacing circuitry. The maximum fan-out of an output measures its load-driving capability: it is the greatest number of inputs of gates of the same type to which the output can be safely connected. Maximum limits on fan-out are usually stated for a given logic family or device in the manufacturer's datasheets. These limits assume that the driven devices are members of the same family. More complex analysis than fan-in and fan-out is required when two different logic families are interconnected. Fan-out is ultimately determined by the maximum source and sink currents of an output and the maximum source and sink currents of the connected inputs; the driving device must be able to supply or sink at its output the sum of the currents needed or provided (depending on whether the output is a logic high or low voltage level) by all of the connected inputs, while maintaining the output voltage specifications. For each logic family, typically a "standard" input is defined by the manufacturer with maximum input currents at each logic level, and the fan-out for an output is computed as the number of these standard inputs that can be driven in the worst case. (Therefore, it is possible that an output can actually drive more inputs than specified by fan-out, even of devices within the same family, if the particular devices being driven sink and/or source less current, as reported on their data sheets, than a "standard" device of that family.)
Ultimately, whether a device has the fan-out capability to drive (with guaranteed reliability) a set of inputs is determined by adding up all the input-low (max.) source currents specified on the datasheets of the driven devices, adding up all the input-high (max.) sink currents of those same devices, and comparing those sums to the driving device's guaranteed maximum output-low sink current and output-high source current specifications, respectively. If both totals are within the driving device's limits, then it has the DC fan-out capacity to drive those inputs on those devices as a group, and otherwise it doesn't, regardless of the manufacturer's given fan-out number. However, for any reputable manufacturer, if this current analysis reveals that the device cannot drive the inputs, the fan-out number will agree. When high-speed signal switching is required, the AC impedance of the output, the inputs, and the conductors between them may significantly reduce the effective drive capacity of the output, and this DC analysis may not be enough. See AC fan-out below. A perfect logic gate would have infinite input impedance and zero output impedance, allowing a gate output to drive any number of gate inputs. However, since real-world fabrication technologies exhibit less than perfect characteristics, a limit will be reached where a gate output cannot drive any more current into subsequent gate inputs – attempting to do so causes the voltage to fall below the level defined for the logic level on that wire, causing errors. The fan-out is the number of inputs that can be connected to an output before the current required by the inputs exceeds the current that can be delivered by the output while still maintaining correct logic levels. The current figures may be different for the logic zero and logic one states, and in that case we must take the pair that gives the lower fan-out. This can be expressed mathematically as

fan-out = min( ⌊ I_OL / I_IL ⌋, ⌊ I_OH / I_IH ⌋ )

where ⌊ ⌋ is the floor function, I_OL and I_OH are the output's maximum low-state sink and high-state source currents, and I_IL and I_IH are a single input's low-state and high-state currents.
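The DC fan-out calculation is easy to mechanize. A minimal Python sketch follows; the currents in the example call are illustrative LS-TTL-style figures assumed for this example, not taken from any particular datasheet, and are kept in integer microamps so the floor is exact integer division.

```python
def dc_fanout(i_ol_ua, i_il_ua, i_oh_ua, i_ih_ua):
    """DC fan-out from datasheet currents, all in microamps.
    Integer division implements the floor in the formula above;
    the lower of the low-state and high-state ratios governs."""
    return min(i_ol_ua // i_il_ua, i_oh_ua // i_ih_ua)

# Assumed LS-TTL-style figures: I_OL = 8 mA, I_IL = 0.4 mA,
# I_OH = 0.4 mA, I_IH = 20 uA
print(dc_fanout(8000, 400, 400, 20))
```

Both ratios happen to be 20 here, matching the rule of thumb that a standard output drives 20 standard inputs of the same family.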
Going on these figures alone, TTL logic gates are limited to a fan-out of perhaps 2 to 10, depending on the type of gate, while CMOS gates have DC fan-outs that are generally far higher than is likely to occur in practical circuits (e.g. using NXP Semiconductors' specifications for their HEF4000 series CMOS chips at 25 °C and 15 V gives a fan-out of 34,000). However, inputs of real gates have capacitance as well as resistance to the power supply rails. This capacitance will slow the output transition of the previous gate and hence increase its propagation delay. As a result, rather than a fixed fan-out, the designer is faced with a trade-off between fan-out and propagation delay (which affects the maximum speed of the overall system). This effect is less marked for TTL systems, which is one reason why TTL maintained a speed advantage over CMOS for many years. Often a single signal (as an extreme example, the clock signal) needs to drive far more than 10 things on a chip. Rather than simply wiring the output of a gate to 1000 different inputs, circuit designers have found that it runs much faster to have a tree (as an extreme example, a clock tree) – for example, have the output of that gate drive 10 buffers (or equivalently a buffer scaled 10 times as big as the minimum-size buffer), those buffers drive 100 other buffers (or equivalently a buffer scaled 100 times as big as the minimum-size buffer), and those final buffers drive the 1000 desired inputs. During physical design, some VLSI design tools do buffer insertion as part of signal integrity design closure. Likewise, rather than simply wiring all 64 output bits to a single 64-input NOR gate to generate the Z flag on a 64-bit ALU, circuit designers have found that it runs much faster to have a tree – for example, have the Z flag generated by an 8-input NOR gate, with each of its inputs generated by an 8-input OR gate.
Reminiscent of radix economy, one estimate for the total delay of such a tree – the number of stages multiplied by the delay of each stage – gives an optimum (minimum delay) when each stage of the tree is scaled by e, approximately 2.7. People who design digital integrated circuits typically insert trees whenever necessary such that the fan-in and fan-out of each and every gate on the chip is between 2 and 10.[1] Dynamic or AC fan-out, rather than DC fan-out, is therefore the primary limiting factor in many practical cases, due to this speed limitation. For example, suppose a microcontroller has 3 devices on its address and data lines, and the microcontroller can drive 35 pF of bus capacitance at its maximum clock speed. If each device has 8 pF of input capacitance, then only 11 pF of trace capacitance is allowable. (Routing traces on printed circuit boards usually have 1–2 pF per inch, so the traces in this case can be 5.5 inches long at most.) If this trace length condition can't be met, then the microcontroller must be run at a slower bus speed for reliable operation, or a buffer chip with higher current drive must be inserted into the circuit. Higher current drive increases speed since I = C·dV/dt; more simply, current is the rate of flow of charge, so increased current charges the capacitance faster, and the voltage across a capacitor is equal to the charge on it divided by the capacitance. So with more current, the voltage changes faster, which allows faster signaling over the bus. Unfortunately, due to the higher speeds of modern devices, IBIS simulations may be required for exact determination of the dynamic fan-out, since dynamic fan-out is not clearly defined in most datasheets.
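The bus-capacitance budget in the microcontroller example above can be written out directly; the 2 pF/inch figure used here is the worst case of the 1–2 pF/inch range quoted in the text.

```python
# AC fan-out budget from the article's example
drive_pf = 35        # capacitance the microcontroller can drive at speed
devices = 3          # chips hanging on the address/data bus
input_pf = 8         # input capacitance per connected device

# Whatever drive capability the inputs don't consume is left for traces:
budget_pf = drive_pf - devices * input_pf     # 35 - 24 = 11 pF

trace_pf_per_inch = 2.0                       # worst-case PCB trace figure
max_trace_inches = budget_pf / trace_pf_per_inch
print(budget_pf, max_trace_inches)
```

If the routed traces exceed this length, the remedies are the ones the text lists: a slower bus clock, or a buffer with more current drive (which, per I = C·dV/dt, charges the same capacitance faster).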
https://en.wikipedia.org/wiki/Fan-out
A field-programmable gate array (FPGA) is a type of configurable integrated circuit that can be repeatedly programmed after manufacturing. FPGAs are a subset of logic devices referred to as programmable logic devices (PLDs). They consist of an array of programmable logic blocks with a connecting grid that can be configured "in the field" to interconnect with other logic blocks to perform various digital functions. FPGAs are often used in limited (low) quantity production of custom-made products, and in research and development, where the higher cost of individual FPGAs is not as important, and where creating and manufacturing a custom circuit would not be feasible. Other applications for FPGAs include the telecommunications, automotive, aerospace, and industrial sectors, which benefit from their flexibility, high signal processing speed, and parallel processing abilities. An FPGA configuration is generally written using a hardware description language (HDL), e.g. VHDL, similar to the ones used for application-specific integrated circuits (ASICs). Circuit diagrams were formerly used to write the configuration. The logic blocks of an FPGA can be configured to perform complex combinational functions, or act as simple logic gates like AND and XOR. In most FPGAs, logic blocks also include memory elements, which may be simple flip-flops or more sophisticated blocks of memory.[1] Many FPGAs can be reprogrammed to implement different logic functions, allowing flexible reconfigurable computing as performed in computer software. FPGAs also have a role in embedded system development due to their capability to start system software development simultaneously with hardware, enable system performance simulations at a very early phase of the development, and allow various system trials and design iterations before finalizing the system architecture.[2] FPGAs are also commonly used during the development of ASICs to speed up the simulation process.
The FPGA industry sprouted from programmable read-only memory (PROM) and programmable logic devices (PLDs). PROMs and PLDs both had the option of being programmed in batches in a factory or in the field (field-programmable).[3] Altera was founded in 1983 and delivered the industry's first reprogrammable logic device in 1984 – the EP300 – which featured a quartz window in the package that allowed users to shine an ultraviolet lamp on the die to erase the EPROM cells that held the device configuration.[4] Xilinx produced the first commercially viable field-programmable gate array in 1985[3] – the XC2064.[5] The XC2064 had programmable gates and programmable interconnects between gates, the beginnings of a new technology and market.[6] The XC2064 had 64 configurable logic blocks (CLBs), with two three-input lookup tables (LUTs).[7] In 1987, the Naval Surface Warfare Center funded an experiment proposed by Steve Casselman to develop a computer that would implement 600,000 reprogrammable gates. Casselman was successful, and a patent related to the system was issued in 1992.[3] Altera and Xilinx continued unchallenged and quickly grew from 1985 to the mid-1990s, when competitors sprouted up, eroding a significant portion of their market share. By 1993, Actel (later Microsemi, now Microchip) was serving about 18 percent of the market.[6] The 1990s were a period of rapid growth for FPGAs, both in circuit sophistication and the volume of production. In the early 1990s, FPGAs were primarily used in telecommunications and networking.
By the end of the decade, FPGAs found their way into consumer, automotive, and industrial applications.[8] By 2013, Altera (31 percent), Xilinx (36 percent) and Actel (10 percent) together represented approximately 77 percent of the FPGA market.[9] Companies like Microsoft have started to use FPGAs to accelerate high-performance, computationally intensive systems (like the data centers that operate their Bing search engine), due to the performance per watt advantage FPGAs deliver.[10] Microsoft began using FPGAs to accelerate Bing in 2014, and in 2018 began deploying FPGAs across other data center workloads for their Azure cloud computing platform.[11] The following timelines indicate progress in different aspects of FPGA design. A design start is a new custom design for implementation on an FPGA. Contemporary FPGAs have ample logic gates and RAM blocks to implement complex digital computations. FPGAs can be used to implement any logical function that an ASIC can perform. The ability to update the functionality after shipping, partial re-configuration of a portion of the design,[18] and the low non-recurring engineering costs relative to an ASIC design (notwithstanding the generally higher unit cost) offer advantages for many applications.[1] As FPGA designs employ very fast I/O rates and bidirectional data buses, it becomes a challenge to verify correct timing of valid data within setup time and hold time.[19] Floor planning helps resource allocation within FPGAs to meet these timing constraints. Some FPGAs have analog features in addition to digital functions. The most common analog feature is a programmable slew rate on each output pin.
This allows the user to set low rates on lightly loaded pins that would otherwise ring or couple unacceptably, and to set higher rates on heavily loaded high-speed channels that would otherwise run too slowly.[20][21] Also common are quartz-crystal oscillator driver circuitry, on-chip RC oscillators, and phase-locked loops with embedded voltage-controlled oscillators used for clock generation and management as well as for high-speed serializer-deserializer (SERDES) transmit clocks and receiver clock recovery. Fairly common are differential comparators on input pins designed to be connected to differential signaling channels. A few mixed-signal FPGAs have integrated peripheral analog-to-digital converters (ADCs) and digital-to-analog converters (DACs) with analog signal conditioning blocks, allowing them to operate as a system on a chip (SoC).[22] Such devices blur the line between an FPGA, which carries digital ones and zeros on its internal programmable interconnect fabric, and a field-programmable analog array (FPAA), which carries analog values on its internal programmable interconnect fabric. The most common FPGA architecture consists of an array of logic blocks called configurable logic blocks (CLBs) or logic array blocks (LABs) (depending on vendor), I/O pads, and routing channels.[1] Generally, all the routing channels have the same width (number of signals). Multiple I/O pads may fit into the height of one row or the width of one column in the array. "An application circuit must be mapped into an FPGA with adequate resources. While the number of logic blocks and I/Os required is easily determined from the design, the number of routing channels needed may vary considerably even among designs with the same amount of logic. For example, a crossbar switch requires much more routing than a systolic array with the same gate count.
Since unused routing channels increase the cost (and decrease the performance) of the FPGA without providing any benefit, FPGA manufacturers try to provide just enough channels so that most designs that will fit in terms of lookup tables (LUTs) and I/Os can be routed. This is determined by estimates such as those derived from Rent's rule or by experiments with existing designs."[23] In general, a logic block consists of a few logical cells. A typical cell consists of a 4-input LUT, a full adder (FA) and a D-type flip-flop. The LUT might be split into two 3-input LUTs. In normal mode those are combined into a 4-input LUT through the first multiplexer (mux). In arithmetic mode, their outputs are fed to the adder. The selection of mode is programmed into the second mux. The output can be either synchronous or asynchronous, depending on the programming of the third mux. In practice, the entire adder or parts of it are stored as functions into the LUTs in order to save space.[24][25][26] Modern FPGA families expand upon the above capabilities to include higher-level functionality fixed in silicon. Having these common functions embedded in the circuit reduces the area required and gives those functions increased performance compared to building them from logical primitives. Examples of these include multipliers, generic DSP blocks, embedded processors, high-speed I/O logic and embedded memories. Higher-end FPGAs can contain high-speed multi-gigabit transceivers and hard IP cores such as processor cores, Ethernet medium access control units, PCI or PCI Express controllers, and external memory controllers. These cores exist alongside the programmable fabric, but they are built out of transistors instead of LUTs, so they have ASIC-level performance and power consumption without consuming a significant amount of fabric resources, leaving more of the fabric free for the application-specific logic.
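Stepping back to the logic cell described above: the way a 4-input LUT can implement any function of its inputs is easy to see in a few lines of Python (a conceptual sketch, not vendor code). The LUT's configuration is just a 16-entry truth table, and the four inputs merely address one stored bit.

```python
def make_lut4(func):
    """Build the 16 'configuration bits' of a 4-input LUT by
    sampling an arbitrary boolean function at every input pattern."""
    return [func((i >> 3) & 1, (i >> 2) & 1, (i >> 1) & 1, i & 1) & 1
            for i in range(16)]

def lut4(config, a, b, c, d):
    """Evaluate the LUT: the inputs simply select one stored bit."""
    return config[(a << 3) | (b << 2) | (c << 1) | d]

# Configure the same physical structure as a 4-input XOR...
cfg = make_lut4(lambda a, b, c, d: a ^ b ^ c ^ d)
print(lut4(cfg, 1, 0, 1, 1))

# ...or as a 4-input AND, by loading different bits:
cfg_and = make_lut4(lambda a, b, c, d: a & b & c & d)
print(lut4(cfg_and, 1, 1, 1, 1))
```

Reconfiguring the FPGA amounts to reloading these stored bits (together with the interconnect configuration), which is why the same fabric can implement arbitrarily different logic functions.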
The multi-gigabit transceivers also contain high-performance signal conditioning circuitry along with high-speed serializers and deserializers, components that cannot be built out of LUTs. Higher-level physical layer (PHY) functionality such as line coding may or may not be implemented alongside the serializers and deserializers in hard logic, depending on the FPGA. An alternate approach to using hard macro processors is to make use of soft processor IP cores that are implemented within the FPGA logic. Nios II, MicroBlaze and Mico32 are examples of popular softcore processors. Many modern FPGAs are programmed at run time, which has led to the idea of reconfigurable computing or reconfigurable systems – CPUs that reconfigure themselves to suit the task at hand. Additionally, new non-FPGA architectures are beginning to emerge. Software-configurable microprocessors such as the Stretch S5000 adopt a hybrid approach by providing an array of processor cores and FPGA-like programmable cores on the same chip. In 2012 the coarse-grained architectural approach was taken a step further by combining the logic blocks and interconnects of traditional FPGAs with embedded microprocessors and related peripherals to form a complete system on a programmable chip. Examples of such hybrid technologies can be found in the Xilinx Zynq-7000 All Programmable SoC,[27] which includes a 1.0 GHz dual-core ARM Cortex-A9 MPCore processor embedded within the FPGA's logic fabric,[28] or in the Altera Arria V FPGA, which includes an 800 MHz dual-core ARM Cortex-A9 MPCore. The Atmel FPSLIC is another such device, which uses an AVR processor in combination with Atmel's programmable logic architecture.
The Microsemi SmartFusion devices incorporate an ARM Cortex-M3 hard processor core (with up to 512 kB of flash and 64 kB of RAM) and analog peripherals such as multi-channel analog-to-digital converters and digital-to-analog converters in their flash memory-based FPGA fabric.[citation needed] Most of the logic inside of an FPGA is synchronous circuitry that requires a clock signal. FPGAs contain dedicated global and regional routing networks for clock and reset, typically implemented as an H tree, so they can be delivered with minimal skew. FPGAs may contain analog phase-locked loop or delay-locked loop components to synthesize new clock frequencies and manage jitter. Complex designs can use multiple clocks with different frequency and phase relationships, each forming separate clock domains. These clock signals can be generated locally by an oscillator or they can be recovered from a data stream. Care must be taken when building clock domain crossing circuitry to avoid metastability. Some FPGAs contain dual port RAM blocks that are capable of working with different clocks, aiding in the construction of FIFOs and dual port buffers that bridge clock domains. To shrink the size and power consumption of FPGAs, vendors such as Tabula and Xilinx have introduced 3D or stacked architectures.[29][30] Following the introduction of its 28 nm 7-series FPGAs, Xilinx said that several of the highest-density parts in those FPGA product lines will be constructed using multiple dies in one package, employing technology developed for 3D construction and stacked-die assemblies. Xilinx's approach stacks several (three or four) active FPGA dies side by side on a silicon interposer – a single piece of silicon that carries passive interconnect.[30][31] The multi-die construction also allows different parts of the FPGA to be created with different process technologies, as the process requirements are different between the FPGA fabric itself and the very high speed 28 Gbit/s serial transceivers.
An FPGA built in this way is called a heterogeneous FPGA.[32] Altera's heterogeneous approach involves using a single monolithic FPGA die and connecting other dies and technologies to the FPGA using Intel's embedded multi-die interconnect bridge (EMIB) technology.[33] To define the behavior of the FPGA, the user provides a design in a hardware description language (HDL) or as a schematic design. The HDL form is more suited to working with large structures because it is possible to specify high-level functional behavior rather than drawing every piece by hand. However, schematic entry can allow for easier visualization of a design and its component modules. Using an electronic design automation tool, a technology-mapped netlist is generated. The netlist can then be fit to the actual FPGA architecture using a process called place and route, usually performed by the FPGA company's proprietary place-and-route software. The user will validate the results using timing analysis, simulation, and other verification and validation techniques. Once the design and validation process is complete, the binary file generated, typically using the FPGA vendor's proprietary software, is used to (re-)configure the FPGA. This file is transferred to the FPGA via a serial interface (JTAG) or to an external memory device such as an EEPROM. The most common HDLs are VHDL and Verilog. National Instruments' LabVIEW graphical programming language (sometimes referred to as G) has an FPGA add-in module available to target and program FPGA hardware. Verilog was created to simplify the design process and make HDL more robust and flexible. Verilog has a C-like syntax, unlike VHDL.[34][self-published source?] To simplify the design of complex systems in FPGAs, there exist libraries of predefined complex functions and circuits that have been tested and optimized to speed up the design process. These predefined circuits are commonly called intellectual property (IP) cores, and are available from FPGA vendors and third-party IP suppliers.
They are rarely free, and typically released under proprietary licenses. Other predefined circuits are available from developer communities such as OpenCores (typically released under free and open source licenses such as the GPL, BSD or similar license). Such designs are known as open-source hardware. In a typical design flow, an FPGA application developer will simulate the design at multiple stages throughout the design process. Initially the RTL description in VHDL or Verilog is simulated by creating test benches to simulate the system and observe results. Then, after the synthesis engine has mapped the design to a netlist, the netlist is translated to a gate-level description where simulation is repeated to confirm the synthesis proceeded without errors. Finally, the design is laid out in the FPGA, at which point propagation delay values can be back-annotated onto the netlist, and the simulation can be run again with these values. More recently, OpenCL (Open Computing Language) is being used by programmers to take advantage of the performance and power efficiencies that FPGAs provide. OpenCL allows programmers to develop code in the C programming language.[35] For further information, see high-level synthesis and C to HDL. Most FPGAs rely on an SRAM-based approach to be programmed. These FPGAs are in-system programmable and re-programmable, but require external boot devices. For example, flash memory or EEPROM devices may load contents into internal SRAM that controls routing and logic. The SRAM approach is based on CMOS. Rarer alternatives to the SRAM approach include: In 2016, long-time industry rivals Xilinx (now part of AMD) and Altera (now part of Intel) were the FPGA market leaders.[37] At that time, they controlled nearly 90 percent of the market.
Both Xilinx (now AMD) and Altera (now Intel) provide proprietary electronic design automation software for Windows and Linux (ISE/Vivado and Quartus) which enables engineers to design, analyze, simulate, and synthesize (compile) their designs.[38][39] In March 2010, Tabula announced their FPGA technology that uses time-multiplexed logic and interconnect, claiming potential cost savings for high-density applications.[40] On March 24, 2015, Tabula officially shut down.[41] On June 1, 2015, Intel announced it would acquire Altera for approximately US$16.7 billion and completed the acquisition on December 30, 2015.[42] On October 27, 2020, AMD announced it would acquire Xilinx,[43] and completed the acquisition, valued at about US$50 billion, in February 2022.[44] In February 2024 Altera became independent of Intel again.[45] Other manufacturers include: An FPGA can be used to solve any problem which is computable. FPGAs can be used to implement a soft microprocessor, such as the Xilinx MicroBlaze or Altera Nios II. But their advantage lies in that they are significantly faster for some applications because of their parallel nature and optimality in terms of the number of gates used for certain processes.[51] FPGAs were originally introduced as competitors to CPLDs to implement glue logic for printed circuit boards. As their size, capabilities, and speed increased, FPGAs took over additional functions to the point where some are now marketed as full systems on chips (SoCs).
Particularly with the introduction of dedicated multipliers into FPGA architectures in the late 1990s, applications that had traditionally been the sole reserve of digital signal processors (DSPs) began to use FPGAs instead.[52][53] The evolution of FPGAs has motivated an increase in the use of these devices, whose architecture allows the development of hardware solutions optimized for complex tasks, such as 3D MRI image segmentation, 3D discrete wavelet transform, tomographic image reconstruction, or PET/MRI systems.[54][55] The developed solutions can perform intensive computation tasks with parallel processing, are dynamically reprogrammable, and have a low cost, all while meeting the hard real-time requirements associated with medical imaging. Another trend in the use of FPGAs is hardware acceleration, where one can use the FPGA to accelerate certain parts of an algorithm and share part of the computation between the FPGA and a general-purpose processor. The search engine Bing is noted for adopting FPGA acceleration for its search algorithm in 2014.[56] As of 2018, FPGAs are seeing increased use as AI accelerators, including Microsoft's Project Catapult,[11] and for accelerating artificial neural networks for machine learning applications. Originally,[when?] FPGAs were reserved for specific vertical applications where the volume of production is small. For these low-volume applications, the premium that companies pay in hardware cost per unit for a programmable chip is more affordable than the development resources spent on creating an ASIC. Often a custom-made chip would be cheaper if made in larger quantities, but FPGAs may be chosen to quickly bring a product to market.
By 2017, new cost and performance dynamics broadened the range of viable applications.[citation needed] Other uses for FPGAs include: FPGAs play a crucial role in modern military communications, especially in systems like the Joint Tactical Radio System (JTRS) and in devices from companies such as Thales and Harris Corporation. Their flexibility and programmability make them ideal for military communications, offering customizable and secure signal processing. In the JTRS, used by the US military, FPGAs provide adaptability and real-time processing, crucial for meeting various communication standards and encryption methods.[63] FPGAs have both advantages and disadvantages as compared to ASICs or secure microprocessors, concerning hardware security. FPGAs' flexibility makes malicious modifications during fabrication a lower risk.[64] Previously, for many FPGAs, the design bitstream was exposed while the FPGA loads it from external memory (typically on every power-on). All major FPGA vendors now offer a spectrum of security solutions to designers, such as bitstream encryption and authentication. For example, Altera and Xilinx offer AES encryption (up to 256-bit) for bitstreams stored in an external flash memory. Physical unclonable functions (PUFs) are integrated circuits that have their own unique signatures, due to processing, and can also be used to secure FPGAs while taking up very little hardware space.[65] FPGAs that store their configuration internally in nonvolatile flash memory, such as Microsemi's ProAsic 3 or Lattice's XP2 programmable devices, do not expose the bitstream and do not need encryption. In addition, flash memory for a lookup table provides single event upset protection for space applications.[clarification needed] Customers wanting a higher guarantee of tamper resistance can use write-once, antifuse FPGAs from vendors such as Microsemi.
With its Stratix 10 FPGAs and SoCs, Altera introduced a Secure Device Manager and physical unclonable functions to provide high levels of protection against physical attacks.[66] In 2012, researchers Sergei Skorobogatov and Christopher Woods demonstrated that some FPGAs can be vulnerable to hostile intent. They discovered that a critical backdoor vulnerability had been manufactured in silicon as part of the Actel/Microsemi ProAsic 3, making it vulnerable on many levels, such as reprogramming crypto and access keys, accessing the unencrypted bitstream, modifying low-level silicon features, and extracting configuration data.[67] In 2020, a critical vulnerability (named "Starbleed") was discovered in all Xilinx 7-series FPGAs that rendered bitstream encryption useless. There is no workaround, and Xilinx did not produce a hardware revision. Ultrascale and later devices, already on the market at the time, were not affected. Historically, FPGAs have been slower, less energy efficient and generally achieved less functionality than their fixed ASIC counterparts. A study from 2006 showed that designs implemented on FPGAs need on average 40 times as much area, draw 12 times as much dynamic power, and run at one third the speed of corresponding ASIC implementations.[68] Advantages of FPGAs include the ability to re-program when already deployed (i.e. "in the field") to fix bugs, and often shorter time to market and lower non-recurring engineering costs. Vendors can also take a middle road via FPGA prototyping: developing their prototype hardware on FPGAs, but manufacturing their final version as an ASIC so that it can no longer be modified after the design has been committed. This is often also the case with new processor designs.[69] Some FPGAs have the capability of partial re-configuration that lets one portion of the device be re-programmed while other portions continue running.[70][71] The primary differences between complex programmable logic devices (CPLDs) and FPGAs are architectural.
A CPLD has a comparatively restrictive structure consisting of one or more programmable sum-of-products logic arrays feeding a relatively small number of clocked registers. As a result, CPLDs are less flexible, but have the advantage of more predictable timing delays and a higher logic-to-interconnect ratio.[citation needed] FPGA architectures, on the other hand, are dominated by interconnect. This makes them far more flexible (in terms of the range of designs that are practical for implementation on them) but also far more complex to design for, or at least requiring more complex electronic design automation (EDA) software. In practice, the distinction between FPGAs and CPLDs is often one of size, as FPGAs are usually much larger in terms of resources than CPLDs. Typically only FPGAs contain more complex embedded functions such as adders, multipliers, memory, and serializer/deserializers. Another common distinction is that CPLDs contain embedded flash memory to store their configuration while FPGAs usually (but not always) require external non-volatile memory. When a design requires simple instant-on (logic is already configured at power-up), CPLDs are generally preferred. For most other applications FPGAs are generally preferred. Sometimes both CPLDs and FPGAs are used in a single system design. In those designs, CPLDs generally perform glue logic functions and are responsible for "booting" the FPGA as well as controlling reset and the boot sequence of the complete circuit board. Therefore, depending on the application, it may be judicious to use both FPGAs and CPLDs in a single design.[72]
https://en.wikipedia.org/wiki/Field-programmable_gate_array
In electronics, flip-flops and latches are circuits that have two stable states that can store state information – a bistable multivibrator. The circuit can be made to change state by signals applied to one or more control inputs and will output its state (often along with its logical complement too). It is the basic storage element in sequential logic. Flip-flops and latches are fundamental building blocks of digital electronics systems used in computers, communications, and many other types of systems. Flip-flops and latches are used as data storage elements to store a single bit (binary digit) of data; one of its two states represents a "one" and the other represents a "zero". Such data storage can be used for storage of state, and such a circuit is described as sequential logic in electronics. When used in a finite-state machine, the output and next state depend not only on its current input, but also on its current state (and hence, previous inputs). It can also be used for counting of pulses, and for synchronizing variably-timed input signals to some reference timing signal. The term flip-flop has historically referred generically to both level-triggered (asynchronous, transparent, or opaque) and edge-triggered (synchronous, or clocked) circuits that store a single bit of data using gates.[1] Modern authors reserve the term flip-flop exclusively for edge-triggered storage elements and latches for level-triggered ones.[2][3] The terms "edge-triggered" and "level-triggered" may be used to avoid ambiguity.[4] When a level-triggered latch is enabled it becomes transparent, but an edge-triggered flip-flop's output only changes on a clock edge (either positive going or negative going). Different types of flip-flops and latches are available as integrated circuits, usually with multiple elements per chip. For example, 74HC75 is a quadruple transparent latch in the 7400 series. The first electronic latch was invented in 1918 by the British physicists William Eccles and F. W.
Jordan.[5][6] It was initially called the Eccles–Jordan trigger circuit and consisted of two active elements (vacuum tubes).[7] The design was used in the 1943 British Colossus codebreaking computer,[8] and such circuits and their transistorized versions were common in computers even after the introduction of integrated circuits, though latches and flip-flops made from logic gates are also common now.[9][10] Early latches were known variously as trigger circuits or multivibrators. According to P. L. Lindley, an engineer at the US Jet Propulsion Laboratory, the flip-flop types detailed below (SR, D, T, JK) were first discussed in a 1954 UCLA course on computer design by Montgomery Phister, and then appeared in his book Logical Design of Digital Computers.[11][12] Lindley was at the time working at Hughes Aircraft under Eldred Nelson, who had coined the term JK for a flip-flop which changed states when both inputs were on (a logical "one"). The other names were coined by Phister. They differ slightly from some of the definitions given below. Lindley explains that he heard the story of the JK flip-flop from Eldred Nelson, who is responsible for coining the term while working at Hughes Aircraft. Flip-flops in use at Hughes at the time were all of the type that came to be known as J-K. In designing a logical system, Nelson assigned letters to flip-flop inputs as follows: #1: A & B, #2: C & D, #3: E & F, #4: G & H, #5: J & K. Nelson used the notations "j-input" and "k-input" in a patent application filed in 1953.[13] Transparent or asynchronous latches can be built around a single pair of cross-coupled inverting elements: vacuum tubes, bipolar transistors, field-effect transistors, inverters, and inverting logic gates have all been used in practical circuits. Clocked flip-flops are specially designed for synchronous systems; such devices ignore their inputs except at the transition of a dedicated clock signal (known as clocking, pulsing, or strobing).
Clocking causes the flip-flop either to change or to retain its output signal based upon the values of the input signals at the transition. Some flip-flops change output on the rising edge of the clock, others on the falling edge. Since the elementary amplifying stages are inverting, two stages can be connected in succession (as a cascade) to form the needed non-inverting amplifier. In this configuration, each amplifier may be considered as an active inverting feedback network for the other inverting amplifier. Thus the two stages are connected in a non-inverting loop, although the circuit diagram is usually drawn as a symmetric cross-coupled pair (both drawings are initially introduced in the Eccles–Jordan patent). Flip-flops and latches can be divided into common types: SR ("set-reset"), D ("data"), T ("toggle"), and JK (see History section above). The behavior of a particular type can be described by the characteristic equation that derives the "next" output (Qnext) in terms of the input signal(s) and/or the current output, Q. When using static gates as building blocks, the most fundamental latch is the asynchronous Set-Reset (SR) latch. Its two inputs S and R can set the internal state to 1 using the combination S=1 and R=0, and can reset the internal state to 0 using the combination S=0 and R=1.[note 1] The SR latch can be constructed from a pair of cross-coupled NOR or NAND logic gates. The stored bit is present on the output marked Q. It is convenient to think of NAND, NOR, AND and OR as controlled operations, where one input is chosen as the control input and the other bit as the input to be processed depending on the state of the control. Then, all of these gates have one control value that ignores the input (x) and outputs a constant value, while the other control value lets the input pass (maybe complemented): Essentially, they can all be used as switches that either set a specific value or let an input value pass.
The SR NOR latch consists of two parallel NOR gates where the output of each NOR is also fanned out into one input of the other NOR, as shown in the figure. We call these output-to-input connections feedback inputs, or simply feedbacks. The remaining inputs we will use as control inputs as explained above. Notice that at this point, because everything is symmetric, it does not matter to which inputs the outputs are connected. We now break the symmetry by choosing which of the remaining control inputs will be our set and reset, and we can call "set NOR" the NOR gate with the set control and "reset NOR" the NOR with the reset control; in the figures the set NOR is the bottom one and the reset NOR is the top one. The output of the reset NOR will be our stored bit Q, while we will see that the output of the set NOR stores its complement Q̄. To derive the behavior of the SR NOR latch, consider S and R as control inputs and remember that, from the equations above, set and reset NOR with control 1 will fix their outputs to 0, while set and reset NOR with control 0 will act as a NOT gate. With this it is now possible to derive the behavior of the SR latch as simple conditions (instead of, for example, assigning values to each line and seeing how they propagate): Note: X means don't care, that is, either 0 or 1 is a valid value. The R = S = 1 combination is called a restricted combination or a forbidden state because, as both NOR gates then output zeros, it breaks the logical equation Q = not Q̄. The combination is also inappropriate in circuits where both inputs may go low simultaneously (i.e. a transition from restricted to hold). The output could remain in a metastable state and may eventually lock at either 1 or 0 depending on the propagation time relations between the gates (a race condition). To overcome the restricted combination, one can add gates to the inputs that would convert (S, R) = (1, 1) to one of the non-restricted combinations.
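The cross-coupled NOR pair above can be sketched in software by iterating the feedback loop until it settles. A minimal Python model (the fixed iteration count is an assumption sufficient for stable inputs; it does not model metastability):

```python
def nor(a, b):
    """Two-input NOR gate."""
    return 0 if (a or b) else 1

def sr_nor_latch(s, r, q, q_bar):
    """Iterate the cross-coupled NOR pair until the feedback settles."""
    for _ in range(4):  # a few passes suffice for stable S, R inputs
        q_new = nor(r, q_bar)   # reset NOR drives Q
        q_bar_new = nor(s, q)   # set NOR drives Q-bar
        if (q_new, q_bar_new) == (q, q_bar):
            break
        q, q_bar = q_new, q_bar_new
    return q, q_bar

q, qb = sr_nor_latch(1, 0, 0, 1)   # set
assert (q, qb) == (1, 0)
q, qb = sr_nor_latch(0, 0, q, qb)  # hold: state is retained
assert (q, qb) == (1, 0)
q, qb = sr_nor_latch(0, 1, q, qb)  # reset
assert (q, qb) == (0, 1)
```

Note that with S = R = 1 both gates output 0, matching the forbidden state described above, where Q is no longer the complement of Q̄.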
That can be: This is done in nearly every programmable logic controller. Alternatively, the restricted combination can be made to toggle the output. The result is the JK latch. The characteristic equation for the SR latch is: Qnext = S + R̄Q, where A + B means (A or B), AB means (A and B), and R̄ means (not R). Another expression is: Qnext = R̄(S + Q). The circuit shown below is a basic NAND latch. The inputs are also generally designated S and R for Set and Reset respectively. Because the NAND inputs must normally be logic 1 to avoid affecting the latching action, the inputs are considered to be inverted in this circuit (or active low). The circuit uses the same feedback as the SR NOR latch, just replacing NOR gates with NAND gates, to "remember" and retain its logical state even after the controlling input signals have changed. Again, recall that a 0-controlled NAND always outputs 1, while a 1-controlled NAND acts as a NOT gate. When the S and R inputs are both high, feedback maintains the Q outputs in the previous state. When either is zero, that gate fixes its output bit to 1 while the other adapts to the complement. S = R = 0 produces the invalid state. From a teaching point of view, SR latches drawn as a pair of cross-coupled components (transistors, gates, tubes, etc.) are often hard to understand for beginners. A didactically easier explanation is to draw the latch as a single feedback loop instead of the cross-coupling. The following is an SR latch built with an AND gate with one inverted input and an OR gate. Note that the inverter is not needed for the latch functionality, but rather to make both inputs high-active. Note that the SR AND-OR latch has the benefit that S = 1, R = 1 is well defined. In the above version of the SR AND-OR latch, it gives priority to the R signal over the S signal. If priority of S over R is needed, this can be achieved by connecting output Q to the output of the OR gate instead of the output of the AND gate.
The SR AND-OR latch is easier to understand, because both gates can be explained in isolation, again with the control view of AND and OR from above. When neither S nor R is set, both the OR gate and the AND gate are in "hold mode", i.e., they let the input through: their output is the input from the feedback loop. When input S = 1, the OR gate outputs 1, regardless of the other input from the feedback loop ("set mode"). When input R = 1, the AND gate outputs 0, regardless of the other input from the feedback loop ("reset mode"). And since the AND gate takes the output of the OR gate as input, R has priority over S. Latches drawn as cross-coupled gates may look less intuitive, as the behavior of one gate appears to be intertwined with the other gate. The standard NOR or NAND latches could also be re-drawn with the feedback loop, but in their case the feedback loop does not show the same signal value throughout the whole loop. However, the SR AND-OR latch has the drawback that it would need an extra inverter if an inverted Q output is needed. Note that the SR AND-OR latch can be transformed into the SR NOR latch using logic transformations: inverting the output of the OR gate and also the second input of the AND gate, and connecting the inverted Q output between these two added inverters; the AND gate with both inputs inverted is equivalent to a NOR gate according to De Morgan's laws. The JK latch is much less frequently used than the JK flip-flop. The JK latch follows the following state table: Hence, the JK latch is an SR latch that is made to toggle its output (oscillate between 0 and 1) when passed the input combination of 11.[16] Unlike the JK flip-flop, the 11 input combination for the JK latch is not very useful because there is no clock that directs toggling.[17] Latches are designed to be transparent. That is, input signal changes cause immediate changes in output.
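The single-loop AND-OR view lends itself to a one-line model. A minimal Python sketch, evaluating one pass of the loop (the OR gate sets, the AND gate with inverted R resets, and R wins because the AND gate comes last):

```python
def sr_and_or_latch(s, r, q):
    """One evaluation of the AND-OR feedback loop.
    OR(s, q) implements set/hold; AND with NOT r implements reset.
    R has priority because the AND gate sits after the OR gate."""
    return int((s or q) and not r)

q = 0
q = sr_and_or_latch(1, 0, q)  # set
assert q == 1
q = sr_and_or_latch(0, 0, q)  # hold
assert q == 1
assert sr_and_or_latch(1, 1, q) == 0  # S = R = 1 is well defined: R wins
```

Unlike the cross-coupled NOR model, no iteration is needed: the loop carries a single signal value, which is exactly the didactic advantage described above.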
Additional logic can be added to a transparent latch to make it non-transparent or opaque when another input (an "enable" input) is not asserted. When several transparent latches follow each other, if they are all transparent at the same time, signals will propagate through them all. However, following a transparent-high latch by a transparent-low latch (or vice versa) causes the state and output to only change on clock edges, forming what is called a master–slave flip-flop. A gated SR latch can be made by adding a second level of NAND gates to an inverted (active-low) S̄R̄ latch. The extra NAND gates further invert the inputs, so the S̄R̄ latch becomes a gated SR latch (an SR latch would transform into a gated S̄R̄ latch with inverted enable). Alternatively, a gated SR latch (with non-inverting enable) can be made by adding a second level of AND gates to an SR latch. With E high (enable true), the signals can pass through the input gates to the encapsulated latch; all signal combinations except for (0, 0) = hold then immediately reproduce on the (Q, Q̄) output, i.e. the latch is transparent. With E low (enable false) the latch is closed (opaque) and remains in the state it was left the last time E was high. A periodic enable input signal may be called a write strobe. When the enable input is a clock signal, the latch is said to be level-sensitive (to the level of the clock signal), as opposed to edge-sensitive like the flip-flops below. This latch exploits the fact that, in the two active input combinations (01 and 10) of a gated SR latch, R is the complement of S. The input NAND stage converts the two D input states (0 and 1) to these two input combinations for the next S̄R̄ latch by inverting the data input signal. The low state of the enable signal produces the inactive "11" combination. Thus a gated D-latch may be considered as a one-input synchronous SR latch. This configuration prevents application of the restricted input combination. It is also known as a transparent latch, data latch, or simply gated latch.
It has a data input and an enable signal (sometimes named clock, or control). The word transparent comes from the fact that, when the enable input is on, the signal propagates directly through the circuit, from the input D to the output Q. Gated D-latches are also level-sensitive with respect to the level of the clock or enable signal. Transparent latches are typically used as I/O ports or in asynchronous systems, or in synchronous two-phase systems (synchronous systems that use a two-phase clock), where two latches operating on different clock phases prevent data transparency as in a master–slave flip-flop. The truth table below shows that when the enable/clock input is 0, the D input has no effect on the output. When E/C is high, the output equals D. The classic gated latch designs have some undesirable characteristics.[18] They require dual-rail logic or an inverter. The input-to-output propagation may take up to three gate delays. The input-to-output propagation is not constant – some outputs take two gate delays while others take three. Designers looked for alternatives.[19] A successful alternative is the Earle latch. It requires only a single data input, and its output takes a constant two gate delays. In addition, the two gate levels of the Earle latch can, in some cases, be merged with the last two gate levels of the circuits driving the latch, because many common computational circuits have an OR layer followed by an AND layer as their last two levels. Merging the latch function can implement the latch with no additional gate delays.[18] The merge is commonly exploited in the design of pipelined computers and, in fact, was originally developed by John G. Earle to be used in the IBM System/360 Model 91 for that purpose.[20] The Earle latch is hazard free.[21] If the middle NAND gate is omitted, then one gets the polarity hold latch, which is commonly used because it demands less logic.[21][22] However, it is susceptible to logic hazard.
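The gated D-latch's behavior ("transparent when enabled, opaque otherwise") reduces to a single conditional. A minimal Python sketch of the truth table described above:

```python
def gated_d_latch(d, enable, q):
    """Transparent when enable is high (Q follows D); holds q when enable is low."""
    return d if enable else q

q = 0
q = gated_d_latch(1, 1, q)  # E/C high: output equals D
assert q == 1
q = gated_d_latch(0, 0, q)  # E/C low: D has no effect, Q holds
assert q == 1
```

This level-sensitive hold-or-follow behavior is what the master–slave construction later combines, in two out-of-phase copies, to obtain edge-triggered capture.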
Intentionally skewing the clock signal can avoid the hazard.[22] The D flip-flop is widely used, and known as a "data" flip-flop. The D flip-flop captures the value of the D input at a definite portion of the clock cycle (such as the rising edge of the clock). That captured value becomes the Q output. At other times, the output Q does not change.[23][24] The D flip-flop can be viewed as a memory cell, a zero-order hold, or a delay line.[25] Truth table: (X denotes a don't care condition, meaning the signal is irrelevant) Most D-type flip-flops in ICs have the capability to be forced to the set or reset state (which ignores the D and clock inputs), much like an SR flip-flop. Usually, the illegal S = R = 1 condition is resolved in D-type flip-flops. Setting S = R = 0 makes the flip-flop behave as described above. Here is the truth table for the other possible S and R configurations: These flip-flops are very useful, as they form the basis for shift registers, which are an essential part of many electronic devices. The advantage of the D flip-flop over the D-type "transparent latch" is that the signal on the D input pin is captured the moment the flip-flop is clocked, and subsequent changes on the D input will be ignored until the next clock event. An exception is that some flip-flops have a "reset" signal input, which will reset Q (to zero), and may be either asynchronous or synchronous with the clock. The above circuit shifts the contents of the register to the right, one bit position on each active transition of the clock. The input X is shifted into the leftmost bit position. This circuit[26] consists of two stages implemented by S̄R̄ NAND latches. The input stage (the two latches on the left) processes the clock and data signals to ensure correct input signals for the output stage (the single latch on the right).
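The shift-register behavior described above can be sketched with a software model of a rising-edge-triggered D flip-flop. A minimal Python example (the four-bit width and the `shift_right` helper are illustrative choices, not part of the source):

```python
class DFlipFlop:
    """Rising-edge-triggered D flip-flop model."""
    def __init__(self):
        self.q = 0
        self._prev_clk = 0

    def tick(self, d, clk):
        if clk == 1 and self._prev_clk == 0:  # rising edge: capture D
            self.q = d
        self._prev_clk = clk
        return self.q

def shift_right(register, x):
    """Shift X into the leftmost bit; every stage copies its left neighbor.
    All D inputs are sampled before the clock edge, as in real hardware."""
    inputs = [x] + [ff.q for ff in register[:-1]]
    for ff, d in zip(register, inputs):
        ff.tick(d, 0)  # clock low
        ff.tick(d, 1)  # rising edge
    return [ff.q for ff in register]

reg = [DFlipFlop() for _ in range(4)]
assert shift_right(reg, 1) == [1, 0, 0, 0]
assert shift_right(reg, 0) == [0, 1, 0, 0]
```

Sampling every D input before applying the edge mirrors the key property stated above: the value on D at the clock edge is captured, and later changes are ignored until the next edge.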
If the clock is low, both the output signals of the input stage are high regardless of the data input; the output latch is unaffected and it stores the previous state. When the clock signal changes from low to high, only one of the output voltages (depending on the data signal) goes low and sets/resets the output latch: if D = 0, the lower output becomes low; if D = 1, the upper output becomes low. If the clock signal continues staying high, the outputs keep their states regardless of the data input and force the output latch to stay in the corresponding state, as the input logical zero (of the output stage) remains active while the clock is high. Hence the role of the output latch is to store the data only while the clock is low. The circuit is closely related to the gated D latch, as both circuits convert the two D input states (0 and 1) to two input combinations (01 and 10) for the output S̄R̄ latch by inverting the data input signal (both circuits split the single D signal in two complementary S and R signals). The difference is that NAND logical gates are used in the gated D latch, while S̄R̄ NAND latches are used in the positive-edge-triggered D flip-flop. The role of these latches is to "lock" the active output producing low voltage (a logical zero); thus the positive-edge-triggered D flip-flop can also be thought of as a gated D latch with latched input gates. A master–slave D flip-flop is created by connecting two gated D latches in series, and inverting the enable input to one of them. It is called master–slave because the master latch controls the slave latch's output value Q and forces the slave latch to hold its value whenever the slave latch is enabled, as the slave latch always copies its new value from the master latch and changes its value only in response to a change in the value of the master latch and clock signal.
For a positive-edge-triggered master–slave D flip-flop, when the clock signal is low (logical 0) the "enable" seen by the first or "master" D latch (the inverted clock signal) is high (logical 1). This allows the "master" latch to store the input value when the clock signal transitions from low to high. As the clock signal goes high (0 to 1) the inverted "enable" of the first latch goes low (1 to 0) and the value seen at the input to the master latch is "locked". Nearly simultaneously, the twice inverted "enable" of the second or "slave" D latch transitions from low to high (0 to 1) with the clock signal. This allows the signal captured at the rising edge of the clock by the now "locked" master latch to pass through the "slave" latch. When the clock signal returns to low (1 to 0), the output of the "slave" latch is "locked", and the value seen at the last rising edge of the clock is held while the "master" latch begins to accept new values in preparation for the next rising clock edge. Removing the leftmost inverter in the circuit creates a D-type flip-flop that strobes on the falling edge of a clock signal. This has a truth table like this: Flip-flops that read in a new value on the rising and the falling edge of the clock are called dual-edge-triggered flip-flops. Such a flip-flop may be built using two single-edge-triggered D-type flip-flops and a multiplexer, or by using two single-edge-triggered D-type flip-flops and three XOR gates. An efficient functional alternative to a D flip-flop can be made with dynamic circuits (where information is stored in a capacitance) as long as it is clocked often enough; while not a true flip-flop, it is still called a flip-flop for its functional role. While the master–slave D element is triggered on the edge of a clock, its components are each triggered by clock levels. The "edge-triggered D flip-flop", as it is called even though it is not a true flip-flop, does not have the master–slave properties.
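The master–slave construction can be modelled directly as two gated D latches with inverted enables. This is a behavioural sketch with illustrative class names, not a gate-level simulation:

```python
class GatedDLatch:
    """Transparent D latch: follows D while enable is high, holds otherwise."""

    def __init__(self):
        self.q = 0

    def update(self, enable, d):
        if enable:
            self.q = d
        return self.q

class MasterSlaveDFlipFlop:
    """Positive-edge-triggered flip-flop from two latches with inverted enables."""

    def __init__(self):
        self.master = GatedDLatch()
        self.slave = GatedDLatch()

    def update(self, clk, d):
        # The master is transparent while the clock is low (inverted enable);
        # the slave is transparent while the clock is high, copying the
        # value the master captured at the rising edge.
        self.master.update(1 - clk, d)
        self.slave.update(clk, self.master.q)
        return self.slave.q

ms = MasterSlaveDFlipFlop()
ms.update(0, 1)          # clock low: master follows D, output unchanged
print(ms.update(1, 1))   # rising edge -> 1
print(ms.update(1, 0))   # clock high: change on D is ignored -> 1
```

The composite behaves edge-triggered even though, as the text notes, each component latch is level-triggered.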
Edge-triggered D flip-flops are often implemented in integrated high-speed operations using dynamic logic. This means that the digital output is stored on parasitic device capacitance while the device is not transitioning. This design facilitates resetting by simply discharging one or more internal nodes. A common dynamic flip-flop variety is the true single-phase clock (TSPC) type, which performs the flip-flop operation with little power and at high speeds. However, dynamic flip-flops will typically not work at static or low clock speeds: given enough time, leakage paths may discharge the parasitic capacitance enough to cause the flip-flop to enter invalid states. If the T input is high, the T flip-flop changes state ("toggles") whenever the clock input is strobed. If the T input is low, the flip-flop holds the previous value. This behavior is described by the characteristic equation: and can be described in a truth table: When T is held high, the toggle flip-flop divides the clock frequency by two; that is, if the clock frequency is 4 MHz, the output frequency obtained from the flip-flop will be 2 MHz. This "divide by" feature has application in various types of digital counters. A T flip-flop can also be built using a JK flip-flop (the J and K pins are connected together and act as T) or a D flip-flop (T input XOR the previous Q drives the D input). The JK flip-flop augments the behavior of the SR flip-flop (J: Set, K: Reset) by interpreting the J = K = 1 condition as a "flip" or toggle command. Specifically, the combination J = 1, K = 0 is a command to set the flip-flop; the combination J = 0, K = 1 is a command to reset the flip-flop; and the combination J = K = 1 is a command to toggle the flip-flop, i.e., change its output to the logical complement of its current value. Setting J = K = 0 maintains the current state. To synthesize a D flip-flop, simply set K equal to the complement of J (input J will act as input D). Similarly, to synthesize a T flip-flop, set K equal to J.
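As a quick illustration of the toggle behaviour, the characteristic equation of the T flip-flop (Q_next = T XOR Q) can be iterated to show the divide-by-two effect. A Python sketch with illustrative names:

```python
def t_flipflop(q, t):
    """Characteristic equation of the T flip-flop: Q_next = T XOR Q."""
    return t ^ q

# With T held high, Q toggles on every clock strobe, dividing the
# clock frequency by two (e.g. a 4 MHz clock yields a 2 MHz output).
q = 0
outputs = []
for _ in range(8):            # eight clock strobes
    q = t_flipflop(q, 1)
    outputs.append(q)
print(outputs)                # -> [1, 0, 1, 0, 1, 0, 1, 0]

# With T low, the flip-flop holds its previous value.
assert t_flipflop(1, 0) == 1
```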
The JK flip-flop is therefore a universal flip-flop, because it can be configured to work as an SR flip-flop, a D flip-flop, or a T flip-flop. The characteristic equation of the JK flip-flop is: and the corresponding truth table is: The input must be held steady in a period around the rising edge of the clock known as the aperture. Imagine taking a picture of a frog on a lily pad.[28] Suppose the frog then jumps into the water. If you take a picture of the frog as it jumps into the water, you will get a blurry picture; it's not clear which state the frog was in. But if you take a picture while the frog sits steadily on the pad (or is steadily in the water), you will get a clear picture. In the same way, the input to a flip-flop must be held steady during the aperture of the flip-flop. Setup time is the minimum amount of time the data input should be held steady before the clock event, so that the data is reliably sampled by the clock. Hold time is the minimum amount of time the data input should be held steady after the clock event, so that the data is reliably sampled by the clock. Aperture is the sum of setup and hold time. The data input should be held steady throughout this time period.[28] Recovery time is the minimum amount of time the asynchronous set or reset input should be inactive before the clock event, so that the data is reliably sampled by the clock. The recovery time for the asynchronous set or reset input is thereby similar to the setup time for the data input. Removal time is the minimum amount of time the asynchronous set or reset input should be inactive after the clock event, so that the data is reliably sampled by the clock. The removal time for the asynchronous set or reset input is thereby similar to the hold time for the data input.
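Returning to the JK flip-flop, its characteristic equation (Q_next = J·Q' + K'·Q) can be checked against the four commands described above: hold, set, reset, and toggle. A small Python sketch with illustrative names:

```python
def jk_flipflop(q, j, k):
    """Characteristic equation of the JK flip-flop: Q_next = J·Q' + K'·Q."""
    return (j & (1 - q)) | ((1 - k) & q)

# The four commands, checked for both current states of Q:
for q in (0, 1):
    assert jk_flipflop(q, 0, 0) == q        # J = K = 0: hold
    assert jk_flipflop(q, 1, 0) == 1        # J = 1, K = 0: set
    assert jk_flipflop(q, 0, 1) == 0        # J = 0, K = 1: reset
    assert jk_flipflop(q, 1, 1) == 1 - q    # J = K = 1: toggle
print("JK behaviour verified")
```

Tying J and K together reproduces the T flip-flop, and K = NOT J reproduces the D flip-flop, consistent with the synthesis recipes given above.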
Short impulses applied to asynchronous inputs (set, reset) should not be applied completely within the recovery–removal period, or else it becomes entirely indeterminable whether the flip-flop will transition to the appropriate state. In another case, where an asynchronous signal simply makes one transition that happens to fall within the recovery/removal window, the flip-flop will eventually transition to the appropriate state, but a very short glitch may or may not appear on the output, depending on the synchronous input signal. This second situation may or may not have significance to a circuit design. Set and Reset (and other) signals may be either synchronous or asynchronous and therefore may be characterized with either setup/hold or recovery/removal times, and synchronicity is very dependent on the design of the flip-flop. Differentiation between setup/hold and recovery/removal times is often necessary when verifying the timing of larger circuits because asynchronous signals may be found to be less critical than synchronous signals. The differentiation offers circuit designers the ability to define the verification conditions for these types of signals independently. Flip-flops are subject to a problem called metastability, which can happen when two inputs, such as data and clock or clock and reset, are changing at about the same time. When the order is not clear, within appropriate timing constraints, the result is that the output may behave unpredictably, taking many times longer than normal to settle to one state or the other, or even oscillating several times before settling. Theoretically, the time to settle down is not bounded.
In a computer system, this metastability can cause corruption of data or a program crash if the state is not stable before another circuit uses its value; in particular, if two different logical paths use the output of a flip-flop, one path can interpret it as a 0 and the other as a 1 when it has not resolved to a stable state, putting the machine into an inconsistent state.[29] The metastability in flip-flops can be avoided by ensuring that the data and control inputs are held valid and constant for specified periods before and after the clock pulse, called the setup time (tsu) and the hold time (th) respectively. These times are specified in the data sheet for the device, and are typically between a few nanoseconds and a few hundred picoseconds for modern devices. Depending upon the flip-flop's internal organization, it is possible to build a device with a zero (or even negative) setup or hold time requirement, but not both simultaneously. Unfortunately, it is not always possible to meet the setup and hold criteria, because the flip-flop may be connected to a real-time signal that could change at any time, outside the control of the designer. In this case, the best the designer can do is to reduce the probability of error to a certain level, depending on the required reliability of the circuit. One technique for suppressing metastability is to connect two or more flip-flops in a chain, so that the output of each one feeds the data input of the next, and all devices share a common clock. With this method, the probability of a metastable event can be reduced to a negligible value, but never to zero. The probability of metastability gets closer and closer to zero as the number of flip-flops connected in series is increased. The number of flip-flops being cascaded is referred to as the "ranking"; "dual-ranked" flip-flops (two flip-flops in series) are a common situation.
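The synchronizer chain just described can be sketched abstractly: on each clock edge every flip-flop in the chain captures the output of the stage before it, so an asynchronous input only reaches the synchronized output after one edge per rank. This Python sketch models only the data flow, not the probabilistic resolution of metastability; names are illustrative:

```python
def synchronizer_step(stages, async_in):
    """One rising clock edge of an n-rank synchronizer chain.

    `stages` holds the current outputs of the cascaded flip-flops
    (last element is the synchronized output); on each edge every
    flip-flop captures the output of the stage before it.
    """
    return [async_in] + stages[:-1]

# Dual-ranked synchronizer: an asynchronous input only appears at
# the synchronized output after two clock edges, giving a possibly
# metastable first stage a full clock period to settle.
stages = [0, 0]
stages = synchronizer_step(stages, 1)
print(stages[-1])   # -> 0 (not yet visible)
stages = synchronizer_step(stages, 1)
print(stages[-1])   # -> 1
```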
So-called metastable-hardened flip-flops are available, which work by reducing the setup and hold times as much as possible, but even these cannot eliminate the problem entirely. This is because metastability is more than simply a matter of circuit design. When the transitions in the clock and the data are close together in time, the flip-flop is forced to decide which event happened first. However fast the device is made, there is always the possibility that the input events will be so close together that it cannot detect which one happened first. It is therefore logically impossible to build a perfectly metastable-proof flip-flop. Flip-flops are sometimes characterized for a maximum settling time (the maximum time they will remain metastable under specified conditions). In this case, dual-ranked flip-flops that are clocked slower than the maximum allowed metastability time will provide proper conditioning for asynchronous (e.g., external) signals. Another important timing value for a flip-flop is the clock-to-output delay (common symbol in data sheets: tCO) or propagation delay (tP), which is the time a flip-flop takes to change its output after the clock edge. The time for a high-to-low transition (tPHL) is sometimes different from the time for a low-to-high transition (tPLH). When cascading flip-flops which share the same clock (as in a shift register), it is important to ensure that the tCO of a preceding flip-flop is longer than the hold time (th) of the following flip-flop, so data present at the input of the succeeding flip-flop is properly "shifted in" following the active edge of the clock. This relationship between tCO and th is normally guaranteed if the flip-flops are physically identical. Furthermore, for correct operation, it is easy to verify that the clock period has to be greater than the sum tsu + th. Flip-flops can be generalized in at least two ways: by making them 1-of-N instead of 1-of-2, and by adapting them to logic with more than two states.
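The two cascading constraints described above (tCO of the preceding stage longer than th of the following stage, and a clock period greater than tsu + th) can be expressed as a simple check. The function name and the numeric values are illustrative:

```python
def shift_register_timing_ok(t_co, t_hold, t_setup, t_clk):
    """Check the cascading constraints (all times in the same unit):

    - the preceding stage's clock-to-output delay tCO must exceed the
      following stage's hold time, so data is not shifted through early;
    - the clock period must exceed the sum tsu + th.
    """
    return t_co > t_hold and t_clk > t_setup + t_hold

# Illustrative numbers, in nanoseconds:
print(shift_register_timing_ok(t_co=2.0, t_hold=0.5, t_setup=1.0, t_clk=10.0))  # -> True
print(shift_register_timing_ok(t_co=0.3, t_hold=0.5, t_setup=1.0, t_clk=10.0))  # -> False
```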
In the special cases of 1-of-3 encoding, or multi-valued ternary logic, such an element may be referred to as a flip-flap-flop.[30] In a conventional flip-flop, exactly one of the two complementary outputs is high. This can be generalized to a memory element with N outputs, exactly one of which is high (alternatively, where exactly one of N is low). The output is therefore always a one-hot (respectively, one-cold) representation. The construction is similar to a conventional cross-coupled flip-flop; each output, when high, inhibits all the other outputs.[31] Alternatively, more or less conventional flip-flops can be used, one per output, with additional circuitry to make sure only one at a time can be true.[32] Another generalization of the conventional flip-flop is a memory element for multi-valued logic. In this case the memory element retains exactly one of the logic states until the control inputs induce a change.[33] In addition, a multiple-valued clock can also be used, leading to new possible clock transitions.[34]
https://en.wikipedia.org/wiki/Flip-flop_(electronics)
Integrated injection logic (IIL, I2L, or I²L) is a class of digital circuits built with multiple-collector bipolar junction transistors (BJTs).[1] When introduced it had speed comparable to TTL yet was almost as low-power as CMOS, making it ideal for use in VLSI (and larger) integrated circuits. The gates can be made smaller with this logic family than with CMOS because complementary transistors are not needed. Although the logic voltage levels are very close (high: 0.7 V, low: 0.2 V), I2L has high noise immunity because it operates by current instead of voltage. I2L was developed in 1971 by Siegfried K. Wiedmann and Horst H. Berger, who originally called it merged-transistor logic (MTL).[2] A disadvantage of this logic family is that the gates draw power when not switching, unlike with CMOS. The I2L inverter gate is constructed with a PNP common-base current-source transistor and an NPN common-emitter open-collector inverter transistor (i.e., with its emitter connected to ground). On a wafer, these two transistors are merged. A small voltage (around 1 volt) is supplied to the emitter of the current-source transistor to control the current supplied to the inverter transistor. Transistors are used for current sources on integrated circuits because they are much smaller than resistors. Because the inverter is open-collector, a wired AND operation may be performed by connecting an output from each of two or more gates together. Thus the fan-out of an output used in such a way is one. However, additional outputs may be produced by adding more collectors to the inverter transistor. The gates can be constructed very simply with just a single layer of interconnect metal. In a discrete implementation of an I2L circuit, bipolar NPN transistors with multiple collectors can be replaced with multiple discrete 3-terminal NPN transistors connected in parallel, having their bases connected together and their emitters connected likewise.
The current-source transistor may be replaced with a resistor from the positive supply to the base of the inverter transistor, since discrete resistors are smaller and less expensive than discrete transistors. Similarly, the merged PNP current-injector transistor and the NPN inverter transistor can be implemented as separate discrete components. The heart of an I2L circuit is the common-emitter open-collector inverter. Typically, an inverter consists of an NPN transistor with the emitter connected to ground and the base biased with a forward current from the current source. The input is supplied to the base as either a current sink (low logic level) or as a high-Z floating condition (high logic level). The output of an inverter is at the collector. Likewise, it is either a current sink (low logic level) or a high-Z floating condition (high logic level). Like direct-coupled transistor logic, there is no resistor between the output (collector) of one NPN transistor and the input (base) of the following transistor. To understand how the inverter operates, it is necessary to understand the current flow. If the bias current is shunted to ground (low logic level), the transistor turns off and the collector floats (high logic level). If the bias current is not shunted to ground because the input is high-Z (high logic level), the bias current flows through the transistor to the emitter, switching on the transistor and allowing the collector to sink current (low logic level). Because the output of the inverter can sink current but cannot source current, it is safe to connect the outputs of multiple inverters together to form a wired AND gate. When the outputs of two inverters are wired together, the result is a two-input NOR gate, because the configuration (NOT A) AND (NOT B) is equivalent to NOT (A OR B) (per De Morgan's theorem). When the output of this NOR gate is then inverted by the I2L inverter in the upper right of the diagram, the result is a two-input OR gate.
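The wired-AND/NOR construction can be sketched at the logic level, treating each open-collector output as either "sinking" (low) or "floating" (high). This is an abstract Python model, not a circuit simulation; names are illustrative:

```python
def i2l_inverter(input_high):
    """Open-collector inverter: output floats high iff the input sinks (low)."""
    return not input_high

def wired_and(*outputs):
    """Tying open-collector outputs together: the shared node is high only
    if every output floats (i.e., none of them sinks current)."""
    return all(outputs)

def nor_gate(a, b):
    # (NOT a) AND (NOT b) == NOT (a OR b), per De Morgan's theorem.
    return wired_and(i2l_inverter(a), i2l_inverter(b))

# Check the NOR identity over all four input combinations:
for a in (False, True):
    for b in (False, True):
        assert nor_gate(a, b) == (not (a or b))
print("wired-AND NOR verified")
```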
Due to internal parasitic capacitance in transistors, higher currents sourced into the base of the inverter transistor result in faster switching speeds, and since the voltage difference between high and low logic levels is smaller for I2L than for other bipolar logic families (around 0.5 volts instead of around 3.3 or 5 volts), losses due to charging and discharging parasitic capacitances are minimized. I2L is relatively simple to construct on an integrated circuit, and was commonly used before the advent of CMOS logic by companies such as Motorola (now NXP Semiconductors)[3] and Texas Instruments. In 1975, Sinclair Radionics introduced one of the first consumer-grade digital watches, the Black Watch, which used I2L technology.[4] In 1976, Texas Instruments introduced the SBP0400 CPU, which used I2L technology. In the late 1970s, RCA used I²L in its CA3162 ADC 3-digit meter integrated circuit. In 1979, HP introduced a frequency-measurement instrument, the HP 5315A/B, based on an HP-made custom LSI chip that uses integrated injection logic (I2L) for low power consumption and high density, enabling portable battery operation, and also some emitter function logic (EFL) circuits where high speed is needed.[5]
https://en.wikipedia.org/wiki/Integrated_injection_logic
A Karnaugh map (KM or K-map) is a diagram that can be used to simplify a Boolean algebra expression. Maurice Karnaugh introduced the technique in 1953[1][2] as a refinement of Edward W. Veitch's 1952 Veitch chart,[3][4] which itself was a rediscovery of Allan Marquand's 1881 logical diagram[5][6] or Marquand diagram.[4] They are also known as Marquand–Veitch diagrams,[4] Karnaugh–Veitch (KV) maps, and (rarely) Svoboda charts.[7] An early advance in the history of formal logic methodology, Karnaugh maps remain relevant in the digital age, especially in the fields of logical circuit design and digital engineering.[4] A Karnaugh map reduces the need for extensive calculations by taking advantage of humans' pattern-recognition capability.[1] It also permits the rapid identification and elimination of potential race conditions. The required Boolean results are transferred from a truth table onto a two-dimensional grid where, in Karnaugh maps, the cells are ordered in Gray code,[8][4] and each cell position represents one combination of input conditions. Cells are also known as minterms, while each cell value represents the corresponding output value of the Boolean function. Optimal groups of 1s or 0s are identified, which represent the terms of a canonical form of the logic in the original truth table.[9] These terms can be used to write a minimal Boolean expression representing the required logic. Karnaugh maps are used to simplify real-world logic requirements so that they can be implemented using the minimal number of logic gates. A sum-of-products expression (SOP) can always be implemented using AND gates feeding into an OR gate, and a product-of-sums expression (POS) leads to OR gates feeding an AND gate. The POS expression gives the complement of the function (if F is the function, its complement will be F′).[10] Karnaugh maps can also be used to simplify logic expressions in software design.
Boolean conditions, as used for example in conditional statements, can get very complicated, which makes the code difficult to read and to maintain. Once minimised, canonical sum-of-products and product-of-sums expressions can be implemented directly using AND and OR logic operators.[11] Karnaugh maps are used to facilitate the simplification of Boolean algebra functions. For example, consider the Boolean function described by the following truth table. Following are two different notations describing the same function in unsimplified Boolean algebra, using the Boolean variables A, B, C, D and their inverses. In the example above, the four input variables can be combined in 16 different ways, so the truth table has 16 rows, and the Karnaugh map has 16 positions. The Karnaugh map is therefore arranged in a 4 × 4 grid. The row and column indices (shown across the top and down the left side of the Karnaugh map) are ordered in Gray code rather than binary numerical order. Gray code ensures that only one variable changes between each pair of adjacent cells. Each cell of the completed Karnaugh map contains a binary digit representing the function's output for that combination of inputs. After the Karnaugh map has been constructed, it is used to find one of the simplest possible forms (a canonical form) for the information in the truth table. Adjacent 1s in the Karnaugh map represent opportunities to simplify the expression. The minterms ('minimal terms') for the final expression are found by encircling groups of 1s in the map. Minterm groups must be rectangular and must have an area that is a power of two (i.e., 1, 2, 4, 8...). Minterm rectangles should be as large as possible without containing any 0s. Groups may overlap in order to make each one larger. The optimal groupings in the example below are marked by the green, red and blue lines, and the red and green groups overlap.
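The Gray-code ordering of the row and column indices can be generated programmatically; the standard binary-reflected Gray code guarantees the one-variable-change property, including across the wrap-around. A Python sketch with illustrative names:

```python
def gray_code(n):
    """n-bit binary-reflected Gray code: successive entries differ in one bit."""
    return [i ^ (i >> 1) for i in range(2 ** n)]

# The 2-bit sequence used to label the rows and columns of a 4 x 4 K-map:
labels = [format(g, "02b") for g in gray_code(2)]
print(labels)   # -> ['00', '01', '11', '10']

# Adjacent cells, including the wrap-around from last back to first,
# differ in exactly one variable:
codes = gray_code(2)
for i in range(len(codes)):
    diff = codes[i] ^ codes[(i + 1) % len(codes)]
    assert bin(diff).count("1") == 1
```

The wrap-around check is exactly why the map is toroidally connected: the last label ('10') is one bit away from the first ('00').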
The red group is a 2 × 2 square, the green group is a 4 × 1 rectangle, and the overlap area is indicated in brown. The cells are often denoted by a shorthand which describes the logical value of the inputs that the cell covers. For example, AD would mean a cell which covers the 2 × 2 area where A and D are true, i.e. the cells numbered 13, 9, 15, 11 in the diagram above. On the other hand, AD̄ would mean the cells where A is true and D is false (that is, D̄ is true). The grid is toroidally connected, which means that rectangular groups can wrap across the edges (see picture). Cells on the extreme right are actually 'adjacent' to those on the far left, in the sense that the corresponding input values only differ by one bit; similarly, so are those at the very top and those at the bottom. Therefore, AD̄ can be a valid term (it includes cells 12 and 8 at the top, and wraps to the bottom to include cells 10 and 14), as is B̄D̄, which includes the four corners. Once the Karnaugh map has been constructed and the adjacent 1s linked by rectangular and square boxes, the algebraic minterms can be found by examining which variables stay the same within each box. For the red grouping: Thus the first minterm in the Boolean sum-of-products expression is AC̄. For the green grouping, A and B maintain the same state, while C and D change. B is 0 and has to be negated before it can be included. The second term is therefore AB̄. Note that it is acceptable that the green grouping overlaps with the red one. In the same way, the blue grouping gives the term BCD̄. The solutions of each grouping are combined: the normal form of the circuit is AC̄ + AB̄ + BCD̄. Thus the Karnaugh map has guided a simplification of the original unsimplified expression. It would also have been possible to derive this simplification by carefully applying the axioms of Boolean algebra, but the time it takes to do that grows exponentially with the number of terms.
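The cell shorthand and the final groupings can be checked by enumerating the 16 cells, with A as the most significant bit as in the cell numbering used above. A Python sketch; the function names are illustrative:

```python
def minterms(predicate):
    """Cells of the 4-variable map (A is the MSB) where the term is true."""
    cells = []
    for m in range(16):
        a, b, c, d = (m >> 3) & 1, (m >> 2) & 1, (m >> 1) & 1, m & 1
        if predicate(a, b, c, d):
            cells.append(m)
    return cells

# The shorthand from the text: the term AD covers cells 13, 9, 15, 11.
print(minterms(lambda a, b, c, d: a and d))     # -> [9, 11, 13, 15]

# The simplified normal form AC' + AB' + BCD' from the three groupings:
f = minterms(lambda a, b, c, d:
             (a and not c) or (a and not b) or (b and c and not d))
print(f)
```

Listing the covered cells this way makes it easy to confirm that the red (AC̄) and green (AB̄) groups overlap on cells 8 and 9, the brown region mentioned above.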
The inverse of a function is solved in the same way by grouping the 0s instead.[nb 1] The three terms to cover the inverse are all shown with grey boxes with different colored borders: This yields the inverse: Through the use of De Morgan's laws, the product of sums can be determined: Karnaugh maps also allow easier minimization of functions whose truth tables include "don't care" conditions. A "don't care" condition is a combination of inputs for which the designer doesn't care what the output is. Therefore, "don't care" conditions can either be included in or excluded from any rectangular group, whichever makes it larger. They are usually indicated on the map with a dash or X. The example on the right is the same as the example above but with the value of f(1,1,1,1) replaced by a "don't care". This allows the red term to expand all the way down and, thus, removes the green term completely. This yields the new minimum equation: Note that the first term is just A, not AC̄. In this case, the don't care has dropped a term (the green rectangle); simplified another (the red one); and removed the race hazard (removing the yellow term as shown in the following section on race hazards). The inverse case is simplified as follows: Through the use of De Morgan's laws, the product of sums can be determined: Karnaugh maps are useful for detecting and eliminating race conditions. Race hazards are very easy to spot using a Karnaugh map, because a race condition may exist when moving between any pair of adjacent, but disjoint, regions circumscribed on the map. However, because of the nature of Gray coding, adjacent has a special definition explained above: we're in fact moving on a torus, rather than a rectangle, wrapping around the top, bottom, and the sides. Whether glitches will actually occur depends on the physical nature of the implementation, and whether we need to worry about it depends on the application.
In clocked logic, it is enough that the logic settles on the desired value in time to meet the timing deadline. In our example, we are not considering clocked logic. In our case, an additional term of AD̄ would eliminate the potential race hazard, bridging between the green and blue output states or blue and red output states: this is shown as the yellow region (which wraps around from the bottom to the top of the right half) in the adjacent diagram. The term is redundant in terms of the static logic of the system, but such redundant, or consensus, terms are often needed to assure race-free dynamic performance. Similarly, an additional term of ĀD must be added to the inverse to eliminate another potential race hazard. Applying De Morgan's laws creates another product-of-sums expression for f, but with a new factor of (A + D̄). The following are all the possible 2-variable, 2 × 2 Karnaugh maps. Listed with each are the minterms as a function of ∑m() and the race-hazard-free (see previous section) minimum equation. A minterm is defined as an expression that gives the most minimal form of expression of the mapped variables. All possible horizontal and vertical interconnected blocks can be formed. These blocks must have sizes that are powers of 2 (1, 2, 4, 8, 16, 32, ...). These expressions create a minimal logical mapping of the minimal logic variable expressions for the binary expressions to be mapped. Here are all the blocks with one field. A block can be continued across the bottom, top, left, or right of the chart. That can even wrap beyond the edge of the chart for variable minimization. This is because each logic variable corresponds to each vertical column and horizontal row. A visualization of the K-map can be considered cylindrical. The fields at edges on the left and right are adjacent, and the top and bottom are adjacent.
K-maps for four variables must be depicted as a donut or torus shape, as the four corners of the square drawn by the K-map are adjacent. Still more complex maps are needed for 5 or more variables. Related graphical minimization methods include:
https://en.wikipedia.org/wiki/Karnaugh_map
In automata theory, combinational logic (also referred to as time-independent logic[1]) is a type of digital logic that is implemented by Boolean circuits, where the output is a pure function of the present input only. This is in contrast to sequential logic, in which the output depends not only on the present input but also on the history of the input. In other words, sequential logic has memory while combinational logic does not. Combinational logic is used in computer circuits to perform Boolean algebra on input signals and on stored data. Practical computer circuits normally contain a mixture of combinational and sequential logic. For example, the part of an arithmetic logic unit, or ALU, that does mathematical calculations is constructed using combinational logic. Other circuits used in computers, such as half adders, full adders, half subtractors, full subtractors, multiplexers, demultiplexers, encoders and decoders are also made by using combinational logic. Practical design of combinational logic systems may require consideration of the finite time required for practical logical elements to react to changes in their inputs. Where an output is the result of the combination of several different paths with differing numbers of switching elements, the output may momentarily change state before settling at the final state, as the changes propagate along different paths.[2] Combinational logic is used to build circuits that produce specified outputs from certain inputs. The construction of combinational logic is generally done using one of two methods: a sum of products, or a product of sums.
Consider the following truth table: Using sum of products, all logical statements which yield true results are summed, giving the result: Using Boolean algebra, the result simplifies to the following equivalent of the truth table: Minimization (simplification) of combinational logic formulas is done using the following rules based on the laws of Boolean algebra: With the use of minimization (sometimes called logic optimization), a simplified logical function or circuit may be arrived upon, and the logic combinational circuit becomes smaller and easier to analyse, use, or build.
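The sum-of-products construction can be sketched generically: one product term per true row of the truth table, all terms summed together. The example table below (a two-input XOR) is illustrative only and is not the table referred to in the text:

```python
def sum_of_products(truth_table, names):
    """Build a sum-of-products expression from the rows that output 1.

    `truth_table` maps an input tuple to an output bit; the expression
    ORs together one AND term (product) per true row, with primed
    literals for inputs that are 0 in that row.
    """
    terms = []
    for inputs, output in truth_table.items():
        if output:
            literals = [n if bit else n + "'" for n, bit in zip(names, inputs)]
            terms.append("·".join(literals))
    return " + ".join(terms)

# Illustrative two-input truth table (XOR):
table = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
print(sum_of_products(table, ["A", "B"]))   # -> A'·B + A·B'
```

The resulting expression maps directly to AND gates feeding an OR gate; minimization (e.g. with a Karnaugh map) would then merge adjacent terms where possible.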
https://en.wikipedia.org/wiki/Combinational_logic
The following is a list of CMOS 4000-series digital logic integrated circuits. In 1968, the original 4000 series was introduced by RCA. Although more recent parts are considerably faster, the 4000 devices operate over a wide power supply range (3 V to 18 V recommended range for "B" series) and are well suited to unregulated battery-powered applications and interfacing with sensitive analogue electronics, where the slower operation may be an EMC advantage. The earlier datasheets included the internal schematics of the gate architectures, and a number of novel designs are able to 'mis-use' this additional information to provide semi-analog functions for timing skew and linear signal amplification.[1] Due to the popularity of these parts, other manufacturers released pin-to-pin compatible logic devices and kept the 4000 sequence number as an aid to identification of compatible parts. However, other manufacturers use different prefixes and suffixes on their part numbers, and not all devices are available from all sources or in all package sizes. Non-exhaustive list of manufacturers which make or have made these kinds of ICs. Current manufacturers of these ICs: Former manufacturers of these ICs: Since there are numerous 4000-series parts, this section groups related combinational logic parts to make it easier for the reader to choose part numbers. All parts in this section have normal inputs and push-pull outputs, unless stated differently. One-input voltage-translation gates: One-input logic gates: Two- to eight-input logic gates: AND-OR-invert (AOI) logic gates: This list consists mostly of part numbers from a 1983 RCA databook, though the leading "CD" and trailing letters (A, B, UB) have been removed for generic part number use. The numeric portion of part numbers from some manufacturers may not be identical to generic part numbers in this table.
Motorola typically prepended a "1" and removed the first "0" from part numbers within the range of 40100 to 40199; for example, RCA CD40174B becomes Motorola MC14174B.
https://en.wikipedia.org/wiki/List_of_4000_series_integrated_circuits
The following is a list of 7400-series digital logic integrated circuits. In the mid-1960s, the original 7400-series integrated circuits were introduced by Texas Instruments with the prefix "SN" to create the name SN74xx. Due to the popularity of these parts, other manufacturers released pin-to-pin compatible logic devices and kept the 7400 sequence number as an aid to identification of compatible parts. However, other manufacturers use different prefixes and suffixes on their part numbers. Some TTL logic parts were made with an extended military-specification temperature range. These parts are prefixed with 54 instead of 74 in the part number.[1] A short-lived 64 prefix on Texas Instruments parts indicated an industrial temperature range; this prefix had been dropped from the TI literature by 1973. Most recent 7400-series parts are fabricated in CMOS or BiCMOS technology rather than TTL. Surface-mount parts with a single gate (often in a 5-pin or 6-pin package) are prefixed with 741G instead of 74. Some manufacturers released some 4000-series equivalent CMOS circuits with a 74 prefix; for example, the 74HC4066[2] was a replacement for the 4066 with slightly different electrical characteristics (different power-supply voltage ratings, higher frequency capabilities, lower "on" resistances in analog switches, etc.). See List of 4000-series integrated circuits. Conversely, the 4000 series has "borrowed" from the 7400 series, such as the CD40193 and CD40161 being pin-for-pin functional replacements for 74C193 and 74C161. Older TTL parts made by manufacturers such as Signetics, Motorola, Mullard and Siemens may have a different numeric prefix and numbering series entirely; for example, in the European FJ family, the FJH101 is an 8-input NAND gate like the 7430. A few alphabetic characters to designate a specific logic subfamily may immediately follow the 74 or 54 in the part number, e.g., 74LS74 for low-power Schottky.
Some CMOS parts, such as the 74HCT74 (high-speed CMOS with TTL-compatible input thresholds), are functionally similar to the TTL part. Not all functions are available in all families. The generic descriptive meaning of these alphabetic characters was diluted by the various companies participating in the market at its peak, and the designations are not always consistent, especially with more recent offerings. The National Semiconductor trademarks FAST[3] and FACT[4] are usually cited by other companies when describing their own unique designations.[5][6] In a few instances, such as the 7478 and 74107, the same suffix in different families does not denote completely equivalent logic functions. Another extension to the series is the 7416xxx variant, representing mostly the 16-bit-wide counterparts of otherwise 8-bit-wide "base" chips with the same three ending digits. Thus, e.g., a "7416373" would be the 16-bit-wide equivalent of a "74373". Some 7416xxx parts, however, have no direct counterpart in the standard 74xxx range but deliver new functionality instead, which makes use of the 7416xxx series' higher pin count. For more details, refer primarily to the Texas Instruments documentation mentioned in the References section. For CMOS (AC, HC, etc.) subfamilies, read "open drain" for "open collector" in the table below. A few numeric suffixes have multiple conflicting assignments, such as the 74453. Since there are numerous 7400-series parts, the following groups related parts to make it easier to pick a useful part number. This section only includes combinational logic gates. For part numbers in this section, "x" is the 7400-series logic family, such as LS, ALS, HCT, AHCT, HC, AHC, LVC, ... Parts in this section have a pin count of 14 pins or more. The lower part numbers were established in the 1960s and 1970s; higher part numbers were then added incrementally over decades.
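The numbering scheme described above (manufacturer prefix, 54/64/74 series, subfamily letters, function number, package suffix) can be sketched as a heuristic parser. The split between subfamily letters and function number varies between manufacturers, so this is an illustrative sketch, not a catalog-accurate decoder:

```python
import re

# Heuristic decomposition of a 7400-series part number, per the scheme
# described above. Group names here are descriptive labels, not industry terms.
PART_RE = re.compile(
    r"^(?P<prefix>[A-Z]*)"            # manufacturer prefix, e.g. SN (TI)
    r"(?P<series>54|64|74)"           # 74 = commercial, 54 = military temperature range
    r"(?P<family>[A-Z]*(?:[123]G)?)"  # logic subfamily, e.g. LS, HCT, LVC1G
    r"(?P<function>\d+)"              # function number, e.g. 00, 74, 16373
    r"(?P<suffix>[A-Z]*)$"            # package/grade suffix
)

def parse_part(part: str) -> dict:
    m = PART_RE.match(part.upper())
    if not m:
        raise ValueError(f"unrecognized part number: {part}")
    return m.groupdict()

print(parse_part("SN74LS74"))   # prefix='SN', family='LS', function='74'
print(parse_part("74LVC1G14"))  # single-gate part: family='LVC1G', function='14'
print(parse_part("7416373"))    # wide-bus variant: function='16373'
```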
IC manufacturers continue to make a core subset of this group, but many of these part numbers are considered obsolete and no longer manufactured. Older discontinued parts may be available from a limited number of sellers as new old stock (NOS), though some are much harder to find. For the following table: the wide-bus range in the 74xxx series includes higher-numbered parts such as the 7416xxx, designed for extended functionality beyond the standard chips. These components often feature 16-bit or wider data handling, serving either as direct expansions of existing 8-bit designs (e.g., 74373 to 7416373) or introducing entirely new capabilities. They use higher pin counts to support larger data buses, advanced operations, and scalable digital logic for more complex circuit requirements. As board designs have migrated away from large amounts of logic chips, so has the need for many copies of the same gate in one package. Since about 1996,[12] there has been an ongoing trend toward one, two, or three logic gates per chip. Logic can now be placed where it is physically needed on a board, instead of running long signal traces to a full-size logic chip containing many copies of the same gate.[13][14] All chips in the following sections are available in 5- to 10-pin surface-mount packages. The digits after the 1G/2G/3G typically denote the same functional features as the older legacy chips, except for the multifunctional chips and 4-digit chip numbers, which are unique to these newer families. The "x" in the part number is a placeholder for the logic family name. For example, 74x1G14 in the "LVC" logic family would be "74LVC1G14".
The previously stated prefixes "SN" and "MC" denote the manufacturers Texas Instruments and ON Semiconductor respectively.[14][15][16] Some of the manufacturers that make these smaller IC chips are: Diodes Incorporated, Nexperia (NXP Semiconductors), ON Semiconductor (Fairchild Semiconductor), Texas Instruments (National Semiconductor), Toshiba.[14] The logic families available in small footprints are: AHC, AHCT, AUC, AUP, AXP, HC, HCT, LVC, VHC, NC7S, NC7ST, NC7SU, NC7SV. The LVC family is very popular in small footprints because it supports the most common logic voltages of 1.8 V, 3.3 V and 5 V, its inputs are 5 V tolerant when the device is powered at a lower voltage, and its outputs can drive 24 mA. Gates that are commonly available across most small-footprint families are 00, 02, 04, 08, 14, 32, 86, 125, 126.[14] Chips in this section typically contain the number of units noted by the number immediately before the 'G' in their prefix (e.g. 2G = 2 gates). All chips in this section have two power-supply pins to translate unidirectional logic signals between two different logic voltages. The logic families that support dual-supply voltage translation are AVC, AVCH, AXC, AXCH, AXP and LVC, where the "H" in AVCH and AXCH denotes the "bus hold" feature. Chips in the above table support the following voltage ranges on either power-supply pin:
https://en.wikipedia.org/wiki/List_of_7400_series_integrated_circuits
In computer engineering, a logic family is one of two related concepts. Before the widespread use of integrated circuits, various solid-state and vacuum-tube logic systems were used, but these were never as standardized and interoperable as the integrated-circuit devices. The most common logic family in modern semiconductor devices is metal–oxide–semiconductor (MOS) logic, due to low power consumption, small transistor sizes, and high transistor density. The list of packaged building-block logic families can be divided into categories, listed here in roughly chronological order of introduction, along with their usual abbreviations. The families RTL, DTL, and ECL were derived from the logic circuits used in early computers, originally implemented using discrete components. One example is the Philips NORBIT family of logic building blocks. The PMOS and I2L logic families were used for relatively short periods, mostly in special-purpose custom large-scale integration devices, and are generally considered obsolete. For example, early digital clocks or electronic calculators may have used one or more PMOS devices to provide most of the logic for the finished product. The F-14 Central Air Data Computer, the Intel 4004, Intel 4040, and Intel 8008 microprocessors, and their support chips were PMOS. Of these families, only ECL, TTL, NMOS, CMOS, and BiCMOS are still in widespread use. ECL is used for very high-speed applications despite its price and power demands, while NMOS logic is mainly used in VLSI applications such as CPUs and memory chips, which fall outside the scope of this article. Present-day "building block" logic gate ICs are based on the ECL, TTL, CMOS, and BiCMOS families. Resistor–transistor logic (RTL) is a class of digital circuits built using resistors as the input network and bipolar junction transistors (BJTs) as switching devices. The Atanasoff–Berry Computer used resistor-coupled vacuum-tube logic circuits similar to RTL.
Several early transistorized computers (e.g., the IBM 1620, 1959) used RTL, implemented with discrete components. A family of simple resistor–transistor logic integrated circuits was developed at Fairchild Semiconductor for the Apollo Guidance Computer in 1962. Texas Instruments soon introduced its own family of RTL. A variant with integrated capacitors, RCTL, had increased speed but lower noise immunity than RTL. It was made by Texas Instruments as their "51XX" series. Diode–transistor logic (DTL) is a class of digital circuits in which the logic gating function (e.g., AND) is performed by a diode network and the amplifying function is performed by a transistor. Diode logic was used with vacuum tubes in the earliest electronic computers in the 1940s, including ENIAC. Diode–transistor logic was used in the IBM 608, the first all-transistorized computer. Early transistorized computers were implemented using discrete transistors, resistors, diodes and capacitors. The first diode–transistor logic family of integrated circuits was introduced by Signetics in 1962. DTL was also made by Fairchild and Westinghouse. A family of diode logic and diode–transistor logic integrated circuits was developed by Texas Instruments for the D-37C Minuteman II guidance computer in 1962, but these devices were not available to the public. A variant of DTL called "high-threshold logic" incorporated Zener diodes to create a large offset between logic 1 and logic 0 voltage levels. These devices usually ran off a 15-volt power supply and were found in industrial control, where the high differential was intended to minimize the effect of noise.[3] P-type MOS (PMOS) logic uses p-channel MOSFETs to implement logic gates and other digital circuits; n-type MOS (NMOS) logic uses n-channel MOSFETs.
For devices of equal current-driving capability, n-channel MOSFETs can be made smaller than p-channel MOSFETs because p-channel charge carriers (holes) have lower mobility than n-channel charge carriers (electrons); also, producing only one type of MOSFET on a silicon substrate is cheaper and technically simpler. These were the driving principles in the design of NMOS logic, which uses n-channel MOSFETs exclusively. However, even neglecting leakage current, NMOS logic consumes power when no switching is taking place, unlike CMOS logic. The MOSFET, invented at Bell Labs between 1955 and 1960, had both pMOS and nMOS devices on a 20 μm process.[4][5][6][7][8] The original MOSFET devices had a gate length of 20 μm and a gate-oxide thickness of 100 nm.[9] However, the nMOS devices were impractical, and only the pMOS type were practical working devices.[8] A more practical NMOS process was developed several years later. NMOS was initially faster than CMOS, so NMOS was more widely used for computers in the 1970s.[10] With advances in technology, CMOS logic displaced NMOS logic in the mid-1980s to become the preferred process for digital chips. Emitter-coupled logic (ECL) uses an overdriven bipolar junction transistor (BJT) differential amplifier with single-ended input and limited emitter current. The ECL family, also known as current-mode logic (CML), was invented by IBM as current-steering logic for use in the transistorized IBM 7030 Stretch computer, where it was implemented using discrete components. The first ECL logic family to be available as integrated circuits was introduced by Motorola as MECL in 1962.[11] In TTL logic, bipolar junction transistors (BJTs) perform both the logic and amplifying functions. The first transistor–transistor logic family of integrated circuits was introduced by Sylvania as Sylvania Universal High-Level Logic (SUHL) in 1963. Texas Instruments introduced the 7400 series TTL family in 1964.
Transistor–transistor logic uses bipolar transistors to form its integrated circuits.[12] TTL has changed significantly over the years, with newer versions replacing the older types. Since the transistors of a standard TTL gate are saturated switches, minority-carrier storage time in each junction limits the switching speed of the device. Variations on the basic TTL design are intended to reduce these effects and improve speed, power consumption, or both. The German physicist Walter H. Schottky formulated a theory predicting the Schottky effect, which led to the Schottky diode and later to Schottky transistors. For the same power dissipation, Schottky transistors have a faster switching speed than conventional transistors because the Schottky diode prevents the transistor from saturating and storing charge; see Baker clamp. Logic gates built with Schottky transistors switch faster than TTL gates built with ordinary BJTs but consume more power. With Low-power Schottky (LS), internal resistance values were increased to reduce power consumption and increase switching speed over the original version. The introduction of Advanced Low-power Schottky (ALS) further increased speed and reduced power consumption. A faster logic family called FAST (Fairchild Advanced Schottky TTL, designated F) was also introduced that was faster than the original Schottky TTL. CMOS logic gates use complementary arrangements of enhancement-mode n-channel and p-channel field-effect transistors. Since the initial devices used oxide-isolated metal gates, they were called CMOS (complementary metal–oxide–semiconductor) logic. In contrast to TTL, CMOS uses almost no power in the static state (that is, when inputs are not changing). A CMOS gate draws no current other than leakage when in a steady 1 or 0 state. When the gate switches states, current is drawn from the power supply to charge the capacitance at the output of the gate.
This means that the current draw of CMOS devices increases with switching rate (controlled by clock speed, typically). The first CMOS family of logic integrated circuits was introduced by RCA as CD4000 COS/MOS, the 4000 series, in 1968. Initially CMOS logic was slower than LS-TTL. However, because the logic thresholds of CMOS were proportional to the power-supply voltage, CMOS devices were well adapted to battery-operated systems with simple power supplies. CMOS gates can also tolerate much wider voltage ranges than TTL gates because the logic thresholds are (approximately) proportional to the power-supply voltage rather than being the fixed levels required by bipolar circuits. The silicon area required to implement such digital CMOS functions has shrunk rapidly. VLSI technology, incorporating millions of basic logic operations onto one chip, almost exclusively uses CMOS. The extremely small capacitance of the on-chip wiring increased performance by several orders of magnitude. On-chip clock rates as high as 4 GHz have become common, approximately 1000 times faster than the technology of 1970. CMOS chips often work with a broader range of power-supply voltages than other logic families. Early TTL ICs required a power-supply voltage of 5 V, but early CMOS could use 3 to 15 V.[13] Lowering the supply voltage reduces the charge stored on any capacitances and consequently reduces the energy required for a logic transition. Reduced energy implies less heat dissipation. The energy stored in a capacitance C charged to V volts is ½CV². By lowering the power supply from 5 V to 3.3 V, switching power was reduced by almost 60 percent (power dissipation is proportional to the square of the supply voltage). Many motherboards have a voltage regulator module to provide the even lower power-supply voltages required by many CPUs.
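The ½CV² figure above can be checked numerically. The 10 pF node capacitance below is an illustrative assumption; the percentage saving is independent of it:

```python
def switching_energy(c_farads: float, v_volts: float) -> float:
    """Energy drawn from the supply to charge a node capacitance: E = 1/2 C V^2."""
    return 0.5 * c_farads * v_volts ** 2

c = 10e-12  # assume a 10 pF output node (illustrative value)
e5 = switching_energy(c, 5.0)
e33 = switching_energy(c, 3.3)
print(f"5 V:   {e5 * 1e12:.1f} pJ per transition")
print(f"3.3 V: {e33 * 1e12:.1f} pJ per transition")
print(f"saving: {1 - e33 / e5:.1%}")  # ~56%, i.e. 'almost 60 percent'
```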
Because of the incompatibility of the CD4000 series of chips with the previous TTL family, a new standard emerged that combined the best of the TTL family with the advantages of the CD4000 family: the 74HC family, which runs from 3.3 V to 5 V power supplies and uses logic levels relative to the power supply, together with devices that use 5 V power supplies and TTL logic levels. Interconnecting any two logic families often required special techniques such as additional pull-up resistors or purpose-built interface circuits, since the logic families may use different voltage levels to represent 1 and 0 states and may have other interface requirements only met within the logic family. TTL logic levels are different from those of CMOS; generally, a TTL output does not rise high enough to be reliably recognized as a logic 1 by a CMOS input. This problem was solved by the invention of the 74HCT family of devices, which uses CMOS technology but TTL input logic levels. These devices only work with a 5 V power supply. They form a replacement for TTL, although HCT is slower than original TTL (HC logic has about the same speed as original TTL). Other CMOS circuit families within integrated circuits include cascode voltage switch logic (CVSL) and pass-transistor logic (PTL) of various sorts. These are generally used "on-chip" and are not delivered as building-block medium-scale or small-scale integrated circuits.[14][15] One major improvement was to combine CMOS inputs and TTL drivers to form a new type of logic device called BiCMOS logic, of which the LVT and ALVT logic families are the most important. The BiCMOS family has many members, including ABT logic, ALB logic, ALVT logic, BCT logic and LVT logic. With HC and HCT logic and LS-TTL logic competing in the market, it became clear that further improvements were needed to create the ideal logic device combining high speed, low power dissipation and compatibility with older logic families.
A whole range of newer families has emerged that use CMOS technology. A short list of the most important family designators of these newer devices follows. There are many others, including AC/ACT logic, AHC/AHCT logic, ALVC logic, AUC logic, AVC logic, CBT logic, CBTLV logic, FCT logic and LVC logic (LVCMOS). Integrated injection logic (IIL or I2L) uses bipolar transistors in a current-steering arrangement to implement logic functions.[16] It was used in some integrated circuits, but it is now considered obsolete.[17] The following logic families would either have been used to build up systems from functional blocks such as flip-flops, counters, and gates, or else would be used as "glue" logic to interconnect very-large-scale integration devices such as memory and processors. Not shown are some early obscure logic families from the early 1960s, such as DCTL (direct-coupled transistor logic), which did not become widely available. Propagation delay is the time taken for a two-input NAND gate to produce a result after a change of state at its inputs. Toggle speed represents the fastest speed at which a J-K flip-flop could operate. Power per gate is for an individual 2-input NAND gate; usually there would be more than one gate per IC package. Values are typical and would vary slightly depending on application conditions, manufacturer, temperature, and particular type of logic circuit. Introduction year is when at least some of the devices of the family were available in volume for civilian use. Some military applications pre-dated civilian use.[18][19] Several techniques and design styles are primarily used in designing large single-chip application-specific integrated circuits (ASICs) and CPUs, rather than generic logic families intended for use in multi-chip applications. These design styles can typically be divided into two main categories: static techniques and clocked dynamic techniques.
(See static versus dynamic logic for some discussion of the advantages and disadvantages of each category.)
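The propagation-delay and power-per-gate figures described above are often combined into a power-delay product (energy per switching event). The numbers below are commonly quoted textbook values for a few TTL subfamilies, not guaranteed datasheet figures:

```python
# family: (typical propagation delay in ns, typical power per gate in mW)
# These are commonly quoted textbook values, shown for illustration only.
families = {
    "74 (standard TTL)": (10.0, 10.0),
    "74LS":              (9.5, 2.0),
    "74ALS":             (4.0, 1.2),
}

for name, (tpd_ns, p_mw) in families.items():
    pdp_pj = tpd_ns * p_mw  # ns x mW = pJ (1e-9 s x 1e-3 W = 1e-12 J)
    print(f"{name:18s} power-delay product = {pdp_pj:.1f} pJ")
```

A lower power-delay product means the family does more switching per joule, which is one way to compare families that trade speed against power differently.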
https://en.wikipedia.org/wiki/Logic_family
In digital circuits, a logic level is one of a finite number of states that a digital signal can inhabit. Logic levels are usually represented by the voltage difference between the signal and ground, although other standards exist. The range of voltage levels that represents each state depends on the logic family being used. A logic-level shifter can be used to allow compatibility between different circuits. In binary logic the two levels are logical high and logical low, which generally correspond to binary numbers 1 and 0 respectively, or truth values true and false respectively. Signals with one of these two levels can be used in Boolean algebra for digital circuit design or analysis. The use of either the higher or the lower voltage level to represent either logic state is arbitrary. The two options are active high (positive logic) and active low (negative logic). Active-high and active-low states can be mixed at will: for example, a read-only memory integrated circuit may have a chip-select signal that is active-low, while the data and address bits are conventionally active-high. Occasionally a logic design is simplified by inverting the choice of active level (see De Morgan's laws). The name of an active-low signal is historically written with a bar above it to distinguish it from an active-high signal. For example, the name Q, read Q bar or Q not, represents an active-low signal. The conventions commonly used are: Many control signals in electronics are active-low signals[2] (usually reset lines, chip-select lines and so on). Logic families such as TTL can sink more current than they can source, so fanout and noise immunity increase. Active-low signalling also allows for wired-OR logic if the logic gates are open-collector/open-drain with a pull-up resistor. Examples of this are the I²C bus, CAN bus, and PCI bus. Some signals have a meaning in both states, and notation may indicate this. For example, it is common to have a read/write line designated R/W, indicating that the signal is high in the case of a read and low in the case of a write.
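De Morgan's laws, mentioned above, are the reason inverting the active level can simplify a design: a gate that ANDs active-low signals is the same hardware as a gate that ORs the corresponding active-high signals. A quick exhaustive truth-table check:

```python
from itertools import product

# Verify both De Morgan identities over every input combination.
for a, b in product([False, True], repeat=2):
    assert (not (a and b)) == ((not a) or (not b))   # NAND == OR of inverted inputs
    assert (not (a or b)) == ((not a) and (not b))   # NOR  == AND of inverted inputs

print("De Morgan's laws hold for all input combinations")
```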
The two logical states are usually represented by two different voltages, but two different currents are used in some logic signaling, such as the digital current loop interface and current-mode logic. High and low thresholds are specified for each logic family. When below the low threshold, the signal is low; when above the high threshold, the signal is high. Intermediate levels are undefined, resulting in highly implementation-specific circuit behavior. It is usual to allow some tolerance in the voltage levels used; for example, 0 to 2 volts might represent logic 0, and 3 to 5 volts logic 1. A voltage of 2 to 3 volts would be invalid and occur only in a fault condition or during a logic-level transition. However, few logic circuits can detect such a condition, and most devices will interpret the signal simply as high or low in an undefined or device-specific manner. Some logic devices incorporate Schmitt-trigger inputs, whose behavior is much better defined in the threshold region and which have increased resilience to small variations in the input voltage. The problem for the circuit designer is to avoid circumstances that produce intermediate levels, so that the circuit behaves predictably. Nearly all digital circuits use a consistent logic level for all internal signals; that level, however, varies from one system to another. Interconnecting any two logic families often requires special techniques such as additional pull-up resistors or purpose-built interface circuits known as level shifters. A level shifter connects one digital circuit that uses one logic level to another digital circuit that uses a different logic level. Often two level shifters are used, one at each system: a line driver converts from internal logic levels to standard interface line levels, while a line receiver converts from interface levels to internal voltage levels. For example, TTL levels are different from those of CMOS.
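The threshold rule described above can be sketched as a small classifier. The 0.8 V / 2.0 V defaults are the classic 5 V TTL input thresholds; the function name is an invention for illustration:

```python
def classify(v: float, v_il: float = 0.8, v_ih: float = 2.0) -> str:
    """Classify an input voltage against a logic family's input thresholds.

    Below v_il the input reads low; above v_ih it reads high; the band in
    between is undefined, where real devices behave unpredictably.
    """
    if v <= v_il:
        return "low"
    if v >= v_ih:
        return "high"
    return "undefined"

for v in (0.4, 1.3, 2.4):
    print(f"{v} V -> {classify(v)}")
```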
Generally, a TTL output does not rise high enough to be reliably recognized as a logic 1 by a CMOS input, especially if it is only connected to a high-input-impedance CMOS input that does not source significant current. This problem was solved by the invention of the 74HCT family of devices, which uses CMOS technology but TTL input logic levels. These devices only work with a 5 V power supply. Though rare, ternary computers evaluate base-3 (three-valued or ternary) logic using 3 voltage levels. In three-state logic, an output device can be in one of three possible states: 0, 1, or Z, with the last meaning high impedance. This is not a voltage or logic level but means that the output is not controlling the state of the connected circuit. Four-valued logic adds a fourth state, X (don't care), meaning the value of the signal is unimportant and undefined: an input is undefined, or an output signal may be chosen for implementation convenience (see Karnaugh map § Don't cares). IEEE 1164 defines 9 logic states for use in electronic design automation. The standard includes strongly and weakly driven signals, high impedance, and unknown and uninitialized states. In solid-state storage devices, a multi-level cell stores data using multiple voltages; storing n bits in one cell requires the device to reliably distinguish 2ⁿ distinct voltage levels. Digital line codes may use more than two states to encode and transmit data more efficiently. Examples include alternate mark inversion and 4B3T from telecommunications, and the pulse-amplitude modulation variants used by Ethernet over twisted pair. For instance, 100BASE-TX uses MLT-3 encoding with three differential voltage levels (−1 V, 0 V, +1 V), while 1000BASE-T encodes data using five differential voltage levels (−1 V, −0.5 V, 0 V, +0.5 V, +1 V).[8] Once received, the line coding is converted back to binary.
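The MLT-3 code used by 100BASE-TX, mentioned above, cycles the line through the levels 0, +1, 0, −1: each 1 bit advances one step in the cycle and each 0 bit holds the current level. A minimal encoder sketch:

```python
def mlt3_encode(bits):
    """MLT-3 line coding: advance through the level cycle on 1, hold on 0."""
    cycle = [0, +1, 0, -1]
    idx = 0
    out = []
    for bit in bits:
        if bit:
            idx = (idx + 1) % 4
        out.append(cycle[idx])
    return out

print(mlt3_encode([1, 1, 1, 1, 0, 1]))  # [1, 0, -1, 0, 0, 1]
```

Because a run of 1s produces at most one transition per bit and a run of 0s produces none, MLT-3 concentrates signal energy at lower frequencies than a simple binary code.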
https://en.wikipedia.org/wiki/Logic_level
Magnetic logic is digital logic made using the non-linear properties of wound ferrite cores.[1] Magnetic logic represents 0 and 1 by magnetising cores clockwise or anticlockwise.[2] Examples of magnetic logic include core memory. AND, OR, NOT and clocked shift logic gates can also be constructed using appropriate windings and the use of diodes. A complete computer called the ALWAC 800 was constructed using magnetic logic, but it was not commercially successful. The Elliott 803 computer used a combination of magnetic cores (for logic functions) and germanium transistors (as pulse amplifiers) in its CPU. It was a commercial success. William F. Steagall of the Sperry Rand Corporation developed the technology in an effort to improve the reliability of computers. In his patent application,[3] filed in 1954, he stated: "Where, as here, reliability of operation is a factor of prime importance, vacuum tubes, even though acceptable for most present-day electronic applications, are faced with accuracy requirements of an entirely different order of magnitude. For example, if two devices each having 99.5% reliability response are both utilized in a combined relationship in a given device, that device will have an accuracy or reliability factor of .995 × .995 = 99%. If ten such devices are combined, the factor drops to 95.1%. If, however, 500 such units are combined, the reliability factor of the device drops to 8.1%, and for a thousand, to 0.67%. It will thus be seen that even though the reliability of operation of individual vacuum tubes may be very much above 99.95%, where many thousands of units are combined, as in the large computers, the reliability factor of each unit must be extremely high to combine to produce an error free device. In practice of course such an ideal can only be approached. Magnetic amplifiers of the type here described meet the necessary requirements of reliability of performance for the combinations discussed."
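Steagall's arithmetic above is just compounding: the reliability of a chain of devices is the product of the individual reliabilities, so 99.5%-reliable parts degrade rapidly at scale. Reproducing his figures:

```python
def chain_reliability(per_unit: float, n_units: int) -> float:
    """Overall reliability of n independent devices that must all work."""
    return per_unit ** n_units

# Matches the patent quote: ~99%, 95.1%, ~8.1%, 0.67%
for n in (2, 10, 500, 1000):
    print(f"{n:5d} units: {chain_reliability(0.995, n):.2%}")
```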
Magnetic logic was able to achieve switching speeds of about 1 MHz but was overtaken by semiconductor-based electronics, which could switch much faster. Solid-state semiconductors were able to increase their density according to Moore's law, and thus proved more effective as IC technology developed. Magnetic logic has the advantage of being non-volatile: it may be powered down without losing its state.[1]
https://en.wikipedia.org/wiki/Magnetic_logic
NMOS or nMOS logic (from n-type metal–oxide–semiconductor) uses n-type MOSFETs (metal–oxide–semiconductor field-effect transistors) to implement logic gates and other digital circuits.[1][2] NMOS transistors operate by creating an inversion layer in a p-type transistor body. This inversion layer, called the n-channel, can conduct electrons between n-type source and drain terminals. The n-channel is created by applying voltage to the third terminal, called the gate. Like other MOSFETs, nMOS transistors have four modes of operation: cut-off (or subthreshold), triode, saturation (sometimes called active), and velocity saturation. NMOS AND-by-default logic can produce unusual glitches or buggy behavior in NMOS components, such as the 6502 "illegal opcodes", which are absent in CMOS 6502s. In some cases, such as Commodore's VIC-II chip, the bugs present in the chip's logic were extensively exploited by programmers for graphics effects. For many years, NMOS circuits were much faster than comparable PMOS and CMOS circuits, which had to use much slower p-channel transistors. NMOS was also easier to manufacture than CMOS, as the latter has to implement p-channel transistors in special n-wells on the p-substrate; NMOS is also not prone to damage from bus conflicts and not as vulnerable to electrostatic discharge damage. The major drawback of NMOS (and most other logic families) is that a direct current must flow through a logic gate even when the output is in a steady state (low, in the case of NMOS). This means static power dissipation, i.e. power drain even when the circuit is not switching, leading to high power consumption. Another disadvantage of NMOS circuits is their thermal output. Because constant current must flow through the circuit to hold the transistors' states, NMOS circuits can generate a considerable amount of heat in operation, which can reduce the device's reliability. This was especially problematic with the early large-geometry process nodes of the 1970s.
CMOS circuits, by contrast, generate almost no heat unless the transistor count approaches 1 million. CMOS components were relatively uncommon from the 1970s to the early 1980s and would typically be indicated with a "C" in the part number. Throughout the 1980s, both NMOS and CMOS parts were widely used, with CMOS becoming more widespread as the decade went on. NMOS was preferred for components that performed active processing, such as CPUs or graphics processors, because of its higher speed and cheaper manufacturing; such chips were expensive compared to passive components such as memory chips. Some chips, such as the Motorola 68030, were hybrids with both NMOS and CMOS sections. CMOS has been near-universal in integrated circuits since the 1990s. Additionally, just as in diode–transistor logic, transistor–transistor logic, emitter-coupled logic, etc., the asymmetric input logic levels make NMOS and PMOS circuits more susceptible to noise than CMOS. These disadvantages are why CMOS logic has supplanted most of these types in most high-speed digital circuits such as microprocessors, despite the fact that CMOS was originally very slow compared to logic gates built with bipolar transistors. MOS stands for metal–oxide–semiconductor, reflecting the way MOS transistors were originally constructed, predominantly before the 1970s, with gates of metal, typically aluminium. Since around 1970, however, most MOS circuits have used self-aligned gates made of polycrystalline silicon, a technology first developed by Federico Faggin at Fairchild Semiconductor. These silicon gates are still used in most types of MOSFET-based integrated circuits, although metal gates (Al or Cu) started to reappear in the early 2000s for certain types of high-speed circuits, such as high-performance microprocessors. The MOSFETs are n-type enhancement-mode transistors, arranged in a so-called "pull-down network" (PDN) between the logic-gate output and the negative supply voltage (typically ground). A pull-up (i.e.
a "load" that can be thought of as a resistor; see below) is placed between the positive supply voltage and each logic-gate output. Any logic gate, including the logical inverter, can then be implemented by designing a network of parallel and/or series circuits such that, if the desired output for a certain combination of Boolean input values is zero (or false), the PDN will be active, meaning that at least one transistor is allowing a current path between the negative supply and the output. This causes a voltage drop over the load, and thus a low voltage at the output, representing the zero. As an example, here is a NOR gate implemented in schematic NMOS. If either input A or input B is high (logic 1, true), the respective MOS transistor acts as a very low resistance between the output and the negative supply, forcing the output to be low (logic 0, false). When both A and B are high, both transistors are conductive, creating an even lower-resistance path to ground. The only case where the output is high is when both transistors are off, which occurs only when both A and B are low, thus satisfying the truth table of a NOR gate. A MOSFET can be made to operate as a resistor, so the whole circuit can be made with n-channel MOSFETs only. NMOS circuits are slow to transition from low to high. When transitioning from high to low, the transistors provide low resistance, and the capacitive charge at the output drains away very quickly (similar to discharging a capacitor through a very low-value resistor). But the resistance between the output and the positive supply rail is much greater, so the low-to-high transition takes longer (similar to charging a capacitor through a high-value resistor). Using a resistor of lower value will speed up the process but also increase static power dissipation. However, a better (and the most common) way to make the gates faster is to use depletion-mode transistors instead of enhancement-mode transistors as loads. This is called depletion-load NMOS logic.
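The NOR-gate behavior described above can be captured in a small behavioral sketch: any high input turns on a pull-down transistor that forces the output low, and only when every pull-down is off does the load pull the output high. This models logic behavior only, not the electrical details:

```python
def nmos_nor(*inputs: bool) -> bool:
    """Behavioral model of an NMOS NOR gate with one pull-down per input."""
    pull_down_active = any(inputs)   # any 'on' transistor shorts the output to ground
    return not pull_down_active      # otherwise the load resistor pulls the output high

# Print the full truth table of a two-input NOR gate.
for a in (False, True):
    for b in (False, True):
        print(f"A={int(a)} B={int(b)} -> out={int(nmos_nor(a, b))}")
```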
https://en.wikipedia.org/wiki/NMOS_logic
The parametron is a logic circuit element invented by Eiichi Goto in 1954.[1] The parametron is essentially a resonant circuit with a nonlinear reactive element which oscillates at half the driving frequency.[2] The oscillation can be made to represent a binary digit by the choice between two stationary phases π radians (180 degrees) apart.[3] Parametrons were used in early Japanese computers from 1954 through the early 1960s. A prototype parametron-based computer, the PC-1, was built at the University of Tokyo in 1958. Parametrons were used in early Japanese computers because they were reliable and inexpensive, but they were ultimately surpassed by transistors due to differences in speed.[4]
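As a rough numerical illustration (a toy model, not the original circuit analysis), the two stationary phases can be modelled as a cosine at half the drive frequency with phase 0 or π, so the waveform encoding a 1 is exactly the inverse of the waveform encoding a 0:

```python
import math

# Toy model: a parametron bit as a subharmonic oscillation at half
# the drive frequency f_drive, with phase 0 (bit 0) or pi (bit 1).
def parametron_signal(bit: int, f_drive: float, t: float) -> float:
    phase = 0.0 if bit == 0 else math.pi
    return math.cos(2 * math.pi * (f_drive / 2) * t + phase)

# The two phase states are mirror images: s1(t) == -s0(t)
t = 0.123
s0 = parametron_signal(0, 2.0, t)
s1 = parametron_signal(1, 2.0, t)
print(round(s0, 6), round(s1, 6))
```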
https://en.wikipedia.org/wiki/Parametron
Processor design is a subfield of computer science and computer engineering (fabrication) that deals with creating a processor, a key component of computer hardware. The design process involves choosing an instruction set and a certain execution paradigm (e.g. VLIW or RISC) and results in a microarchitecture, which might be described in e.g. VHDL or Verilog. For microprocessor design, this description is then manufactured employing one of the various semiconductor device fabrication processes, resulting in a die which is bonded onto a chip carrier. This chip carrier is then soldered onto, or inserted into a socket on, a printed circuit board (PCB). The mode of operation of any processor is the execution of lists of instructions. Instructions typically include those to compute or manipulate data values using registers, change or retrieve values in read/write memory, perform relational tests between data values, and control program flow. Processor designs are often tested and validated on one or several FPGAs before sending the design of the processor to a foundry for semiconductor fabrication.[1] CPU design is divided into multiple components. Information is transferred through datapaths (such as ALUs and pipelines). These datapaths are controlled through logic by control units. Memory components include register files and caches to retain information, or certain actions. Clock circuitry maintains internal rhythms and timing through clock drivers, PLLs, and clock distribution networks. Pad transceiver circuitry allows signals to be received and sent, and a logic gate cell library is used to implement the logic. 
Logic gates are the foundation for processor design as they are used to implement most of the processor's components.[2] CPUs designed for high-performance markets might require custom (optimized or application-specific (see below)) designs for each of these items to achieve frequency, power-dissipation, and chip-area goals, whereas CPUs designed for lower-performance markets might lessen the implementation burden by acquiring some of these items by purchasing them as intellectual property. Control logic implementation techniques (logic synthesis using CAD tools) can be used to implement datapaths, register files, and clocks. Common logic styles used in CPU design include unstructured random logic, finite-state machines, microprogramming (common from 1965 to 1985), and programmable logic arrays (common in the 1980s, no longer common). Device types used to implement the logic include: A CPU design project generally has these major tasks: Re-designing a CPU core to a smaller die area helps to shrink everything (a "photomask shrink"), resulting in the same number of transistors on a smaller die. It improves performance (smaller transistors switch faster), reduces power (smaller wires have less parasitic capacitance), and reduces cost (more CPUs fit on the same wafer of silicon). Releasing a CPU on the same size die, but with a smaller CPU core, keeps the cost about the same but allows higher levels of integration within one very-large-scale integration chip (additional cache, multiple CPUs, or other components), improving performance and reducing overall system cost. As with most complex electronic designs, the logic verification effort (proving that the design does not have bugs) now dominates the project schedule of a CPU. Key CPU architectural innovations include the index register, cache, virtual memory, instruction pipelining, superscalar execution, CISC, RISC, virtual machines, emulators, microprogramming, and the stack. 
A variety of new CPU design ideas have been proposed, including reconfigurable logic, clockless CPUs, computational RAM, and optical computing. Benchmarking is a way of testing CPU speed. Examples include SPECint and SPECfp, developed by the Standard Performance Evaluation Corporation, and ConsumerMark, developed by the Embedded Microprocessor Benchmark Consortium (EEMBC). Some of the commonly used metrics include: There may be tradeoffs in optimizing some of these metrics. In particular, many design techniques that make a CPU run faster make the "performance per watt", "performance per dollar", and "deterministic response" much worse, and vice versa. There are several different markets in which CPUs are used. Since each of these markets differs in its requirements for CPUs, the devices designed for one market are in most cases inappropriate for the other markets. As of 2010, in the general-purpose computing market, that is, desktop, laptop, and server computers commonly used in businesses and homes, the Intel IA-32 and the 64-bit version x86-64 architecture dominate the market, with rivals PowerPC and SPARC maintaining much smaller customer bases. Yearly, hundreds of millions of IA-32 architecture CPUs are used by this market. A growing percentage of these processors are for mobile implementations such as netbooks and laptops.[5] Since these devices are used to run countless different types of programs, these CPU designs are not specifically targeted at one type of application or one function. The demands of being able to run a wide range of programs efficiently have made these CPU designs among the more advanced technically, along with some disadvantages of being relatively costly and having high power consumption. In 1984, most high-performance CPUs required four to five years to develop.[6] Scientific computing is a much smaller niche market (in revenue and units shipped). It is used in government research labs and universities. 
Before 1990, CPU design was often done for this market, but mass-market CPUs organized into large clusters have proven to be more affordable. The main remaining area of active hardware design and research for scientific computing is for high-speed data transmission systems to connect mass-market CPUs. As measured by units shipped, most CPUs are embedded in other machinery, such as telephones, clocks, appliances, vehicles, and infrastructure. Embedded processors sell in the volume of many billions of units per year, however, mostly at much lower price points than those of general-purpose processors. These single-function devices differ from the more familiar general-purpose CPUs in several ways: The embedded CPU family with the largest number of total units shipped is the 8051, averaging nearly a billion units per year.[7] The 8051 is widely used because it is very inexpensive. The design time is now roughly zero, because it is widely available as commercial intellectual property. It is now often embedded as a small part of a larger system on a chip. The silicon cost of an 8051 is now as low as US$0.001, because some implementations use as few as 2,200 logic gates and take 0.4730 square millimeters of silicon.[8][9] As of 2009, more CPUs are produced using the ARM architecture family instruction sets than any other 32-bit instruction set.[10][11] The ARM architecture and the first ARM chip were designed in about one and a half years and 5 human years of work time.[12] The 32-bit Parallax Propeller microcontroller architecture and the first chip were designed by two people in about 10 human years of work time.[13] The 8-bit AVR architecture and first AVR microcontroller were conceived and designed by two students at the Norwegian Institute of Technology. 
The 8-bit 6502 architecture and the first MOS Technology 6502 chip were designed in 13 months by a group of about 9 people.[14] The 32-bit Berkeley RISC I and RISC II processors were mostly designed by a series of students as part of a four-quarter sequence of graduate courses.[15] This design became the basis of the commercial SPARC processor design. For about a decade, every student taking the 6.004 class at MIT was part of a team—each team had one semester to design and build a simple 8-bit CPU out of 7400 series integrated circuits. One team of 4 students designed and built a simple 32-bit CPU during that semester.[16] Some undergraduate courses require a team of 2 to 5 students to design, implement, and test a simple CPU in an FPGA in a single 15-week semester.[17] The MultiTitan CPU was designed with 2.5 man-years of effort, which was considered "relatively little design effort" at the time.[18] 24 people contributed to the 3.5-year MultiTitan research project, which included designing and building a prototype CPU.[19] For embedded systems, the highest performance levels are often not needed or desired due to the power consumption requirements. This allows for the use of processors which can be totally implemented by logic synthesis techniques. These synthesized processors can be implemented in a much shorter amount of time, giving quicker time-to-market.
https://en.wikipedia.org/wiki/Processor_design
A programmable logic controller (PLC) or programmable controller is an industrial computer that has been ruggedized and adapted for the control of manufacturing processes, such as assembly lines, machines, robotic devices, or any activity that requires high reliability, ease of programming, and process fault diagnosis. PLCs can range from small modular devices with tens of inputs and outputs (I/O), in a housing integral with the processor, to large rack-mounted modular devices with thousands of I/O, which are often networked to other PLC and SCADA systems.[1] They can be designed for many arrangements of digital and analog I/O, extended temperature ranges, immunity to electrical noise, and resistance to vibration and impact. PLCs were first developed in the automobile manufacturing industry to provide flexible, rugged, and easily programmable controllers to replace hard-wired relay logic systems. Dick Morley, who invented the first PLC, the Modicon 084, for General Motors in 1968, is considered the father of the PLC. A PLC is an example of a hard real-time system, since output results must be produced in response to input conditions within a limited time, otherwise unintended operation may result. Programs to control machine operation are typically stored in battery-backed-up or non-volatile memory. The PLC originated in the late 1960s in the automotive industry in the US and was designed to replace relay logic systems.[2] Before then, control logic for manufacturing was mainly composed of relays, cam timers, drum sequencers, and dedicated closed-loop controllers.[3] The hard-wired nature of these components made it difficult for design engineers to alter the automation process. Changes would require rewiring and careful updating of the documentation. Troubleshooting was a tedious process.[4] When general-purpose computers became available, they were soon applied to control logic in industrial processes. 
These early computers were unreliable[5] and required specialist programmers and strict control of working conditions, such as temperature, cleanliness, and power quality.[6] The PLC provided several advantages over earlier automation systems. It was designed to tolerate the industrial environment better than systems intended for office use, and was more reliable, compact, and required less maintenance than relay systems. It was easily expandable with additional I/O modules. While relay systems required tedious and sometimes complicated hardware changes in case of reconfiguration, a PLC can be reconfigured by loading new or modified code. This allowed for easier iteration over manufacturing process design. With a simple programming language focused on logic and switching operations, it was more user-friendly than computers using general-purpose programming languages. Early PLCs were programmed in ladder logic, which strongly resembled a schematic diagram of relay logic. It also permitted its operation to be monitored.[7][8] In 1968, GM Hydramatic, the automatic transmission division of General Motors, issued a request for proposals for an electronic replacement for hard-wired relay systems based on a white paper written by engineer Edward R. Clark. The winning proposal came from Bedford Associates of Bedford, Massachusetts. The result, built in 1969, was the first PLC and was designated the 084, because it was Bedford Associates' eighty-fourth project.[9][10] Bedford Associates started a company dedicated to developing, manufacturing, selling, and servicing this new product, which they named Modicon (standing for modular digital controller). One of the people who worked on that project was Dick Morley, who is considered to be the father of the PLC.[11] The Modicon brand was sold in 1977 to Gould Electronics and later to Schneider Electric, its current owner.[10] Around this same time, Modicon created Modbus, a data communications protocol used with its PLCs. 
Modbus has since become a standard open protocol commonly used to connect many industrial electrical devices.[12] One of the first 084 models built is now on display at Schneider Electric's facility in North Andover, Massachusetts. It was presented to Modicon by GM when the unit was retired after nearly twenty years of uninterrupted service. Modicon used the 84 moniker at the end of its product range until after the 984 made its appearance.[13] In a parallel development, Odo Josef Struger is sometimes known as the "father of the programmable logic controller" as well.[11] He was involved in the invention of the Allen-Bradley programmable logic controller[14][15][16] and is credited with coining the PLC acronym.[11][14] Allen-Bradley (now a brand owned by Rockwell Automation) became a major PLC manufacturer in the United States during his tenure.[17] Struger played a leadership role in developing IEC 61131-3 PLC programming language standards.[11] Many early PLC programming applications were not capable of graphical representation of the logic, and so the logic was instead represented as a series of logic expressions in some kind of Boolean format, similar to Boolean algebra. As programming terminals evolved, ladder logic became more commonly used, because it was a familiar format for electro-mechanical control panels. Newer formats, such as state logic,[18] function block diagrams, and structured text, also exist. 
Ladder logic remains popular because PLCs solve the logic in a predictable and repeating sequence, and ladder logic allows the person writing the logic to see any issues with the timing of the logic sequence more easily than would be possible in other formats.[19] Up to the mid-1990s, PLCs were programmed using proprietary programming panels or special-purpose programming terminals, which often had dedicated function keys representing the various logical elements of PLC programs.[9] Some proprietary programming terminals displayed the elements of PLC programs as graphic symbols, but plain ASCII character representations of contacts, coils, and wires were common. Programs were stored on cassette tape cartridges. Facilities for printing and documentation were minimal due to a lack of memory capacity. The oldest PLCs used magnetic-core memory.[20] A PLC is an industrial microprocessor-based controller with programmable memory used to store program instructions and various functions.[21] It consists of: PLCs require a programming device, which is used to develop and later download the created program into the memory of the controller.[22] Modern PLCs generally contain a real-time operating system, such as OS-9 or VxWorks.[23] There are two types of mechanical design for PLC systems. A single box (also called a brick) is a small programmable controller that fits all units and interfaces into one compact casing, although, typically, additional expansion modules for inputs and outputs are available. The second design type – a modular PLC – has a chassis (also called a rack) that provides space for modules with different functions, such as power supply, processor, selection of I/O modules, and communication interfaces – all of which can be customized for the particular application.[24] Several racks can be administered by a single processor and may have thousands of inputs and outputs. 
Either a special high-speed serial I/O link or a comparable communication method is used so that racks can be distributed away from the processor, reducing the wiring costs for large plants. Discrete (digital) signals can only take an on or off value (1 or 0, true or false). Examples of devices providing a discrete signal include limit switches and photoelectric sensors.[25] Analog signals can use voltage or current that is analogous to the monitored variable and can take any value within their scale. Pressure, temperature, flow, and weight are often represented by analog signals. These are typically interpreted as integer values with various ranges of accuracy depending on the device and the number of bits available to store the data.[25] For example, an analog 0 to 10 V or 4-20 mA current loop input would be converted into an integer value of 0 to 32,767. The PLC will take this value and translate it into the desired units of the process so the operator or program can read it. Some special processes need to work permanently with minimum unwanted downtime. Therefore, it is necessary to design a system that is fault tolerant. In such cases, to increase the system availability in the event of hardware component failure, redundant CPU or I/O modules with the same functionality can be added to a hardware configuration to prevent a total or partial process shutdown due to hardware failure. Other redundancy scenarios could be related to safety-critical processes; for example, large hydraulic presses could require that two PLCs turn on an output before the press can come down, in case one PLC does not behave properly. Programmable logic controllers are intended to be used by engineers without a programming background. For this reason, a graphical programming language called ladder logic was first developed. 
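The raw-count-to-engineering-units translation mentioned above is a simple linear scaling. A minimal sketch (the function name and ranges are illustrative, not taken from any particular PLC):

```python
def scale_analog(raw: int, raw_min: int = 0, raw_max: int = 32767,
                 eng_min: float = 0.0, eng_max: float = 100.0) -> float:
    """Linearly map a raw ADC count onto engineering units (e.g. degrees C, PSI)."""
    span = (eng_max - eng_min) / (raw_max - raw_min)
    return eng_min + (raw - raw_min) * span

# A 4-20 mA level sensor mapped onto 0-10 m of tank level:
print(round(scale_analog(32767, eng_min=0.0, eng_max=10.0), 6))  # full scale
print(round(scale_analog(0, eng_min=0.0, eng_max=10.0), 6))      # bottom of range
```

Real PLC scaling blocks also typically clamp out-of-range readings and flag broken-wire conditions (e.g. a 4-20 mA loop reading below 4 mA), which this sketch omits.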
It resembles the schematic diagram of a system built with electromechanical relays and was adopted by many manufacturers and later standardized in the IEC 61131-3 control systems programming standard. As of 2015, it is still widely used, thanks to its simplicity.[26] As of 2015, the majority of PLC systems adhere to the IEC 61131-3 standard, which defines 2 textual programming languages: Structured Text (similar to Pascal) and Instruction List; as well as 3 graphical languages: ladder logic, function block diagram, and sequential function chart.[26][27] Instruction List was deprecated in the third edition of the standard.[28] Modern PLCs can be programmed in a variety of ways, from the relay-derived ladder logic to programming languages such as specially adapted dialects of BASIC and C.[29] While the fundamental concepts of PLC programming are common to all manufacturers, differences in I/O addressing, memory organization, and instruction sets mean that PLC programs are never perfectly interchangeable between different makers. Even within the same product line of a single manufacturer, different models may not be directly compatible.[30] Manufacturers develop programming software for their PLCs. In addition to being able to program PLCs in multiple languages, they provide common features like hardware diagnostics and maintenance, software debugging, and offline simulation.[31] PLC programs are typically written in a programming device, which can take the form of a desktop console, special software on a personal computer, or a handheld device.[31] The program is then downloaded to the PLC through a cable connection or over a network. It is stored either in non-volatile flash memory or battery-backed-up RAM on the PLC. In some PLCs, the program is transferred from the programming device using a programming board that writes the program into a removable chip, such as an EPROM, which is then inserted into the PLC. 
An incorrectly programmed PLC can result in lost productivity and dangerous conditions for the controlled equipment. PLC simulation is a feature often found in PLC programming software. It allows for testing and debugging early in a project's development. Testing the project in simulation improves its quality, increases the level of safety associated with equipment, and can save time during the installation and commissioning of automated control applications, since many scenarios can be tried and tested before the system is activated.[31][32] The main difference compared to most other computing devices is that PLCs are intended for, and therefore tolerant of, more severe environmental conditions (such as dust, moisture, heat, cold), while offering extensive input/output (I/O) to connect the PLC to sensors and actuators. PLC input can include simple digital elements such as limit switches, analog variables from process sensors (such as temperature and pressure), and more complex data such as that from positioning or machine vision systems.[33] PLC output can include elements such as indicator lamps, sirens, electric motors, pneumatic or hydraulic cylinders, magnetic relays, solenoids, or analog outputs. The input/output arrangements may be built into a simple PLC, or the PLC may have external I/O modules attached to a fieldbus or computer network that plugs into the PLC. The functionality of the PLC has evolved over the years to include sequential relay control, motion control, process control, distributed control systems, and networking. The data handling, storage, processing power, and communication capabilities of some modern PLCs are approximately equivalent to those of desktop computers. PLC-like programming combined with remote I/O hardware allows a general-purpose desktop computer to overlap some PLCs in certain applications. 
Desktop computer controllers have not been generally accepted in heavy industry because desktop computers run on less stable operating systems than PLCs, and because desktop computer hardware is typically not designed to the same levels of tolerance to temperature, humidity, vibration, and longevity as the processors used in PLCs. Operating systems such as Windows do not lend themselves to deterministic logic execution, with the result that the controller may not always respond to changes of input status with the consistency in timing expected from PLCs. Desktop logic applications find use in less critical situations, such as laboratory automation and use in small facilities where the application is less demanding and critical. The most basic function of a programmable logic controller is to emulate the functions of electromechanical relays. Discrete inputs are given a unique address, and a PLC instruction can test if the input state is on or off. Just as a series of relay contacts performs a logical AND function, not allowing current to pass unless all the contacts are closed, so a series of "examine if on" instructions will energize its output storage bit if all the input bits are on. Similarly, a parallel set of instructions will perform a logical OR. In an electromechanical relay wiring diagram, a group of contacts controlling one coil is called a "rung" of a "ladder diagram", and this concept is also used to describe PLC logic. Some models of PLC limit the number of series and parallel instructions in one "rung" of logic. The output of each rung sets or clears a storage bit, which may be associated with a physical output address or which may be an "internal coil" with no physical connection. Such internal coils can be used, for example, as a common element in multiple separate rungs. Unlike physical relays, there is usually no limit to the number of times an input, output, or internal coil can be referenced in a PLC program. 
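The AND/OR rung evaluation described above can be sketched as follows. This is a toy model, not any vendor's instruction set: each rung is a set of parallel branches, each branch a series of contacts (an address plus a normally-closed flag), and the solved result sets a coil bit in an image table.

```python
# Toy rung solver: series contacts AND together, parallel branches OR together.
def solve_rung(branches, image):
    # branches: list of series paths; each contact is (address, normally_closed)
    return any(
        all(image[addr] != normally_closed for addr, normally_closed in path)
        for path in branches
    )

image = {"Start": True, "Stop": False, "Motor": False}
# Classic seal-in rung: (Start OR Motor) AND NOT Stop -> Motor coil
rung = [[("Start", False), ("Stop", True)],
        [("Motor", False), ("Stop", True)]]
image["Motor"] = solve_rung(rung, image)
print(image["Motor"])  # True: Start pressed, Stop not pressed
```

The second branch, which references the Motor coil itself, gives the seal-in pattern: once energized, the motor output holds itself on until the Stop contact opens, illustrating how the same bit can appear as both a contact and a coil.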
Some PLCs enforce a strict left-to-right, top-to-bottom execution order for evaluating the rung logic. This is different from electro-mechanical relay contacts, which, in a sufficiently complex circuit, may either pass current left-to-right or right-to-left, depending on the configuration of surrounding contacts. The elimination of these "sneak paths" is either a bug or a feature, depending on the programming style. More advanced instructions of the PLC may be implemented as functional blocks, which carry out some operation when enabled by a logical input and which produce outputs to signal, for example, completion or errors, while manipulating variables internally that may not correspond to discrete logic. PLCs use built-in ports, such as USB, Ethernet, RS-232, RS-485, or RS-422, to communicate with external devices (sensors, actuators) and systems (programming software, SCADA, user interfaces). Communication is carried over various industrial network protocols, like Modbus or EtherNet/IP. Many of these protocols are vendor specific. PLCs used in larger I/O systems may have peer-to-peer (P2P) communication between processors. This allows separate parts of a complex process to have individual control while allowing the subsystems to coordinate over the communication link. These communication links are also often used for user interface devices such as keypads or PC-type workstations. Formerly, some manufacturers offered dedicated communication modules as an add-on function where the processor had no network connection built in. PLCs may need to interact with people for the purpose of configuration, alarm reporting, or everyday control. A human-machine interface (HMI) is employed for this purpose. HMIs are also referred to as man-machine interfaces (MMIs) and graphical user interfaces (GUIs). A simple system may use buttons and lights to interact with the user. Text displays are available, as well as graphical touch screens. 
More complex systems use programming and monitoring software installed on a computer, with the PLC connected via a communication interface. A PLC works in a program scan cycle, where it executes its program repeatedly. The simplest scan cycle consists of 3 steps: The program follows the sequence of instructions. It typically takes a time span of tens of milliseconds for the processor to evaluate all the instructions and update the status of all outputs.[35] If the system contains remote I/O—for example, an external rack with I/O modules—then that introduces additional uncertainty in the response time of the PLC system.[34] As PLCs became more advanced, methods were developed to change the sequence of ladder execution, and subroutines were implemented.[36] Special-purpose I/O modules may be used where the scan time of the PLC is too long to allow predictable performance. Precision timing modules, or counter modules for use with shaft encoders, are used where the scan time would be too long to reliably count pulses or detect the sense of rotation of an encoder. This allows even a relatively slow PLC to still interpret the counted values to control a machine, as the accumulation of pulses is done by a dedicated module that is unaffected by the speed of program execution.[37] In his book from 1998, E. A. Parr pointed out that even though most programmable controllers require physical keys and passwords, the lack of strict access control and version control systems, as well as an easy-to-understand programming language, make it likely that unauthorized changes to programs will happen and remain unnoticed.[38] Prior to the discovery of the Stuxnet computer worm in June 2010, the security of PLCs received little attention. Modern programmable controllers generally contain real-time operating systems, which can be vulnerable to exploits in a similar way as desktop operating systems like Microsoft Windows. 
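The read-execute-write scan cycle described above can be sketched as a single function. The helper names are hypothetical, and a real PLC firmware loop also interleaves housekeeping and communications; the key idea shown is that inputs are snapshotted once per scan and outputs are written together at the end.

```python
def scan_cycle(read_inputs, program, write_outputs):
    """One PLC-style scan: snapshot inputs, solve logic, then update outputs."""
    image_in = read_inputs()       # 1. read all physical inputs into an image table
    image_out = program(image_in)  # 2. execute the program against the snapshot
    write_outputs(image_out)       # 3. write all outputs together

# Toy demo: a lamp that simply follows a button.
inputs = {"button": True}
outputs = {}
scan_cycle(lambda: dict(inputs),
           lambda img: {"lamp": img["button"]},
           outputs.update)
print(outputs)  # {'lamp': True}
```

Because the program only ever sees the snapshot, an input that changes mid-scan cannot produce inconsistent results within that scan; it is picked up on the next one.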
PLCs can also be attacked by gaining control of a computer they communicate with.[23] Since 2011, these concerns have grown – networking is becoming more commonplace in the PLC environment, connecting the previously separated plant floor networks and office networks.[39] In February 2021, Rockwell Automation publicly disclosed a critical vulnerability affecting its Logix controller family. The secret cryptographic key used to verify communication between the PLC and workstation could be extracted from the programming software (Studio 5000 Logix Designer) and used to remotely change program code and configuration of a connected controller. The vulnerability was given a severity score of 10 out of 10 on the CVSS vulnerability scale. At the time of writing, the mitigation of the vulnerability was to limit network access to affected devices.[40][41] Safety PLCs can be either standalone devices or safety-rated hardware and functionality added to existing controller architectures (Allen-Bradley GuardLogix, Siemens F-series, etc.). These differ from conventional PLC types by being suitable for safety-critical applications, for which PLCs have traditionally been supplemented with hard-wired safety relays and areas of memory dedicated to the safety instructions. The standard of safety level is the SIL. A safety PLC might be used to control access to a robot cell with trapped-key access, or to manage the shutdown response to an emergency stop button on a conveyor production line. Such PLCs typically have a restricted regular instruction set augmented with safety-specific instructions designed to interface with emergency stop buttons, light screens, and other safety-related devices. The flexibility that such systems offer has resulted in rapid growth of demand for these controllers. PLCs are well adapted to a range of automation tasks. 
These are typically industrial processes in manufacturing where the cost of developing and maintaining the automation system is high relative to the total cost of the automation, and where changes to the system would be expected during its operational life. PLCs contain input and output devices compatible with industrial pilot devices and controls; little electrical design is required, and the design problem centers on expressing the desired sequence of operations. PLC applications are typically highly customized systems, so the cost of a packaged PLC is low compared to the cost of a specific custom-built controller design. On the other hand, in the case of mass-produced goods, customized control systems are economical. This is due to the lower cost of the components, which can be optimally chosen instead of a "generic" solution, and the fact that the non-recurring engineering charges are spread over thousands or millions of units. Programmable controllers are widely used in motion, positioning, or torque control. Some manufacturers produce motion control units to be integrated with a PLC so that G-code (for a CNC machine) can be used to instruct machine movements. These are for small machines and systems with low or medium volume. They can execute PLC languages such as Ladder, Flow-Chart/Grafcet, etc. They are similar to traditional PLCs, but their small size allows developers to design them into custom printed circuit boards like a microcontroller, without computer programming knowledge, but with a language that is easy to use, modify, and maintain. 
They sit between the classic PLC / micro-PLC and microcontrollers. A microcontroller-based design would be appropriate where hundreds or thousands of units will be produced, so the development cost (design of power supplies, input/output hardware, and necessary testing and certification) can be spread over many sales, and where the end user would not need to alter the control. Automotive applications are an example; millions of units are built each year, and very few end users alter the programming of these controllers. However, some specialty vehicles, such as transit buses, economically use PLCs instead of custom-designed controls, because the volumes are low and the development cost would be uneconomical.[42] Very complex process control, such as that used in the chemical industry, may require algorithms and performance beyond the capability of even high-performance PLCs. Very high-speed or precision controls may also require customized solutions; for example, aircraft flight controls. Single-board computers using semi-customized or fully proprietary hardware may be chosen for very demanding control applications where the high development and maintenance cost can be supported. "Soft PLCs" running on desktop-type computers can interface with industrial I/O hardware while executing programs within a version of a commercial operating system adapted for process control needs.[42] The rising popularity of single-board computers has also had an influence on the development of PLCs. Traditional PLCs are generally closed platforms, but some newer PLCs (e.g. groov EPIC from Opto 22, ctrlX from Bosch Rexroth, PFC200 from Wago, PLCnext from Phoenix Contact, and Revolution Pi from Kunbus) provide the features of traditional PLCs on an open platform. In more recent years, small products called programmable logic relays (PLRs), or smart relays, have become more common and accepted. 
These are similar to PLCs and are used in light industries where only a few points of I/O are needed and low cost is desired. These small devices are typically made in a common physical size and shape by several manufacturers and branded by the makers of larger PLCs to fill their low-end product range. Most of these have 8 to 12 discrete inputs, 4 to 8 discrete outputs, and up to 2 analog inputs. Most such devices include a tiny postage-stamp-sized LCD screen for viewing simplified ladder logic (only a very small portion of the program is visible at a given time) and the status of I/O points. Typically these screens are accompanied by a four-way rocker push-button plus four more separate push-buttons, similar to the key buttons on a VCR remote control, which are used to navigate and edit the logic. Most have an RS-232 or RS-485 port for connecting to a PC so that programmers can use user-friendly software for programming instead of the small LCD and push-button set. Unlike regular PLCs, which are usually modular and greatly expandable, PLRs are usually neither modular nor expandable, but their cost can be significantly lower than that of a PLC, and they still offer robust design and deterministic execution of the logic. A variant of PLCs used in remote locations is the remote terminal unit, or RTU. An RTU is typically a low-power, ruggedized PLC whose key function is to manage the communications links between the site and the central control system (typically SCADA) or, in some modern systems, "the cloud". Unlike factory automation using wired communication protocols such as Ethernet, communications links to remote sites are often radio-based and less reliable. To account for the reduced reliability, an RTU will buffer messages or switch to alternate communications paths. When buffering messages, the RTU will timestamp each message so that a full history of site events can be reconstructed.
RTUs, being PLCs, have a wide range of I/O and are fully programmable, typically with languages from the IEC 61131-3 standard that is common to many PLCs, RTUs and DCSs. In remote locations, it is common to use an RTU as a gateway for a PLC, where the PLC performs all site control and the RTU manages communications, time-stamps events and monitors ancillary equipment. On sites with only a handful of I/O points, the RTU may also be the site PLC and will perform both communications and control functions.
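Whether in a PLC, PLR or RTU, the control logic above runs in a repeating scan cycle: read the inputs, solve the logic, write the outputs. As a rough sketch (not vendor code; the single rung below, a start/stop seal-in circuit, is a hypothetical example), the cycle can be modeled in Python:

```python
# Minimal sketch of a PLC scan cycle: read inputs, solve logic, write outputs.
# The "program" here is one classic ladder rung: a start/stop seal-in circuit.

def solve_logic(inputs, outputs):
    """Motor runs if (start pressed OR already running) AND stop not pressed."""
    outputs["motor"] = (inputs["start"] or outputs["motor"]) and not inputs["stop"]
    return outputs

def scan(input_snapshots):
    """Run one scan per input snapshot, as a real PLC does every few ms."""
    outputs = {"motor": False}
    history = []
    for inputs in input_snapshots:        # each snapshot = one scan's input image
        outputs = solve_logic(inputs, outputs)
        history.append(outputs["motor"])  # the "write outputs" step
    return history

snapshots = [
    {"start": True,  "stop": False},  # operator presses start -> motor on
    {"start": False, "stop": False},  # start released; seal-in keeps it on
    {"start": False, "stop": True},   # stop pressed -> motor off
]
print(scan(snapshots))  # [True, True, False]
```

The fixed read-solve-write order is what gives PLC logic its deterministic execution: within one scan, every rung sees a consistent snapshot of the inputs.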
https://en.wikipedia.org/wiki/Programmable_logic_controller
A programmable logic device (PLD) is an electronic component used to build reconfigurable digital circuits. Unlike digital logic constructed using discrete logic gates with fixed functions, the function of a PLD is undefined at the time of manufacture. Before the PLD can be used in a circuit it must be programmed to implement the desired function.[1] Compared to fixed logic devices, programmable logic devices simplify the design of complex logic and may offer superior performance.[2] Unlike with microprocessors, programming a PLD changes the connections made between the gates in the device. PLDs can broadly be categorised, in increasing order of complexity, into simple programmable logic devices (SPLDs), comprising programmable array logic, programmable logic array and generic array logic; complex programmable logic devices (CPLDs); and field-programmable gate arrays (FPGAs). In 1969, Motorola offered the XC157, a mask-programmed gate array with 12 gates and 30 uncommitted input/output pins.[3] In 1970, Texas Instruments developed a mask-programmable IC based on the IBM read-only associative memory, or ROAM. This device, the TMS2000, was programmed by altering the metal layer during the production of the IC. The TMS2000 had up to 17 inputs and 18 outputs with 8 JK flip-flops for memory. TI coined the term programmable logic array (PLA) for this device.[4] In 1971, General Electric Company (GE) was developing a programmable logic device based on the new programmable read-only memory (PROM) technology. This experimental device improved on IBM's ROAM by allowing multilevel logic. Intel had just introduced the floating-gate UV EPROM, so the researchers at GE incorporated that technology. The GE device was the first erasable PLD ever developed, predating the Altera EPLD by over a decade. GE obtained several early patents on programmable logic devices.[5][6][7] In 1973, National Semiconductor introduced a mask-programmable PLA device (DM7575) with 14 inputs and 8 outputs and no memory registers.
This was more popular than the TI part, but the cost of making the metal mask limited its use. The device is significant because it was the basis for the field-programmable logic array produced by Signetics in 1975, the 82S100. (Intersil actually beat Signetics to market, but poor yield doomed their part.)[8][9] In 1974, GE entered into an agreement with Monolithic Memories (MMI) to develop a mask-programmable logic device incorporating the GE innovations. The device was named programmable associative logic array, or PALA. The MMI 5760 was completed in 1976 and could implement multilevel or sequential circuits of over 100 gates. The device was supported by a GE design environment in which Boolean equations were converted to mask patterns for configuring the device. The part was never brought to market.[10] A programmable logic array (PLA) has a programmable AND-gate array, which links to a programmable OR-gate array, whose outputs can then be conditionally complemented. A PLA is similar in concept to a ROM; however, a PLA does not provide full decoding of the variables and does not generate all the minterms, as a ROM does. PAL devices have arrays of transistor cells arranged in a "fixed-OR, programmable-AND" plane used to implement "sum-of-products" binary logic equations for each of the outputs in terms of the inputs and either synchronous or asynchronous feedback from the outputs. MMI introduced a breakthrough device in 1978, the programmable array logic, or PAL. The architecture was simpler than that of Signetics' FPLA because it omitted the programmable OR array. This made the parts faster, smaller and cheaper.
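The AND-plane/OR-plane structure just described amounts to evaluating sum-of-products equations over the inputs. A rough Python model of the idea (the fuse map below is a made-up example, not any real device's configuration):

```python
# Sketch of a PLA/PAL evaluating sum-of-products logic.
# Each product term lists required input values; an output is the OR
# of the product terms connected to it by the (hypothetical) fuse map.

def product_term(inputs, term):
    """AND plane: term maps input name -> required value (True/False)."""
    return all(inputs[name] == value for name, value in term.items())

def pla(inputs, fuse_map):
    """OR plane: each output ORs its connected product terms."""
    return {
        out: any(product_term(inputs, term) for term in terms)
        for out, terms in fuse_map.items()
    }

# Example fuse map implementing XOR and AND of inputs a, b:
fuses = {
    "xor": [{"a": True, "b": False}, {"a": False, "b": True}],  # a AND NOT b, OR NOT a AND b
    "and": [{"a": True, "b": True}],                            # a AND b
}
print(pla({"a": True, "b": False}, fuses))  # {'xor': True, 'and': False}
```

Programming a real PLA means blowing (or leaving intact) the fuses that decide which inputs feed each product term and which product terms feed each OR gate; the dictionary above plays the role of that fuse pattern.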
They were available in 20-pin 300-mil DIP packages, while the FPLAs came in 28-pin 600-mil packages. The PAL Handbook demystified the design process. The PALASM design software (PAL assembler) converted the engineers' Boolean equations into the fuse pattern required to program the part. The PAL devices were soon second-sourced by National Semiconductor, Texas Instruments and AMD. After MMI succeeded with the 20-pin PAL parts, AMD introduced the 24-pin 22V10 PAL with additional features. After buying out MMI (1987), AMD spun off a consolidated operation as Vantis, and that business was acquired by Lattice Semiconductor in 1999. An improvement on the PAL was the generic array logic device, or GAL, invented by Lattice Semiconductor in 1985. This device has the same logical properties as the PAL but can be erased and reprogrammed. The GAL is very useful in the prototyping stage of a design, when any bugs in the logic can be corrected by reprogramming. GALs are programmed and reprogrammed using a PAL programmer or, in the case of chips that support it, by using the in-circuit programming technique. Lattice GALs combine CMOS and electrically erasable (E2) floating-gate technology for a high-speed, low-power logic device. A similar device called a PEEL (programmable electrically erasable logic) was introduced by the International CMOS Technology (ICT) corporation. Sometimes GAL chips are referred to as simple programmable logic devices (SPLDs), analogous to the complex programmable logic devices (CPLDs) discussed below. PALs and GALs are available only in small sizes, equivalent to a few hundred logic gates. For bigger logic circuits, complex PLDs, or CPLDs, can be used. These contain the equivalent of several PALs linked by programmable interconnections, all in one integrated circuit. CPLDs can replace thousands, or even hundreds of thousands, of logic gates. Some CPLDs are programmed using a PAL programmer, but this method becomes inconvenient for devices with hundreds of pins.
A second method of programming is to solder the device to its printed circuit board, then feed it with a serial data stream from a personal computer. The CPLD contains a circuit that decodes the data stream and configures the CPLD to perform its specified logic function. Some manufacturers, such as Altera and Atmel (now Microchip), use JTAG to program CPLDs in-circuit from .JAM files. While PALs were being developed into GALs and CPLDs (all discussed above), a separate stream of development was happening. This type of device is based on gate-array technology and is called the field-programmable gate array (FPGA). Early examples of FPGAs are the 82S100 array and 82S105 sequencer by Signetics, introduced in the late 1970s. The 82S100 was an array of AND terms. The 82S105 also had flip-flop functions. (Note: the 82S100 and similar ICs from Signetics have a PLA structure, AND plane + OR plane.) FPGAs use a grid of logic gates, and once stored, the data doesn't change, similar to that of an ordinary gate array. The term field-programmable means the device is programmed by the customer, not the manufacturer. FPGAs and gate arrays are similar, but gate arrays can only be configured at the factory during fabrication.[11][12][13] FPGAs are usually programmed after being soldered down to the circuit board, in a manner similar to that of larger CPLDs. In most larger FPGAs, the configuration is volatile and must be re-loaded into the device whenever power is applied or different functionality is required. Configuration is typically stored in a configuration PROM, EEPROM or flash memory.[14] EEPROM versions may be in-system programmable (typically via JTAG). The difference between FPGAs and CPLDs is that FPGAs are internally based on look-up tables (LUTs), whereas CPLDs form the logic functions with a sea of gates (e.g. sum of products). CPLDs are meant for simpler designs, while FPGAs are meant for more complex designs.
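A LUT can implement any n-input Boolean function simply by storing the function's truth table and indexing it with the input bits. A minimal Python illustration (the "synthesis" step shown is a drastic simplification of what real FPGA toolchains do):

```python
# Sketch of an FPGA look-up table (LUT): an n-input function is just a
# 2**n-entry truth table indexed by the input bits.

def make_lut(func, n):
    """'Synthesize' func into a truth table for an n-input LUT."""
    table = []
    for i in range(2 ** n):
        bits = [(i >> k) & 1 for k in range(n)]
        table.append(func(*bits))
    return table

def lut_eval(table, bits):
    """The hardware just indexes the stored table with the input bits."""
    index = sum(bit << k for k, bit in enumerate(bits))
    return table[index]

# Turn a 3-input majority function into a LUT, then evaluate it:
majority = make_lut(lambda a, b, c: int(a + b + c >= 2), 3)
print(lut_eval(majority, [1, 1, 0]))  # 1
```

This is why a LUT-based FPGA is indifferent to how complex the Boolean expression is, as long as it fits the LUT's input count; a CPLD's sum-of-products planes, by contrast, are limited in the number of product terms per output.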
In general, CPLDs are a good choice for wide combinational logic applications, whereas FPGAs are more suitable for large state machines such as microprocessors. Using the same technology as EPROMs, EPLDs have a quartz window in the package that allows them to be erased by exposure to UV light.[15][16] Using the same technology as EEPROMs, EEPLDs can be erased electrically.[15][16] An erasable programmable logic device (EPLD) is an integrated circuit that comprises an array of PLDs that do not come pre-connected; the connections are programmed electrically by the user. Most GAL and FPGA devices are examples of EPLDs.[citation needed] These are microprocessor circuits that contain some fixed functions and other functions that can be altered by code running on the processor. Designing self-altering systems requires that engineers learn new methods and that new software tools be developed. PLDs are now being sold that contain a microprocessor with a fixed function (the so-called core) surrounded by programmable logic. These devices let designers concentrate on adding new features to designs without having to worry about making the microprocessor work. Also, the fixed-function microprocessor takes less space on the chip than the same processor implemented in the programmable gate array, leaving more space for the programmable gate array to contain the designer's specialized circuits. A PLD is a combination of a logic device and a memory device. The memory is used to store the pattern that was given to the chip during programming. Most of the methods for storing data in an integrated circuit have been adapted for use in PLDs. These include: Silicon antifuses are connections that are made by applying a voltage across a modified area of silicon inside the chip. They are called antifuses because they work in the opposite way to normal fuses, which begin life as connections until they are broken by an electric current.
SRAM, or static RAM, is a volatile type of memory, meaning that its contents are lost each time the power is switched off. SRAM-based PLDs therefore have to be programmed every time the circuit is switched on. This is usually done automatically by another part of the circuit. An EPROM memory cell is a MOSFET (metal-oxide-semiconductor field-effect transistor, or MOS transistor) that can be switched on by trapping an electric charge permanently on its gate electrode. This is done by a PAL programmer. The charge remains for many years and can only be removed by exposing the chip to strong ultraviolet light in a device called an EPROM eraser. Flash memory is non-volatile, retaining its contents even when the power is switched off. It is stored on floating-gate MOSFET memory cells, and can be erased and reprogrammed as required. This makes it useful in PLDs that may be reprogrammed frequently, such as PLDs used in prototypes. Flash memory is a kind of EEPROM that holds information using trapped electric charges, similar to EPROM. Consequently, flash memory can hold information for years, but possibly not as many years as EPROM. As of 2005, most CPLDs are electrically programmable and erasable, and non-volatile. This is because they are too small to justify the inconvenience of programming internal SRAM cells every time they start up, and EPROM cells are more expensive due to their ceramic package with a quartz window. Many PAL programming devices accept input in a standard file format, commonly referred to as 'JEDEC files'. These files are produced by logic compilers, which are analogous to software compilers. The languages used as source code for logic compilers are called hardware description languages, or HDLs.[1] PALASM, ABEL and CUPL are frequently used for low-complexity devices, while Verilog and VHDL are popular higher-level description languages for more complex devices. The more limited ABEL is often used for historical reasons, but for new designs, VHDL is more popular, even for low-complexity designs.
For modern PLD programming languages, design flows, and tools, see FPGA and reconfigurable computing. A device programmer is used to transfer the Boolean logic pattern into the programmable device. In the early days of programmable logic, every PLD manufacturer also produced a specialized device programmer for its family of logic devices. Later, universal device programmers came onto the market that supported several logic device families from different manufacturers. Today's device programmers usually can program common PLDs (mostly PAL/GAL equivalents) from all existing manufacturers. Common file formats used to store the Boolean logic pattern (fuses) are JEDEC, Altera POF (programmable object file), or Xilinx BITstream.[17]
https://en.wikipedia.org/wiki/Programmable_logic_device
A race condition or race hazard is the condition of an electronics, software, or other system where the system's substantive behavior is dependent on the sequence or timing of other uncontrollable events, leading to unexpected or inconsistent results. It becomes a bug when one or more of the possible behaviors is undesirable. The term race condition was already in use by 1954, for example in David A. Huffman's doctoral thesis "The synthesis of sequential switching circuits".[1] Race conditions can occur especially in logic circuits or in multithreaded or distributed software programs. Using mutual exclusion can prevent race conditions in distributed software systems. A typical example of a race condition may occur when a logic gate combines signals that have traveled along different paths from the same source. The inputs to the gate can change at slightly different times in response to a change in the source signal. The output may, for a brief period, change to an unwanted state before settling back to the designed state. Certain systems can tolerate such glitches, but if this output functions as a clock signal for further systems that contain memory, for example, the system can rapidly depart from its designed behaviour (in effect, the temporary glitch becomes a permanent glitch). Consider, for example, a two-input AND gate fed with the following logic: output = A ∧ ¬A. A logic signal A on one input and its negation, ¬A (the ¬ is a Boolean negation), on the other input should in theory never produce a true output: A ∧ ¬A ≠ 1.
If, however, changes in the value of A take longer to propagate to the second input than to the first, then when A changes from false to true a brief period will ensue during which both inputs are true, and so the gate's output will also be true.[2] A practical example of a race condition can occur when logic circuitry is used to detect certain outputs of a counter. If all the bits of the counter do not change exactly simultaneously, there will be intermediate patterns that can trigger false matches. A critical race condition occurs when the order in which internal variables are changed determines the eventual state that the state machine will end up in. A non-critical race condition occurs when the order in which internal variables are changed does not determine the eventual state that the state machine will end up in. A static race condition occurs when a signal and its complement are combined. A dynamic race condition occurs when it results in multiple transitions when only one is intended. Dynamic races are due to interaction between gates and can be eliminated by using no more than two levels of gating. An essential race condition occurs when an input has two transitions in less than the total feedback propagation time. Essential races are sometimes cured using inductive delay-line elements to effectively increase the time duration of an input signal. Design techniques such as Karnaugh maps encourage designers to recognize and eliminate race conditions before they cause problems. Often logic redundancy can be added to eliminate some kinds of races. As well as these problems, some logic elements can enter metastable states, which create further problems for circuit designers. A race condition can arise in software when a computer program has multiple code paths that are executing at the same time.
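The A ∧ ¬A glitch can be reproduced with a toy gate-delay simulation; the one-tick inverter delay below is an assumed figure purely for illustration:

```python
# Toy simulation of the A AND (NOT A) glitch caused by propagation delay.
# The inverter output lags its input by one time step (assumed delay),
# so for one tick after A rises, both AND-gate inputs are true.

def simulate(a_waveform, inverter_delay=1):
    outputs = []
    for t, a in enumerate(a_waveform):
        # NOT A, as seen after the inverter's propagation delay:
        delayed = a_waveform[t - inverter_delay] if t >= inverter_delay else 0
        not_a = 1 - delayed
        outputs.append(a & not_a)   # the AND gate
    return outputs

# A rises from 0 to 1 at t=2 and stays high:
print(simulate([0, 0, 1, 1, 1]))  # [0, 0, 1, 0, 0]  <- one-tick glitch at t=2
```

The brief true pulse at t=2 is exactly the hazard the text describes: harmless to some circuits, but dangerous if it clocks downstream memory elements.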
If the multiple code paths take a different amount of time than expected, they can finish in a different order than expected, which can cause software bugs due to unanticipated behavior. A race can also occur between two programs, resulting in security issues. Critical race conditions cause invalid execution and software bugs. Critical race conditions often happen when the processes or threads depend on some shared state. Operations upon shared state are done in critical sections that must be mutually exclusive. Failure to obey this rule can corrupt the shared state. A data race is a type of race condition. Data races are important parts of various formal memory models. The memory model defined in the C11 and C++11 standards specifies that a C or C++ program containing a data race has undefined behavior.[3][4] A race condition can be difficult to reproduce and debug because the end result is nondeterministic and depends on the relative timing between interfering threads. Problems of this nature can therefore disappear when running in debug mode, adding extra logging, or attaching a debugger. A bug that disappears like this during debugging attempts is often referred to as a "Heisenbug". It is therefore better to avoid race conditions by careful software design. Assume that two threads each increment the value of a global integer variable by 1. Ideally, the following sequence of operations would take place: the first thread reads the value (0), increments it, and writes back 1; the second thread then reads the value (1), increments it, and writes back 2. In that case, the final value is 2, as expected. However, if the two threads run simultaneously without locking or synchronization (via semaphores), the outcome could be wrong: both threads read the value 0 before either has written its result, each increments its private copy to 1, and each writes back 1. In this case, the final value is 1 instead of the expected 2. This occurs because here the increment operations are not mutually exclusive. Mutually exclusive operations are those that cannot be interrupted while accessing some resource such as a memory location.
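The lost-update interleaving just described can be made concrete in Python. To keep the demonstration deterministic, the racy interleaving is forced by hand rather than left to the scheduler (a real race would lose the update only occasionally, which is exactly what makes it a Heisenbug); the lock-based version then shows the mutual-exclusion fix:

```python
# Sketch of the lost-update race: each "thread" performs a non-atomic
# read-increment-write on a shared counter.

import threading

def unsafe_demo():
    """Both threads read before either writes: one increment is lost."""
    counter = 0
    r1 = counter        # thread 1 reads 0
    r2 = counter        # thread 2 reads 0 (before thread 1 writes!)
    counter = r1 + 1    # thread 1 writes 1
    counter = r2 + 1    # thread 2 overwrites with 1 -> update lost
    return counter

def safe_demo():
    """A lock makes the read-increment-write a mutually exclusive unit."""
    counter = 0
    lock = threading.Lock()

    def increment():
        nonlocal counter
        with lock:          # critical section
            counter += 1    # read-modify-write cannot interleave here

    threads = [threading.Thread(target=increment) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

print(unsafe_demo())  # 1  (the race outcome described above)
print(safe_demo())    # 2  (the intended result)
```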
Not everyone regards data races as a subset of race conditions.[5] The precise definition of data race is specific to the formal concurrency model being used, but typically it refers to a situation where a memory operation in one thread could potentially attempt to access a memory location at the same time that a memory operation in another thread is writing to that memory location, in a context where this is dangerous. This implies that a data race is different from a race condition, as it is possible to have nondeterminism due to timing even in a program without data races, for example, in a program in which all memory accesses use only atomic operations. This can be dangerous because on many platforms, if two threads write to a memory location at the same time, it may be possible for the memory location to end up holding a value that is some arbitrary and meaningless combination of the bits representing the values that each thread was attempting to write; this could result in memory corruption if the resulting value is one that neither thread attempted to write (sometimes this is called a 'torn write'). Similarly, if one thread reads from a location while another thread is writing to it, it may be possible for the read to return a value that is some arbitrary and meaningless combination of the bits representing the value that the memory location held before the write and of the bits representing the value being written. On many platforms, special memory operations are provided for simultaneous access; in such cases, typically simultaneous access using these special operations is safe, but simultaneous access using other memory operations is dangerous. Sometimes such special operations (which are safe for simultaneous access) are called atomic or synchronization operations, whereas the ordinary operations (which are unsafe for simultaneous access) are called data operations.
This is probably why the term is data race; on many platforms, where there is a race condition involving only synchronization operations, such a race may be nondeterministic but otherwise safe, but a data race could lead to memory corruption or undefined behavior. The precise definition of data race differs across formal concurrency models. This matters because concurrent behavior is often non-intuitive and so formal reasoning is sometimes applied. The C++ standard, in draft N4296 (2014-11-19), defines data race as follows in section 1.10.23 (page 14):[6] Two actions are potentially concurrent if they are performed by different threads, or they are unsequenced and at least one is performed by a signal handler. The execution of a program contains a data race if it contains two potentially concurrent conflicting actions, at least one of which is not atomic, and neither happens before the other, except for the special case for signal handlers described below [omitted]. Any such data race results in undefined behavior. The parts of this definition relating to signal handlers are idiosyncratic to C++ and are not typical of definitions of data race. The paper Detecting Data Races on Weak Memory Systems[7] provides a different definition: "two memory operations conflict if they access the same location and at least one of them is a write operation ... Two memory operations, x and y, in a sequentially consistent execution form a race 〈x,y〉, iff x and y conflict, and they are not ordered by the hb1 relation of the execution. The race 〈x,y〉 is a data race iff at least one of x or y is a data operation." Here we have two memory operations accessing the same location, one of which is a write. The hb1 relation is defined elsewhere in the paper, and is an example of a typical "happens-before" relation; intuitively, if we can prove that we are in a situation where one memory operation X is guaranteed to be executed to completion before another memory operation Y begins, then we say that "X happens-before Y".
If neither "X happens-before Y" nor "Y happens-before X", then we say that X and Y are "not ordered by the hb1 relation". So, the clause "... and they are not ordered by the hb1 relation of the execution" can be intuitively translated as "... and X and Y are potentially concurrent". The paper considers dangerous only those situations in which at least one of the memory operations is a "data operation"; in other parts of the paper, it also defines a class of "synchronization operations" which are safe for potentially simultaneous use, in contrast to "data operations". The Java Language Specification[8] provides a different definition: Two accesses to (reads of or writes to) the same variable are said to be conflicting if at least one of the accesses is a write ... When a program contains two conflicting accesses (§17.4.1) that are not ordered by a happens-before relationship, it is said to contain a data race ... a data race cannot cause incorrect behavior such as returning the wrong length for an array. A critical difference between the C++ approach and the Java approach is that in C++, a data race is undefined behavior, whereas in Java, a data race merely affects "inter-thread actions".[8] This means that in C++, an attempt to execute a program containing a data race could (while still adhering to the spec) crash or could exhibit insecure or bizarre behavior, whereas in Java, an attempt to execute a program containing a data race may produce undesired concurrency behavior but is otherwise (assuming that the implementation adheres to the spec) safe. An important facet of data races is that in some contexts, a program that is free of data races is guaranteed to execute in a sequentially consistent manner, greatly easing reasoning about the concurrent behavior of the program. Formal memory models that provide such a guarantee are said to exhibit an "SC for DRF" (Sequential Consistency for Data Race Freedom) property.
This approach has been said to have achieved recent consensus (presumably compared to approaches which guarantee sequential consistency in all cases, or approaches which do not guarantee it at all).[9] For example, in Java, this guarantee is directly specified:[8] A program is correctly synchronized if and only if all sequentially consistent executions are free of data races. If a program is correctly synchronized, then all executions of the program will appear to be sequentially consistent (§17.4.3). This is an extremely strong guarantee for programmers. Programmers do not need to reason about reorderings to determine that their code contains data races. Therefore they do not need to reason about reorderings when determining whether their code is correctly synchronized. Once the determination that the code is correctly synchronized is made, the programmer does not need to worry that reorderings will affect his or her code. A program must be correctly synchronized to avoid the kinds of counterintuitive behaviors that can be observed when code is reordered. The use of correct synchronization does not ensure that the overall behavior of a program is correct. However, its use does allow a programmer to reason about the possible behaviors of a program in a simple way; the behavior of a correctly synchronized program is much less dependent on possible reorderings. Without correct synchronization, very strange, confusing and counterintuitive behaviors are possible. 
By contrast, a draft C++ specification does not directly require an SC for DRF property, but merely observes that there exists a theorem providing it: [Note: It can be shown that programs that correctly use mutexes and memory_order_seq_cst operations to prevent all data races and use no other synchronization operations behave as if the operations executed by their constituent threads were simply interleaved, with each value computation of an object being taken from the last side effect on that object in that interleaving. This is normally referred to as "sequential consistency". However, this applies only to data-race-free programs, and data-race-free programs cannot observe most program transformations that do not change single-threaded program semantics. In fact, most single-threaded program transformations continue to be allowed, since any program that behaves differently as a result must perform an undefined operation. — end note] Note that the C++ draft specification admits the possibility of programs that are valid but use synchronization operations with a memory_order other than memory_order_seq_cst, in which case the result may be a program which is correct but for which no guarantee of sequential consistency is provided. In other words, in C++, some correct programs are not sequentially consistent. This approach is thought to give C++ programmers the freedom to choose faster program execution at the cost of giving up ease of reasoning about their program.[9] There are various theorems, often provided in the form of memory models, that provide SC for DRF guarantees given various contexts. The premises of these theorems typically place constraints upon both the memory model (and therefore upon the implementation) and upon the programmer; that is to say, typically there are programs which do not meet the premises of the theorem and which could not be guaranteed to execute in a sequentially consistent manner.
The DRF1 memory model[10] provides SC for DRF and allows the optimizations of the WO (weak ordering), RCsc (release consistency with sequentially consistent special operations), VAX memory model, and data-race-free-0 memory models. The PLpc memory model[11] provides SC for DRF and allows the optimizations of the TSO (total store order), PSO, PC (processor consistency), and RCpc (release consistency with processor consistency special operations) models. DRFrlx[12] provides a sketch of an SC for DRF theorem in the presence of relaxed atomics. Many software race conditions have associated computer security implications. A race condition allows an attacker with access to a shared resource to cause other actors that utilize that resource to malfunction, resulting in effects including denial of service[13] and privilege escalation.[14][15] A specific kind of race condition involves checking for a predicate (e.g. for authentication), then acting on the predicate, while the state can change between the time-of-check and the time-of-use. When this kind of bug exists in security-sensitive code, a security vulnerability called a time-of-check-to-time-of-use (TOCTTOU) bug is created. Race conditions are also intentionally used to create hardware random number generators and physically unclonable functions.[16][citation needed] PUFs can be created by designing circuit topologies with identical paths to a node and relying on manufacturing variations to randomly determine which paths will complete first. By measuring each manufactured circuit's specific set of race condition outcomes, a profile can be collected for each circuit and kept secret in order to later verify a circuit's identity. Two or more programs may collide in their attempts to modify or access a file system, which can result in data corruption or privilege escalation.[14] File locking provides a commonly used solution.
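A classic TOCTTOU pattern in file handling checks a property of a path and then acts on it, leaving a window in which the file can be swapped out from under the program. A rough Python sketch (an illustration, not hardened security code; for privileged code, collapsing the window usually also requires OS-level measures such as opening with restrictive flags):

```python
# Sketch of a TOCTTOU (time-of-check to time-of-use) window, and the
# usual remedy: skip the separate check and handle failure at use time.

import os
import tempfile

def racy_read(path):
    """Check, THEN use: the file can change between the two steps."""
    if os.path.exists(path):          # time of check
        # ... an attacker could replace or remove `path` right here ...
        try:
            with open(path) as f:     # time of use
                return f.read()
        except FileNotFoundError:     # the window was exploited
            return None
    return None

def safer_read(path):
    """Single step: attempt the open and handle failure (EAFP style)."""
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError:
        return None

# Demonstration with a temporary file:
fd, name = tempfile.mkstemp()
os.write(fd, b"hello")
os.close(fd)
print(safer_read(name))   # hello
os.unlink(name)
print(safer_read(name))   # None (no separate check to be fooled)
```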
A more cumbersome remedy involves organizing the system in such a way that one unique process (running a daemon or the like) has exclusive access to the file, and all other processes that need to access the data in that file do so only via interprocess communication with that one process. This requires synchronization at the process level. A different form of race condition exists in file systems where unrelated programs may affect each other by suddenly using up available resources such as disk space, memory space, or processor cycles. Software not carefully designed to anticipate and handle this race situation may then become unpredictable. Such a risk may be overlooked for a long time in a system that seems very reliable. But eventually enough data may accumulate or enough other software may be added to critically destabilize many parts of a system. An example of this occurred with the near loss of the Mars rover "Spirit" not long after landing, due to deleted file entries causing the file system library to consume all available memory space.[17] A solution is for software to request and reserve all the resources it will need before beginning a task; if this request fails, the task is postponed, avoiding the many points where failure could have occurred. Alternatively, each of those points can be equipped with error handling, or the success of the entire task can be verified afterwards, before continuing. A more common approach is to simply verify that enough system resources are available before starting a task; however, this may not be adequate because in complex systems the actions of other running programs can be unpredictable. In networking, consider a distributed chat network like IRC, where a user who starts a channel automatically acquires channel-operator privileges.
If two users on different servers, on different ends of the same network, try to start the same-named channel at the same time, each user's respective server will grant channel-operator privileges to each user, since neither server will yet have received the other server's signal that it has allocated that channel. (This problem has been largely solved by various IRC server implementations.) In this case of a race condition, the concept of the "shared resource" covers the state of the network (what channels exist, as well as what users started them and therefore have what privileges), which each server can freely change as long as it signals the other servers on the network about the changes so that they can update their conception of the state of the network. However, the latency across the network makes possible the kind of race condition described. In this case, heading off race conditions by imposing a form of control over access to the shared resource—say, appointing one server to control who holds what privileges—would mean turning the distributed network into a centralized one (at least for that one part of the network operation). Race conditions can also exist when a computer program is written with non-blocking sockets, in which case the performance of the program can be dependent on the speed of the network link. Software flaws in life-critical systems can be disastrous. Race conditions were among the flaws in the Therac-25 radiation therapy machine, which led to the death of at least three patients and injuries to several more.[18] Another example is the energy management system provided by GE Energy and used by Ohio-based FirstEnergy Corp (among other power facilities). A race condition existed in the alarm subsystem; when three sagging power lines were tripped simultaneously, the condition prevented alerts from being raised to the monitoring technicians, delaying their awareness of the problem.
This software flaw eventually led to the North American blackout of 2003.[19] GE Energy later developed a software patch to correct the previously undiscovered error. Many software tools exist to help detect race conditions in software. They can be largely categorized into two groups: static analysis tools and dynamic analysis tools. Thread Safety Analysis is a static analysis tool for annotation-based intra-procedural static analysis, originally implemented as a branch of gcc and now reimplemented in Clang, supporting PThreads.[20] Various dynamic analysis tools exist as well, and several benchmarks have been designed to evaluate the effectiveness of data race detection tools. Race conditions are a common concern in human–computer interaction design and software usability. Intuitively designed human–machine interfaces require that the user receive feedback on their actions that aligns with their expectations, but system-generated actions can interrupt a user's current action or workflow in unexpected ways, such as inadvertently answering or rejecting an incoming call on a smartphone while performing a different task. In UK railway signalling, a race condition would arise in the carrying out of Rule 55. According to this rule, if a train was stopped on a running line by a signal, the locomotive fireman would walk to the signal box in order to remind the signalman that the train was present. In at least one case, at Winwick in 1934, an accident occurred because the signalman accepted another train before the fireman arrived. Modern signalling practice removes the race condition by making it possible for the driver to contact the signal box instantaneously by radio. Race conditions are not confined to digital systems; neuroscience is demonstrating that race conditions can occur in mammal brains as well.[25][26]
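The mutual-exclusion remedy that recurs throughout this article can be made concrete with a shared counter. This is our own illustrative sketch: without the lock, the read-modify-write inside `increment` could interleave between threads and lose updates; holding the lock makes each increment atomic with respect to the others.

```python
import threading

class Counter:
    """A shared resource protected by a mutex."""
    def __init__(self):
        self.count = 0
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:          # critical section: one thread at a time
            self.count += 1

def hammer(counter, n_threads=8, n_iter=10_000):
    """Increment the counter from many threads concurrently."""
    def worker():
        for _ in range(n_iter):
            counter.increment()
    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter.count          # with the lock, always n_threads * n_iter
```

Dynamic race detectors of the kind surveyed above work by observing such unsynchronized accesses at runtime; with the lock in place, every access is ordered and no race is reported.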
https://en.wikipedia.org/wiki/Race_hazard
Superconducting logic refers to a class of logic circuits or logic gates that use the unique properties of superconductors, including zero-resistance wires, ultrafast Josephson junction switches, and quantization of magnetic flux (fluxoid). As of 2023, superconducting computing is a form of cryogenic computing, as superconductive electronic circuits require cooling to cryogenic temperatures for operation, typically below 10 kelvin. Often superconducting computing is applied to quantum computing, with an important application known as superconducting quantum computing. Superconducting digital logic circuits use single flux quanta (SFQ), also known as magnetic flux quanta, to encode, process, and transport data. SFQ circuits are made up of active Josephson junctions and passive elements such as inductors, resistors, transformers, and transmission lines. Whereas voltages and capacitors are important in semiconductor logic circuits such as CMOS, currents and inductors are most important in SFQ logic circuits. Power can be supplied by either direct current or alternating current, depending on the SFQ logic family. The primary advantage of superconducting computing is improved power efficiency over conventional CMOS technology. Much of the power consumed, and heat dissipated, by conventional processors comes from moving information between logic elements rather than from the actual logic operations. Because superconductors have zero electrical resistance, little energy is required to move bits within the processor.
This is expected to result in power consumption savings of a factor of 500 for an exascale computer.[1] For comparison, in 2014 a 1 exaFLOPS computer built in CMOS logic was estimated to consume some 500 megawatts of electrical power.[2] Superconducting logic can be an attractive option for ultrafast CPUs, where switching times are measured in picoseconds and operating frequencies approach 770 GHz.[3][4] However, since transferring information between the processor and the outside world still dissipates energy, superconducting computing was seen as well suited for computation-intensive tasks where the data largely stays in the cryogenic environment, rather than big data applications where large amounts of information are streamed from outside the processor.[1] As superconducting logic supports standard digital machine architectures and algorithms, the existing knowledge base for CMOS computing will still be useful in constructing superconducting computers. Moreover, given the reduced heat dissipation, it may enable innovations such as three-dimensional stacking of components. However, because such circuits require inductors, it is harder to reduce their size. As of 2014, devices using niobium as the superconducting material and operating at 4 K were considered state of the art. Important challenges for the field were reliable cryogenic memory, as well as moving from research on individual components to large-scale integration.[1] Josephson junction count is a measure of superconducting circuit or device complexity, similar to the transistor count used for semiconductor integrated circuits. Superconducting computing research has been pursued by the U.S. National Security Agency since the mid-1950s. However, progress could not keep up with the increasing performance of standard CMOS technology.
As of 2016 there are no commercial superconducting computers, although research and development continues.[5] Research in the mid-1950s to early 1960s focused on the cryotron invented by Dudley Allen Buck, but the liquid-helium temperatures and the slow switching time between superconducting and resistive states caused this research to be abandoned. In 1962 Brian Josephson established the theory behind the Josephson effect, and within a few years IBM had fabricated the first Josephson junction. IBM invested heavily in this technology from the mid-1960s to 1983.[6] By the mid-1970s IBM had constructed a superconducting quantum interference device using these junctions, mainly working with lead-based junctions and later switching to lead/niobium junctions. In 1980 IBM announced the Josephson computer revolution on the cover of the May issue of Scientific American. One of the reasons justifying such a large-scale investment was the expectation that Moore's law, enunciated in 1965, would slow down and reach a plateau "soon". However, Moore's law kept its validity, while the costs of improving superconducting devices were borne essentially by IBM alone; IBM, however big, could not compete with the whole semiconductor world, which had nearly limitless resources.[7] Thus, the program was shut down in 1983 because the technology was not considered competitive with standard semiconductor technology. Founded by researchers from this IBM program, HYPRES developed and commercialized superconductor integrated circuits from its commercial superconductor foundry in Elmsford, New York.[8] The Japanese Ministry of International Trade and Industry funded a superconducting research effort from 1981 to 1989 that produced the ETL-JC1, a 4-bit machine with 1,000 bits of RAM.[5] In 1983, Bell Labs created niobium/aluminum-oxide Josephson junctions that were more reliable and easier to fabricate.
In 1985, the rapid single flux quantum (RSFQ) logic scheme, which had improved speed and energy efficiency, was developed by researchers at Moscow State University. These advances led to the United States' Hybrid Technology Multi-Threaded project, started in 1997, which sought to beat conventional semiconductors to the petaflop computing scale. The project was abandoned in 2000, however, and the first conventional petaflop computer was constructed in 2008. After 2000, attention turned to superconducting quantum computing. The 2011 introduction of reciprocal quantum logic by Quentin Herr of Northrop Grumman, as well as energy-efficient rapid single flux quantum by HYPRES, were seen as major advances.[5] The push for exascale computing beginning in the mid-2010s, as codified in the National Strategic Computing Initiative, was seen as an opening for superconducting computing research, since exascale computers based on CMOS technology would be expected to require impractical amounts of electrical power. The Intelligence Advanced Research Projects Activity, formed in 2006, currently coordinates the U.S. Intelligence Community's research and development efforts in superconducting computing.[5] Despite the names of many of these techniques containing the word "quantum", they are not necessarily platforms for quantum computing. Rapid single flux quantum (RSFQ) superconducting logic was developed in the Soviet Union in the 1980s.[9] Information is carried by the presence or absence of a single flux quantum (SFQ). The Josephson junctions are critically damped, typically by addition of an appropriately sized shunt resistor, to make them switch without hysteresis. Clocking signals are provided to logic gates by separately distributed SFQ voltage pulses. Power is provided by bias currents distributed using resistors that can consume more than 10 times as much static power as the dynamic power used for computation.
The simplicity of using resistors to distribute currents can be an advantage in small circuits, and RSFQ continues to be used for many applications where energy efficiency is not of critical importance. RSFQ has been used to build specialized circuits for high-throughput and numerically intensive applications, such as communications receivers and digital signal processing. Josephson junctions in RSFQ circuits are biased in parallel. Therefore, the total bias current grows linearly with the Josephson junction count. This currently presents the major limitation on the integration scale of RSFQ circuits, which does not exceed a few tens of thousands of Josephson junctions per circuit. Reducing the resistor (R) used to distribute currents in traditional RSFQ circuits and adding an inductor (L) in series can reduce the static power dissipation and improve energy efficiency.[10][11] Reducing the bias voltage in traditional RSFQ circuits can likewise reduce the static power dissipation and improve energy efficiency.[12][13] Efficient rapid single flux quantum (ERSFQ) logic was developed to eliminate the static power losses of RSFQ by replacing bias resistors with sets of inductors and current-limiting Josephson junctions.[14][15] Efficient single flux quantum (eSFQ) logic is also powered by direct current, but differs from ERSFQ in the size of the bias-current-limiting inductor and in how the limiting Josephson junctions are regulated.[16] Reciprocal quantum logic (RQL) was developed to fix some of the problems of RSFQ logic. RQL uses reciprocal pairs of SFQ pulses to encode a logical '1'. Both power and clock are provided by multi-phase alternating current signals.
RQL gates do not use resistors to distribute power and thus dissipate negligible static power.[17] Major RQL gates include AndOr, AnotB, and Set/Reset (with nondestructive readout), which together form a universal logic set and provide memory capabilities.[18] Adiabatic quantum flux parametron (AQFP) logic was developed for energy-efficient operation and is powered by alternating current.[19][20] On January 13, 2021, it was announced that a 2.5 GHz prototype AQFP-based processor called MANA (Monolithic Adiabatic iNtegration Architecture) had achieved an energy efficiency 80 times that of traditional semiconductor processors, even accounting for cooling.[21] Superconducting quantum computing is a promising implementation of quantum information technology that involves nanofabricated superconducting electrodes coupled through Josephson junctions. As in a superconducting electrode, the phase and the charge are conjugate variables. There exist three families of superconducting qubits, depending on whether the charge, the phase, or neither of the two is a good quantum number. These are respectively termed charge qubits, flux qubits, and hybrid qubits.
https://en.wikipedia.org/wiki/Superconducting_computing
This is a list of notable mathematical conjectures. The following conjectures remain open. The (incomplete) column "cites" lists the number of results of a Google Scholar search for the term, in double quotes, as of September 2022. Conjecture terminology may persist: even after being proved, results are often still referred to as conjectures under their anachronistic names. The conjectures in the following list were not necessarily generally accepted as true before being disproved. In mathematics, ideas are supposedly not accepted as fact until they have been rigorously proved. However, there have been some ideas that were fairly accepted in the past but which were subsequently shown to be false. The following list is meant to serve as a repository for compiling a list of such ideas.
https://en.wikipedia.org/wiki/List_of_conjectures
There are many longstanding unsolved problems in mathematics for which a solution has not yet been found. The notable unsolved problems in statistics are generally of a different flavor; according to John Tukey,[1] "difficulties in identifying problems have delayed statistics far more than difficulties in solving problems." A list of "one or two open problems" (in fact 22 of them) was given by David Cox.[2]
https://en.wikipedia.org/wiki/List_of_unsolved_problems_in_statistics
The following is a list of notable unsolved problems grouped into broad areas of physics.[1] Some of the major unsolved problems in physics are theoretical, meaning that existing theories seem incapable of explaining a certain observed phenomenon or experimental result. The others are experimental, meaning that there is a difficulty in creating an experiment to test a proposed theory or investigate a phenomenon in greater detail. There are still some questions beyond the Standard Model of physics, such as the strong CP problem, neutrino mass, matter–antimatter asymmetry, and the nature of dark matter and dark energy.[2][3] Another problem lies within the mathematical framework of the Standard Model itself: the Standard Model is inconsistent with general relativity, to the point that one or both theories break down under certain conditions (for example within known spacetime singularities like the Big Bang and the centres of black holes beyond the event horizon).[4] Origin of cosmic magnetic fields: observations reveal that magnetic fields are present throughout the universe, from galaxies to galaxy clusters. However, the mechanisms that generated these large-scale cosmic magnetic fields remain unclear. Understanding their origin is a significant unsolved problem in astrophysics.[65]
https://en.wikipedia.org/wiki/List_of_unsolved_problems_in_physics
List of unsolved problems may refer to several notable conjectures or open problems in various academic fields:
https://en.wikipedia.org/wiki/Lists_of_unsolved_problems
Open Problems in Mathematics is a book, edited by John Forbes Nash Jr. and Michael Th. Rassias, published in 2016 by Springer (ISBN 978-3-319-32160-8). The book consists of seventeen expository articles, written by outstanding researchers, on some of the central open problems in the field of mathematics. The book also features an introduction on John Nash: Theorems and Ideas, by Mikhail Leonidovich Gromov. According to the editors' preface, each article is devoted to one open problem or a "constellation of related problems".[1][2][3][4][5] Nash and Rassias write in the preface of the book that the open problems presented "were chosen for a variety of reasons. Some were chosen for their undoubtable importance and applicability, others because they constitute intriguing curiosities which remain unexplained mysteries on the basis of current knowledge and techniques, and some for more emotional reasons. Additionally, the attribute of a problem having a somewhat vintage flavor was also influential" in their decision process.[6]
https://en.wikipedia.org/wiki/Open_Problems_in_Mathematics
The Great Mathematical Problems[note 1] is a 2013 book by Ian Stewart. It discusses fourteen[1] mathematical problems and is written for laypersons.[2] The book has received positive reviews. Stewart describes important open or recently closed problems in mathematics. One reviewer wrote that "Ian Stewart belongs to a very small, very exclusive club of popular science and mathematics writers who are worth reading today." Kirkus Reviews said Stewart "succeed[ed] in illuminating many but not all of some very difficult ideas", and that the book "will enchant math enthusiasts as well as general readers who pay close attention".[1] Robert Schaefer of the New York Journal of Books described The Great Mathematical Problems as "both entertaining and accessible", although he later noted that "in the end chapters ... explanations of the conjectures get more complicated".[3] Fred Bortz gave the book a positive review in The Dallas Morning News, commenting that "few authors are better at understanding their readers than the prolific mathematics writer Ian Stewart" and saying that "anyone who has always loved math for its own sake or for the way it provides new perspectives on important real-world phenomena will find hours of brain-teasing and mind-challenging delight in the British professor's survey of recently answered or still open mathematical questions".[4]
https://en.wikipedia.org/wiki/The_Great_Mathematical_Problems
The Scottish Book (Polish: Księga Szkocka) was a thick notebook used by mathematicians of the Lwów School of Mathematics in Poland for jotting down problems meant to be solved. The notebook was named after the "Scottish Café" where it was kept. Originally, the mathematicians who gathered at the café would write down the problems and equations directly on the café's marble table tops, but these would be erased at the end of each day, and so the record of the preceding discussions would be lost. The idea for the book was most likely originally suggested by Stefan Banach's wife, Łucja Banach. Stefan or Łucja Banach purchased a large notebook and left it with the proprietor of the café.[1][2] The Scottish Café (Polish: Kawiarnia Szkocka) was the café in Lwów (now Lviv, Ukraine) where, in the 1930s and 1940s, mathematicians from the Lwów School collaboratively discussed research problems, particularly in functional analysis and topology. Stanisław Ulam recounts that the tables of the café had marble tops, so they could write in pencil, directly on the table, during their discussions. To keep the results from being lost, and after becoming annoyed with their writing directly on the table tops, Stefan Banach's wife provided the mathematicians with a large notebook, which was used for writing the problems and answers and eventually became known as the Scottish Book. The book, a collection of solved, unsolved, and even probably unsolvable problems, could be borrowed by any of the guests of the café. Solving any of the problems was rewarded with prizes, with the most difficult and challenging problems having expensive prizes (during the Great Depression and on the eve of World War II), such as a bottle of fine brandy.[3] For problem 153, which was later recognized as being closely related to Stefan Banach's "basis problem", Stanisław Mazur offered the prize of a live goose.
This problem was solved only in 1972 by Per Enflo, who was presented with the live goose in a ceremony broadcast throughout Poland.[4] The café building used to house the Universal Bank at the street address of 27 Taras Shevchenko Prospekt. The original café was renovated in May 2014 and contains a copy of the Scottish Book. A total of 193 problems were written down in the book.[1] Stanisław Mazur contributed a total of 43 problems, 24 of them as a single author and 19 together with Stefan Banach.[5] Banach himself wrote 14, plus another 11 with Stanisław Ulam and Mazur. Ulam wrote 40 problems and an additional 15 with others.[1] During the Soviet occupation of Lwów, several Russian mathematicians visited the city and also added problems to the book.[2] Hugo Steinhaus contributed the last problem on 31 May 1941, shortly before the German attack on the Soviet Union;[6][7] this problem involved a question about the likely distribution of matches within a matchbox, a problem motivated by Banach's habit of chain-smoking cigarettes.[1] After World War II, an English translation annotated by Ulam was published by Los Alamos National Laboratory in 1957.[8] Also after the war, Steinhaus at the University of Wrocław revived the tradition of the Scottish Book by initiating The New Scottish Book, kept from 1945 to 1958. The tradition of the Scottish Book continues to inspire not only mathematicians but also educators in other fields. Piotr Kowzan proposed a "goose method" as a pedagogical tool for marking open problems and encouraging future research. Inspired by the eccentric rewards in the Scottish Book, this approach aims to foster curiosity and knowledge-building across generations.[9]
https://en.wikipedia.org/wiki/Scottish_Book
In mathematical logic, given an unsatisfiable Boolean propositional formula in conjunctive normal form, a subset of its clauses whose conjunction is still unsatisfiable is called an unsatisfiable core of the original formula. Many SAT solvers can produce a resolution graph which proves the unsatisfiability of the original problem; this can be analyzed to produce a smaller unsatisfiable core. An unsatisfiable core is called a minimal unsatisfiable core if every proper subset of it (allowing removal of any arbitrary clause or clauses) is satisfiable. Thus, such a core is a local minimum, though not necessarily a global one. There are several practical methods of computing minimal unsatisfiable cores.[1][2] A minimum unsatisfiable core contains the smallest number of the original clauses required to still be unsatisfiable. No practical algorithms for computing the minimum unsatisfiable core are known,[3] and computing a minimum unsatisfiable core of an input formula in conjunctive normal form is a Σ₂ᴾ-complete problem.[4] Note the terminology: whereas the minimal unsatisfiable core is a local problem with an easy solution, the minimum unsatisfiable core is a global problem with no known easy solution.
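A deletion-based procedure for shrinking an unsatisfiable CNF to a minimal core can be sketched as follows. This is an illustrative toy (the clause encoding and function names are ours, and the satisfiability check is brute force, so it only works on tiny formulas); real tools instead exploit the solver's resolution proof, as described above.

```python
from itertools import product

# Clauses are frozensets of nonzero ints: positive = variable, negative =
# negated variable, so {1, -2} encodes (x1 OR NOT x2).
def satisfiable(clauses):
    """Brute-force SAT check by enumerating all assignments."""
    variables = sorted({abs(lit) for clause in clauses for lit in clause})
    for bits in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, bits))
        if all(any((lit > 0) == assignment[abs(lit)] for lit in clause)
               for clause in clauses):
            return True
    return False

def minimal_unsat_core(clauses):
    """Deletion-based shrinking: repeatedly drop any clause whose removal
    keeps the formula unsatisfiable. The result is a *minimal* core (a
    local minimum), not necessarily a *minimum* one."""
    core = list(clauses)
    changed = True
    while changed:
        changed = False
        for clause in core:
            rest = [c for c in core if c is not clause]
            if not satisfiable(rest):
                core = rest
                changed = True
                break
    return core
```

For example, for the clauses {x1}, {¬x1}, {x2}, the shrinking drops the irrelevant {x2} and returns the minimal core {x1}, {¬x1}.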
https://en.wikipedia.org/wiki/Unsatisfiable_core
In computer science and mathematical logic, satisfiability modulo theories (SMT) is the problem of determining whether a mathematical formula is satisfiable. It generalizes the Boolean satisfiability problem (SAT) to more complex formulas involving real numbers, integers, and/or various data structures such as lists, arrays, bit vectors, and strings. The name is derived from the fact that these expressions are interpreted within ("modulo") a certain formal theory in first-order logic with equality (often disallowing quantifiers). SMT solvers are tools that aim to solve the SMT problem for a practical subset of inputs. SMT solvers such as Z3 and cvc5 have been used as a building block for a wide range of applications across computer science, including in automated theorem proving, program analysis, program verification, and software testing. Since Boolean satisfiability is already NP-complete, the SMT problem is typically NP-hard, and for many theories it is undecidable. Researchers study which theories or subsets of theories lead to a decidable SMT problem and the computational complexity of decidable cases. The resulting decision procedures are often implemented directly in SMT solvers; see, for instance, the decidability of Presburger arithmetic. SMT can be thought of as a constraint satisfaction problem and thus a certain formalized approach to constraint programming. Formally speaking, an SMT instance is a formula in first-order logic, where some function and predicate symbols have additional interpretations, and SMT is the problem of determining whether such a formula is satisfiable. In other words, imagine an instance of the Boolean satisfiability problem (SAT) in which some of the binary variables are replaced by predicates over a suitable set of non-binary variables. A predicate is a binary-valued function of non-binary variables.
Example predicates include linear inequalities (e.g., 3x + 2y − z ≥ 4) or equalities involving uninterpreted terms and function symbols (e.g., f(f(u, v), v) = f(u, v), where f is some unspecified function of two arguments). These predicates are classified according to the theory each is assigned to. For instance, linear inequalities over real variables are evaluated using the rules of the theory of linear real arithmetic, whereas predicates involving uninterpreted terms and function symbols are evaluated using the rules of the theory of uninterpreted functions with equality (sometimes referred to as the empty theory). Other theories include the theories of arrays and list structures (useful for modeling and verifying computer programs), and the theory of bit vectors (useful in modeling and verifying hardware designs). Subtheories are also possible: for example, difference logic is a sub-theory of linear arithmetic in which each inequality is restricted to have the form x − y > c for variables x and y and constant c. The examples above show the use of linear integer arithmetic over inequalities. Most SMT solvers support only quantifier-free fragments of their logics. There is substantial overlap between SMT solving and automated theorem proving (ATP). Generally, automated theorem provers focus on supporting full first-order logic with quantifiers, whereas SMT solvers focus more on supporting various theories (interpreted predicate symbols). ATPs excel at problems with lots of quantifiers, whereas SMT solvers do well on large problems without quantifiers.[1] The line is blurry enough that some ATPs participate in SMT-COMP, while some SMT solvers participate in CASC.[2] An SMT instance is a generalization of a Boolean SAT instance in which various sets of variables are replaced by predicates from a variety of underlying theories.
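The difference-logic fragment mentioned above has a particularly simple decision procedure that can be sketched directly: a conjunction of constraints of the form x − y ≤ c is satisfiable over the reals if and only if the corresponding constraint graph (an edge y → x with weight c per constraint) has no negative-weight cycle, which Bellman–Ford-style relaxation detects. This is our own hedged illustration, using the non-strict ≤ form rather than the strict form in the text.

```python
def difference_logic_sat(constraints):
    """constraints: iterable of (x, y, c) meaning x - y <= c.
    Returns True iff the conjunction is satisfiable over the reals."""
    nodes = {v for x, y, _ in constraints for v in (x, y)}
    dist = {v: 0 for v in nodes}   # implicit zero-weight source to every node
    for _ in range(len(nodes) + 1):
        updated = False
        for x, y, c in constraints:
            if dist[y] + c < dist[x]:   # relax edge y -> x with weight c
                dist[x] = dist[y] + c
                updated = True
        if not updated:
            return True    # distances stabilized: no negative cycle
    return False           # still relaxing after |V|+1 rounds: negative cycle
```

In a DPLL(T) solver, exactly this kind of procedure plays the role of the theory solver for conjunctions of difference-logic atoms; a final satisfying assignment can be read off the stabilized `dist` values.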
SMT formulas provide a much richer modeling language than is possible with Boolean SAT formulas. For example, an SMT formula allows one to model the datapath operations of a microprocessor at the word rather than the bit level. By comparison, answer set programming is also based on predicates (more precisely, on atomic sentences created from atomic formulas). Unlike SMT, answer-set programs do not have quantifiers, and cannot easily express constraints such as linear arithmetic or difference logic; answer set programming is best suited to Boolean problems that reduce to the free theory of uninterpreted functions. Implementing 32-bit integers as bitvectors in answer set programming suffers from most of the same problems that early SMT solvers faced: "obvious" identities such as x + y = y + x are difficult to deduce. Constraint logic programming does provide support for linear arithmetic constraints, but within a completely different theoretical framework. SMT solvers have also been extended to solve formulas in higher-order logic.[3] Early attempts at solving SMT instances involved translating them to Boolean SAT instances (e.g., a 32-bit integer variable would be encoded by 32 single-bit variables with appropriate weights, and word-level operations such as 'plus' would be replaced by lower-level logic operations on the bits) and passing this formula to a Boolean SAT solver. This approach, referred to as the eager approach (or bit-blasting), has its merits: by pre-processing the SMT formula into an equivalent Boolean SAT formula, existing Boolean SAT solvers can be used "as is" and their performance and capacity improvements leveraged over time. On the other hand, the loss of the high-level semantics of the underlying theories means that the Boolean SAT solver has to work a lot harder than necessary to discover "obvious" facts (such as x + y = y + x for integer addition).
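The word-level-versus-bit-level point can be made concrete with a small sketch of our own (not taken from any solver): modelling 4-bit addition purely with per-bit Boolean operations, the way an eager (bit-blasted) encoding would, so that a word-level identity like x + y = y + x is no longer structurally obvious and has to be rediscovered by exhaustive bit-level reasoning.

```python
# Bits are stored least-significant-first.
def ripple_add(xs, ys):
    """4-bit ripple-carry adder built from per-bit Boolean operations."""
    out, carry = [], False
    for a, b in zip(xs, ys):
        out.append(a ^ b ^ carry)                  # sum bit
        carry = (a and b) or (carry and (a ^ b))   # carry-out
    return out                                     # final carry discarded

def to_bits(n, width=4):
    return [bool((n >> i) & 1) for i in range(width)]

def addition_commutes(width=4):
    """At the bit level, commutativity of '+' must be checked case by case,
    much as a SAT solver explores the blasted search space."""
    return all(ripple_add(to_bits(x, width), to_bits(y, width)) ==
               ripple_add(to_bits(y, width), to_bits(x, width))
               for x in range(2 ** width) for y in range(2 ** width))
```

A lazy SMT solver with a bitvector or arithmetic theory would instead conclude commutativity at the word level, without enumerating bit patterns.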
This observation led to the development of a number of SMT solvers that tightly integrate the Boolean reasoning of a DPLL-style search with theory-specific solvers (T-solvers) that handle conjunctions (ANDs) of predicates from a given theory. This approach is referred to as the lazy approach.[4] Dubbed DPLL(T),[5] this architecture gives the responsibility of Boolean reasoning to the DPLL-based SAT solver which, in turn, interacts with a solver for theory T through a well-defined interface. The theory solver only needs to worry about checking the feasibility of conjunctions of theory predicates passed on to it from the SAT solver as it explores the Boolean search space of the formula. For this integration to work well, however, the theory solver must be able to participate in propagation and conflict analysis, i.e., it must be able to infer new facts from already established facts, as well as to supply succinct explanations of infeasibility when theory conflicts arise. In other words, the theory solver must be incremental and backtrackable. Researchers study which theories or subsets of theories lead to a decidable SMT problem and the computational complexity of decidable cases. Since full first-order logic is only semidecidable, one line of research attempts to find efficient decision procedures for fragments of first-order logic such as effectively propositional logic.[6] Another line of research involves the development of specialized decidable theories, including linear arithmetic over rationals and integers, fixed-width bitvectors,[7] floating-point arithmetic (often implemented in SMT solvers via bit-blasting, i.e., reduction to bitvectors),[8][9] strings,[10] (co-)datatypes,[11] sequences (used to model dynamic arrays),[12] finite sets and relations,[13][14] separation logic,[15] finite fields,[16] and uninterpreted functions, among others.
Boolean monotonic theories are a class of theories that support efficient theory propagation and conflict analysis, enabling practical use within DPLL(T) solvers.[17] Monotonic theories support only Boolean variables (Boolean is the only sort), and all their functions and predicates p obey the axiom p(…, b_{i−1}, 0, b_{i+1}, …) ⟹ p(…, b_{i−1}, 1, b_{i+1}, …). Examples of monotonic theories include graph reachability, collision detection for convex hulls, minimum cuts, and computation tree logic.[18] Every Datalog program can be interpreted as a monotonic theory.[19] Most of the common SMT approaches support decidable theories. However, many real-world systems, such as an aircraft and its behavior, can only be modelled by means of non-linear arithmetic over the real numbers involving transcendental functions, which motivates extending the SMT problem to non-linear theories. Such problems are, however, undecidable in general. (On the other hand, the theory of real closed fields, and thus the full first-order theory of the real numbers, is decidable using quantifier elimination. This is due to Alfred Tarski.) The first-order theory of the natural numbers with addition (but not multiplication), called Presburger arithmetic, is also decidable. Since multiplication by constants can be implemented as nested additions, the arithmetic in many computer programs can be expressed using Presburger arithmetic, resulting in decidable formulas.
Examples of SMT solvers addressing Boolean combinations of theory atoms from undecidable arithmetic theories over the reals are ABsolver,[20] which employs a classical DPLL(T) architecture with a non-linear optimization packet as a (necessarily incomplete) subordinate theory solver; iSAT, building on a unification of DPLL SAT-solving and interval constraint propagation called the iSAT algorithm;[21] and cvc5.[22] The table below summarizes some of the features of the many available SMT solvers. The column "SMT-LIB" indicates compatibility with the SMT-LIB language; many systems marked 'yes' may support only older versions of SMT-LIB, or offer only partial support for the language. The column "CVC" indicates support for the CVC language. The column "DIMACS" indicates support for the DIMACS format. Projects differ not only in features and performance, but also in the viability of the surrounding community, its ongoing interest in a project, and its ability to contribute documentation, fixes, tests and enhancements. There are multiple attempts to describe a standardized interface to SMT solvers (and automated theorem provers, a term often used synonymously). The most prominent is the SMT-LIB standard,[citation needed] which provides a language based on S-expressions. Other standardized formats commonly supported are the DIMACS format[citation needed] supported by many Boolean SAT solvers, and the CVC format[citation needed] used by the CVC automated theorem prover. The SMT-LIB format also comes with a number of standardized benchmarks and has enabled a yearly competition between SMT solvers called SMT-COMP.
Initially, the competition took place during the Computer Aided Verification conference (CAV),[23][24] but as of 2020 the competition is hosted as part of the SMT Workshop, which is affiliated with the International Joint Conference on Automated Reasoning (IJCAR).[25] SMT solvers are useful both for verification (proving the correctness of programs, and software testing based on symbolic execution) and for synthesis (generating program fragments by searching over the space of possible programs). Outside of software verification, SMT solvers have also been used for type inference[26][27] and for modelling theoretic scenarios, including modelling actor beliefs in nuclear arms control.[28] Computer-aided verification of computer programs often uses SMT solvers. A common technique is to translate preconditions, postconditions, loop conditions, and assertions into SMT formulas in order to determine if all properties can hold. There are many verifiers built on top of the Z3 SMT solver. Boogie is an intermediate verification language that uses Z3 to automatically check simple imperative programs. The VCC verifier for concurrent C uses Boogie, as do Dafny for imperative object-based programs, Chalice for concurrent programs, and Spec# for C#. F* is a dependently typed language that uses Z3 to find proofs; the compiler carries these proofs through to produce proof-carrying bytecode. The Viper verification infrastructure encodes verification conditions to Z3. The sbv library provides SMT-based verification of Haskell programs, and lets the user choose among a number of solvers such as Z3, ABC, Boolector, cvc5, MathSAT and Yices. There are also many verifiers built on top of the Alt-Ergo SMT solver. Many SMT solvers implement a common interface format called SMTLIB2 (such files usually have the extension ".smt2"). The LiquidHaskell tool implements a refinement-type-based verifier for Haskell that can use any SMTLIB2-compliant solver, e.g. cvc5, MathSAT, or Z3.
An important application of SMT solvers is symbolic execution for analysis and testing of programs (e.g., concolic testing), aimed particularly at finding security vulnerabilities.[citation needed] Example tools in this category include SAGE from Microsoft Research, KLEE, S2E, and Triton. SMT solvers that have been used for symbolic-execution applications include Z3, STP, the Z3str family of solvers, and Boolector.[citation needed] SMT solvers have been integrated with proof assistants, including Coq[29] and Isabelle/HOL.[30]
https://en.wikipedia.org/wiki/Satisfiability_modulo_theories
In computer science, the Sharp Satisfiability Problem (sometimes called Sharp-SAT, #SAT or model counting) is the problem of counting the number of interpretations that satisfy a given Boolean formula, introduced by Valiant in 1979.[1] In other words, it asks in how many ways the variables of a given Boolean formula can be consistently replaced by the values TRUE or FALSE in such a way that the formula evaluates to TRUE. For example, the formula a ∨ ¬b is satisfied by three distinct Boolean value assignments of the variables, namely, for any of the assignments (a = TRUE, b = FALSE), (a = FALSE, b = FALSE), and (a = TRUE, b = TRUE), we have a ∨ ¬b = TRUE. #SAT is different from the Boolean satisfiability problem (SAT), which asks if there exists a solution of a Boolean formula. Instead, #SAT asks to count all the solutions to a Boolean formula. #SAT is harder than SAT in the sense that, once the total number of solutions to a Boolean formula is known, SAT can be decided in constant time. However, the converse is not true, because knowing a Boolean formula has a solution does not help us to count all the solutions, as there are an exponential number of possibilities. #SAT is a well-known example of the class of counting problems known as #P-complete (read as "sharp P complete"). In other words, every instance of a problem in the complexity class #P can be reduced to an instance of the #SAT problem. This is an important result because many difficult counting problems arise in enumerative combinatorics, statistical physics, network reliability, and artificial intelligence without any known formula. If a problem is shown to be hard, then it provides a complexity-theoretic explanation for the lack of nice-looking formulas.[2] #SAT is #P-complete. To prove this, first note that #SAT is obviously in #P. Next, we prove that #SAT is #P-hard.
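The definition can be sketched directly as a brute-force model counter; the exponential enumeration is expected, since #SAT is #P-complete (the DIMACS-style clause encoding is just a convenient convention here):

```python
from itertools import product

def count_models(n_vars, clauses):
    """Brute-force #SAT: count assignments satisfying a CNF formula.
    A clause is a list of non-zero ints; literal k refers to variable |k|
    and is negated when k < 0 (DIMACS-style)."""
    count = 0
    for bits in product([False, True], repeat=n_vars):
        def lit(k):
            return bits[abs(k) - 1] if k > 0 else not bits[abs(k) - 1]
        if all(any(lit(k) for k in clause) for clause in clauses):
            count += 1
    return count

# a OR NOT b, as in the example above: 3 of the 4 assignments satisfy it.
print(count_models(2, [[1, -2]]))  # 3
```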
Take any problem #A in #P. We know that A can be solved using a non-deterministic Turing machine M. On the other hand, from the proof of the Cook–Levin theorem, we know that we can reduce M to a Boolean formula F. Now, each valid assignment of F corresponds to a unique accepting path in M, and vice versa. However, each accepting path taken by M represents a solution to A. In other words, there is a bijection between the valid assignments of F and the solutions to A. So, the reduction used in the proof of the Cook–Levin theorem is parsimonious. This implies that #SAT is #P-hard. Counting solutions is intractable (#P-complete) in many special cases for which satisfiability is tractable (in P), as well as when satisfiability is intractable (NP-complete). This includes the following. This is the counting version of 3SAT. One can show that any formula in SAT can be rewritten as a formula in 3-CNF form preserving the number of satisfying assignments. Hence, #SAT and #3SAT are counting equivalent and #3SAT is #P-complete as well. Even though 2SAT (deciding whether a 2CNF formula has a solution) is polynomial, counting the number of solutions is #P-complete.[3] The #P-completeness holds already in the monotone case, i.e., when there are no negations (#MONOTONE-2-CNF). It is known that, assuming that NP is different from RP, #MONOTONE-2-CNF cannot be approximated by a fully polynomial-time randomized approximation scheme (FPRAS), even assuming that each variable occurs in at most 6 clauses, but that a fully polynomial-time approximation scheme (FPTAS) exists when each variable occurs in at most 5 clauses:[4] this follows from analogous results on the problem #IS of counting the number of independent sets in graphs. Similarly, even though Horn-satisfiability is polynomial, counting the number of solutions is #P-complete. This result follows from a general dichotomy characterizing which SAT-like problems are #P-complete.[5] This is the counting version of Planar 3SAT.
The hardness reduction from 3SAT to Planar 3SAT given by Lichtenstein[6] is parsimonious. This implies that Planar #3SAT is #P-complete. This is the counting version of Planar Monotone Rectilinear 3SAT.[7] The NP-hardness reduction given by de Berg & Khosravi[7] is parsimonious. Therefore, this problem is #P-complete as well. For disjunctive normal form (DNF) formulas, counting the solutions is also #P-complete, even when all clauses have size 2 and there are no negations: this is because, by De Morgan's laws, counting the number of solutions of a DNF amounts to counting the number of solutions of the negation of a conjunctive normal form (CNF) formula. Intractability even holds in the case known as #PP2DNF, where the variables are partitioned into two sets, with each clause containing one variable from each set.[8] By contrast, it is possible to tractably approximate the number of solutions of a disjunctive normal form formula using the Karp–Luby algorithm, which is an FPRAS for this problem.[9] The variant of SAT corresponding to affine relations in the sense of Schaefer's dichotomy theorem, i.e., where clauses amount to equations modulo 2 with the XOR operator, is the only SAT variant for which the #SAT problem can be solved in polynomial time.[10] If the instances to SAT are restricted using graph parameters, the #SAT problem can become tractable. For instance, #SAT on SAT instances whose treewidth is bounded by a constant can be performed in polynomial time.[11] Here, the treewidth can be the primal treewidth, dual treewidth, or incidence treewidth of the hypergraph associated to the SAT formula, whose vertices are the variables and where each clause is represented as a hyperedge. Model counting is tractable (solvable in polynomial time) for (ordered) BDDs and for some circuit formalisms studied in knowledge compilation, such as d-DNNFs. Weighted model counting (WMC) generalizes #SAT by computing a weighted sum over the models instead of just counting them.
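The affine (XOR) variant mentioned above admits polynomial-time counting because a system of XOR clauses is a linear system over GF(2): after Gaussian elimination, the count is 2^(n − rank) if the system is consistent and 0 otherwise. A sketch (the bitmask encoding is one convenient choice, not canonical):

```python
def count_xor_sat(n_vars, equations):
    """Count solutions of a system of XOR equations over GF(2) in
    polynomial time. Each equation is (variable_index_list, rhs_bit),
    meaning the XOR of those variables equals rhs_bit."""
    rows = []
    for vars_, rhs in equations:
        mask = 0
        for v in vars_:
            mask ^= 1 << v            # coefficient bitmask of the row
        rows.append((mask, rhs))
    rank = 0
    for col in range(n_vars):
        pivot = next((i for i in range(rank, len(rows))
                      if rows[i][0] >> col & 1), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):    # eliminate the column elsewhere
            if i != rank and rows[i][0] >> col & 1:
                rows[i] = (rows[i][0] ^ rows[rank][0],
                           rows[i][1] ^ rows[rank][1])
        rank += 1
    # A zero row with rhs 1 (i.e. 0 = 1) makes the system inconsistent.
    if any(mask == 0 and rhs for mask, rhs in rows):
        return 0
    return 2 ** (n_vars - rank)

# x0 XOR x1 = 1 over 3 variables: 2 choices of (x0, x1), x2 free -> 4.
print(count_xor_sat(3, [([0, 1], 1)]))  # 4
```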
In the literal-weighted variant of WMC, each literal gets assigned a weight, such that WMC(φ; w) = Σ_{M ⊨ φ} Π_{l ∈ M} w(l), i.e., the sum over all models M of φ of the product of the weights of the literals true in M. WMC is used for probabilistic inference, as probabilistic queries over discrete random variables, such as in Bayesian networks, can be reduced to WMC.[12] Algebraic model counting further generalizes #SAT and WMC over arbitrary commutative semirings.[13]
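The literal-weighted definition can be sketched by enumeration; setting the weights to marginal probabilities illustrates the reduction from probabilistic inference (clauses use the same DIMACS-style convention assumed earlier):

```python
from itertools import product

def wmc(n_vars, clauses, weight):
    """Literal-weighted model counting by enumeration: sum over all models
    of the product of literal weights. weight maps a signed literal
    (DIMACS-style int) to its weight."""
    total = 0.0
    for bits in product([False, True], repeat=n_vars):
        def lit(k):
            return bits[abs(k) - 1] if k > 0 else not bits[abs(k) - 1]
        if all(any(lit(k) for k in c) for c in clauses):
            w = 1.0
            for v in range(1, n_vars + 1):
                w *= weight[v] if bits[v - 1] else weight[-v]
            total += w
    return total

# With weights equal to marginal probabilities, WMC of the formula "a"
# returns P(a) = 0.3.
w = {1: 0.3, -1: 0.7, 2: 0.5, -2: 0.5}
print(wmc(2, [[1]], w))
```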
https://en.wikipedia.org/wiki/Sharp-SAT
In computer science, the planar 3-satisfiability problem (abbreviated PLANAR 3SAT or PL3SAT) is an extension of the classical Boolean 3-satisfiability problem to a planar incidence graph. In other words, it asks whether the variables of a given Boolean formula, whose incidence graph of variables and clauses can be embedded in a plane, can be consistently replaced by the values TRUE or FALSE in such a way that the formula evaluates to TRUE. If this is the case, the formula is called satisfiable. On the other hand, if no such assignment exists, the function expressed by the formula is FALSE for all possible variable assignments and the formula is unsatisfiable. For example, the formula "a AND NOT b" is satisfiable because one can find the values a = TRUE and b = FALSE, which make (a AND NOT b) = TRUE. In contrast, "a AND NOT a" is unsatisfiable. Like 3SAT, PLANAR 3SAT is NP-complete, and is commonly used in reductions. Every 3SAT problem can be converted to an incidence graph in the following manner: For every variable v_i, the graph has one corresponding node v_i, and for every clause c_j, the graph has one corresponding node c_j. An edge (v_i, c_j) is created between variable v_i and clause c_j whenever v_i or ¬v_i is in c_j. Positive and negative literals are distinguished using edge colorings. The formula is satisfiable if and only if there is a way to assign TRUE or FALSE to each variable node such that every clause node is connected to at least one TRUE by a positive edge or to at least one FALSE by a negative edge. A planar graph is a graph that can be drawn in the plane in such a way that no two of its edges cross each other. Planar 3SAT is a subset of 3SAT in which the incidence graph of the variables and clauses of a Boolean formula is planar.
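The incidence-graph construction just described is mechanical; a minimal sketch (edge "colors" are represented here simply as polarity flags, and the clause encoding is the DIMACS-style convention):

```python
def incidence_graph(clauses):
    """Build the edge-colored incidence graph of a CNF formula: one node
    per variable, one node per clause, and an edge tagged with the
    literal's polarity whenever the variable occurs in the clause.
    Clauses use DIMACS-style signed integer literals."""
    edges = []
    for j, clause in enumerate(clauses):
        for k in clause:
            edges.append((("v", abs(k)), ("c", j), k > 0))
    return edges

# (x1 OR x2 OR NOT x3) AND (NOT x1 OR x3): 5 edges, tagged by polarity.
g = incidence_graph([[1, 2, -3], [-1, 3]])
```

Testing this graph for planarity then decides whether the instance falls into the PLANAR 3SAT fragment.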
It is important because it is a restricted variant that is still NP-complete. Many problems (for example, games and puzzles) cannot represent non-planar graphs. Hence, Planar 3SAT provides a way to prove those problems to be NP-hard. The following proof sketch follows the proof of D. Lichtenstein.[1] Trivially, PLANAR 3SAT is in NP. It is thus sufficient to show that it is NP-hard via a reduction from 3SAT. This proof makes use of the fact that (¬a ∨ ¬b ∨ c) ∧ (a ∨ ¬c) ∧ (b ∨ ¬c) is equivalent to (a ∧ b) ↔ c, and that (a ∨ ¬b) ∧ (¬a ∨ b) is equivalent to a ↔ b. First, draw the incidence graph of the 3SAT formula. Since no two variables or clauses are connected to each other, the resulting graph is bipartite. Suppose the resulting graph is not planar. For every crossing of edges (a, c1) and (b, c2), introduce nine new variables a1, b1, α, β, γ, δ, ξ, a2, b2, and replace every crossing of edges with the crossover gadget shown in the diagram.
It consists of the following new clauses: (¬a2∨¬b2∨α)∧(a2∨¬α)∧(b2∨¬α),i.e.,a2∧b2↔α(¬a2∨b1∨β)∧(a2∨¬β)∧(¬b1∨¬β),i.e.,a2∧¬b1↔β(a1∨b1∨γ)∧(¬a1∨¬γ)∧(¬b1∨¬γ),i.e.,¬a1∧¬b1↔γ(a1∨¬b2∨δ)∧(¬a1∨¬δ)∧(b2∨¬δ),i.e.,¬a1∧b2↔δ(α∨β∨ξ)∧(γ∨δ∨¬ξ),i.e.,α∨β∨γ∨δ(¬α∨¬β)∧(¬β∨¬γ)∧(¬γ∨¬δ)∧(¬δ∨¬α),(a2∨¬a)∧(a∨¬a2)∧(b2∨¬b)∧(b∨¬b2),i.e.,a↔a2,b↔b2{\displaystyle {\begin{array}{ll}(\lnot a_{2}\lor \lnot b_{2}\lor \alpha )\land (a_{2}\lor \lnot \alpha )\land (b_{2}\lor \lnot \alpha ),&\quad {\text{i.e.,}}\quad a_{2}\land b_{2}\leftrightarrow \alpha \\(\lnot a_{2}\lor b_{1}\lor \beta )\land (a_{2}\lor \lnot \beta )\land (\lnot b_{1}\lor \lnot \beta ),&\quad {\text{i.e.,}}\quad a_{2}\land \lnot b_{1}\leftrightarrow \beta \\(a_{1}\lor b_{1}\lor \gamma )\land (\lnot a_{1}\lor \lnot \gamma )\land (\lnot b_{1}\lor \lnot \gamma ),&\quad {\text{i.e.,}}\quad \lnot a_{1}\land \lnot b_{1}\leftrightarrow \gamma \\(a_{1}\lor \lnot b_{2}\lor \delta )\land (\lnot a_{1}\lor \lnot \delta )\land (b_{2}\lor \lnot \delta ),&\quad {\text{i.e.,}}\quad \lnot a_{1}\land b_{2}\leftrightarrow \delta \\(\alpha \lor \beta \lor \xi )\land (\gamma \lor \delta \lor \lnot \xi ),&\quad {\text{i.e.,}}\quad \alpha \lor \beta \lor \gamma \lor \delta \\(\lnot \alpha \lor \lnot \beta )\land (\lnot \beta \lor \lnot \gamma )\land (\lnot \gamma \lor \lnot \delta )\land (\lnot \delta \lor \lnot \alpha ),&\\(a_{2}\lor \lnot a)\land (a\lor \lnot a_{2})\land (b_{2}\lor \lnot b)\land (b\lor \lnot b_{2}),&\quad {\text{i.e.,}}\quad a\leftrightarrow a_{2},~b\leftrightarrow b_{2}\\\end{array}}} If the edge (a,c1) is inverted in the original graph, (a1,c1) should be inverted in the crossover gadget. Similarly if the edge (b,c2) is inverted in the original, (b1,c2) should be inverted. One can easily show that these clauses are satisfiable if and only ifa↔a1{\displaystyle a\leftrightarrow a_{1}}andb↔b1{\displaystyle b\leftrightarrow b_{1}}. 
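Since the gadget involves only eleven Boolean variables, its defining property (the clauses are satisfiable exactly when a ↔ a1 and b ↔ b1, so the crossing transmits both truth values faithfully) can be checked exhaustively. A verification sketch:

```python
from itertools import product

def gadget_ok(a, b, a1, b1):
    """Can the crossover-gadget clauses be satisfied for these outer
    values? Try all values of the internal variables a2, b2, alpha,
    beta, gamma, delta, xi."""
    for a2, b2, al, be, ga, de, xi in product([False, True], repeat=7):
        clauses = [
            (a2 and b2) == al,             # a2 AND b2       <-> alpha
            (a2 and not b1) == be,         # a2 AND NOT b1   <-> beta
            (not a1 and not b1) == ga,     # NOT a1 AND NOT b1 <-> gamma
            (not a1 and b2) == de,         # NOT a1 AND b2   <-> delta
            (al or be or xi) and (ga or de or not xi),
            not (al and be), not (be and ga),
            not (ga and de), not (de and al),
            a == a2, b == b2,              # a <-> a2, b <-> b2
        ]
        if all(clauses):
            return True
    return False

# The gadget is satisfiable exactly when a <-> a1 and b <-> b1.
faithful = all(gadget_ok(a, b, a1, b1) == (a == a1 and b == b1)
               for a, b, a1, b1 in product([False, True], repeat=4))
```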
This algorithm shows that it is possible to convert each crossing into its planar equivalent using only a constant number of additions. Since the number of crossings is polynomial in the number of clauses and variables, the reduction is polynomial.[2] Reduction from Planar SAT is a commonly used method in NP-completeness proofs of logic puzzles. Examples of these include Fillomino,[10] Nurikabe,[11] Shakashaka,[12] Tatamibari,[13] and Tentai Show.[14] These proofs involve constructing gadgets that can simulate wires carrying signals (Boolean values), input and output gates, signal splitters, NOT gates and AND (or OR) gates in order to represent the planar embedding of any Boolean circuit. Since the circuits are planar, crossovers of wires do not need to be considered. This is the problem of deciding whether a polygonal chain with fixed edge lengths and angles has a planar configuration without crossings. It has been proven to be strongly NP-hard via a reduction from planar monotone rectilinear 3SAT.[15] This is the problem of partitioning a polygon into simpler polygons such that the total length of all edges used in the partition is as small as possible. When the figure is a rectilinear polygon that should be partitioned into rectangles, and the polygon is hole-free, the problem is polynomial. But if it contains holes (even degenerate holes, i.e., single points), the problem is NP-hard, by reduction from Planar SAT. The same holds if the figure is any polygon and it should be partitioned into convex figures.[16] A related problem is minimum-weight triangulation: finding a triangulation of minimal total edge length. The decision version of this problem is proven to be NP-complete via a reduction from a variant of Planar 1-in-3SAT.[17]
https://en.wikipedia.org/wiki/Planar_SAT
The Karloff–Zwick algorithm, in computational complexity theory, is a randomised approximation algorithm taking an instance of the MAX-3SAT Boolean satisfiability problem as input. If the instance is satisfiable, then the expected weight of the assignment found is at least 7/8 of optimal. There is strong evidence (but not a mathematical proof) that the algorithm achieves 7/8 of optimal even on unsatisfiable MAX-3SAT instances. Howard Karloff and Uri Zwick presented the algorithm in 1997.[1] The algorithm is based on semidefinite programming. It can be derandomized using, e.g., the techniques from[2] to yield a deterministic polynomial-time algorithm with the same approximation guarantees. For the related MAX-E3SAT problem, in which all clauses in the input 3SAT formula are guaranteed to have exactly three literals, the simple randomized approximation algorithm which assigns a truth value to each variable independently and uniformly at random satisfies 7/8 of all clauses in expectation, irrespective of whether the original formula is satisfiable. Further, this simple algorithm can also be easily derandomized using the method of conditional expectations. The Karloff–Zwick algorithm, however, does not require the restriction that the input formula should have three literals in every clause.[1] Building upon previous work on the PCP theorem, Johan Håstad showed that, assuming P ≠ NP, no polynomial-time algorithm for MAX 3SAT can achieve a performance ratio exceeding 7/8, even when restricted to satisfiable instances of the problem in which each clause contains exactly three literals. Both the Karloff–Zwick algorithm and the above simple algorithm are therefore optimal in this sense.[3]
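The simple 7/8 algorithm for MAX-E3SAT and its derandomization via conditional expectations are short enough to sketch. A clause with k undecided literals and no already-true literal is satisfied with probability 1 − 2^(−k); fixing each variable to the value that does not decrease this conditional expectation preserves the 7/8 guarantee (DIMACS-style clause encoding, illustrative instance):

```python
from fractions import Fraction

def expected_sat(clauses, fixed):
    """Expected number of satisfied clauses when variables in `fixed`
    have the given values and the rest are uniform random."""
    total = Fraction(0)
    for clause in clauses:
        undecided, sat = 0, False
        for k in clause:
            v = abs(k)
            if v in fixed:
                sat = sat or (fixed[v] == (k > 0))
            else:
                undecided += 1
        total += 1 if sat else 1 - Fraction(1, 2 ** undecided)
    return total

def conditional_expectations(n_vars, clauses):
    """Derandomize the uniform random assignment: fix variables one at a
    time, never letting the conditional expectation drop. For MAX-E3SAT
    the initial expectation is (7/8)·m, so at least 7/8 of the clauses
    end up satisfied."""
    fixed = {}
    for v in range(1, n_vars + 1):
        t = expected_sat(clauses, {**fixed, v: True})
        f = expected_sat(clauses, {**fixed, v: False})
        fixed[v] = t >= f
    return fixed

clauses = [[1, 2, 3], [-1, -2, 3], [1, -2, -3], [-1, 2, -3]]
assignment = conditional_expectations(3, clauses)
satisfied = expected_sat(clauses, assignment)  # all variables fixed
```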
https://en.wikipedia.org/wiki/Karloff%E2%80%93Zwick_algorithm
The Circuit Value Problem (or Circuit Evaluation Problem) is the computational problem of computing the output of a given Boolean circuit on a given input. The problem is complete for P under uniform AC0 reductions. Note that, in terms of time complexity, it can be solved in linear time simply by a topological sort. The Boolean Formula Value Problem (or Boolean Formula Evaluation Problem) is the special case of the problem when the circuit is a tree. The Boolean Formula Value Problem is complete for NC1.[1] The problem is closely related to the Boolean satisfiability problem, which is complete for NP, and its complement, the propositional tautology problem, which is complete for co-NP.
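The linear-time evaluation is straightforward once the gates are listed in topological order: each gate is computed exactly once from already-computed values. A minimal sketch (the gate-list representation is an illustrative choice):

```python
def eval_circuit(gates, inputs):
    """Evaluate a Boolean circuit in linear time. `gates` lists
    (name, op, operands) in topological order (every operand is an input
    or an earlier gate); the last gate is the circuit output."""
    val = dict(inputs)
    for name, op, args in gates:
        if op == "AND":
            val[name] = val[args[0]] and val[args[1]]
        elif op == "OR":
            val[name] = val[args[0]] or val[args[1]]
        elif op == "NOT":
            val[name] = not val[args[0]]
    return val[gates[-1][0]]

# out = (x AND y) OR (NOT z), evaluated on x=1, y=0, z=0.
gates = [("g1", "AND", ["x", "y"]),
         ("g2", "NOT", ["z"]),
         ("out", "OR", ["g1", "g2"])]
print(eval_circuit(gates, {"x": True, "y": False, "z": False}))  # True
```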
https://en.wikipedia.org/wiki/Circuit_Value_Problem
In mathematical logic, a formula is satisfiable if it is true under some assignment of values to its variables. For example, the formula x + 3 = y is satisfiable because it is true when x = 3 and y = 6, while the formula x + 1 = x is not satisfiable over the integers. The dual concept to satisfiability is validity; a formula is valid if every assignment of values to its variables makes the formula true. For example, x + 3 = 3 + x is valid over the integers, but x + 3 = y is not. Formally, satisfiability is studied with respect to a fixed logic defining the syntax of allowed symbols, such as first-order logic, second-order logic or propositional logic. Rather than being syntactic, however, satisfiability is a semantic property because it relates to the meaning of the symbols, for example, the meaning of + in a formula such as x + 1 = x. Formally, we define an interpretation (or model) to be an assignment of values to the variables and an assignment of meaning to all other non-logical symbols, and a formula is said to be satisfiable if there is some interpretation which makes it true.[1] While this allows non-standard interpretations of symbols such as +, one can restrict their meaning by providing additional axioms. The satisfiability modulo theories problem considers satisfiability of a formula with respect to a formal theory, which is a (finite or infinite) set of axioms. Satisfiability and validity are defined for a single formula, but can be generalized to an arbitrary theory or set of formulas: a theory is satisfiable if at least one interpretation makes every formula in the theory true, and valid if every formula is true in every interpretation. For example, theories of arithmetic such as Peano arithmetic are satisfiable because they are true in the natural numbers.
This concept is closely related to the consistency of a theory, and in fact is equivalent to consistency for first-order logic, a result known as Gödel's completeness theorem. The negation of satisfiability is unsatisfiability, and the negation of validity is invalidity. These four concepts are related to each other in a manner exactly analogous to Aristotle's square of opposition. The problem of determining whether a formula in propositional logic is satisfiable is decidable, and is known as the Boolean satisfiability problem, or SAT. In general, the problem of determining whether a sentence of first-order logic is satisfiable is not decidable. In universal algebra, equational theory, and automated theorem proving, the methods of term rewriting, congruence closure and unification are used to attempt to decide satisfiability. Whether a particular theory is decidable or not depends on whether the theory is variable-free and on other conditions.[2] For classical logics with negation, it is generally possible to re-express the question of the validity of a formula as one involving satisfiability, because of the relationships between the concepts expressed in the above square of opposition. In particular, φ is valid if and only if ¬φ is unsatisfiable, which is to say it is false that ¬φ is satisfiable. Put another way, φ is satisfiable if and only if ¬φ is invalid. For logics without negation, such as the positive propositional calculus, the questions of validity and satisfiability may be unrelated. In the case of the positive propositional calculus, the satisfiability problem is trivial, as every formula is satisfiable, while the validity problem is co-NP-complete. In the case of classical propositional logic, satisfiability is decidable for propositional formulae. In particular, satisfiability is an NP-complete problem, and is one of the most intensively studied problems in computational complexity theory. For first-order logic (FOL), satisfiability is undecidable.
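The duality between validity and satisfiability can be illustrated concretely in the decidable propositional case: φ is valid exactly when ¬φ has no satisfying assignment. A brute-force sketch (formulas are represented here, purely for illustration, as Python functions of their variables):

```python
from itertools import product

def satisfiable(f, n):
    """Brute-force SAT for a propositional formula given as a Python
    function of n Boolean arguments."""
    return any(f(*bits) for bits in product([False, True], repeat=n))

def valid(f, n):
    """phi is valid iff NOT phi is unsatisfiable."""
    return not satisfiable(lambda *bits: not f(*bits), n)

print(valid(lambda p, q: (p and q) or not p or not q, 2))  # True
print(satisfiable(lambda p: p and not p, 1))               # False
```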
More specifically, it is a co-RE-complete problem and therefore not semidecidable.[3] This fact has to do with the undecidability of the validity problem for FOL. The question of the status of the validity problem was posed first by David Hilbert, as the so-called Entscheidungsproblem. The universal validity of a formula is a semi-decidable problem by Gödel's completeness theorem. If satisfiability were also a semi-decidable problem, then the problem of the existence of counter-models would be too (a formula has counter-models iff its negation is satisfiable). So the problem of logical validity would be decidable, which contradicts the Church–Turing theorem, a result stating the negative answer for the Entscheidungsproblem. In model theory, an atomic formula is satisfiable if there is a collection of elements of a structure that render the formula true.[4] If A is a structure, φ is a formula, and a is a collection of elements, taken from the structure, that satisfy φ, then it is commonly written that A ⊨ φ[a]. If φ has no free variables, that is, if φ is an atomic sentence, and it is satisfied by A, then one writes A ⊨ φ. In this case, one may also say that A is a model for φ, or that φ is true in A. If T is a collection of atomic sentences (a theory) satisfied by A, one writes A ⊨ T. A problem related to satisfiability is that of finite satisfiability, which is the question of determining whether a formula admits a finite model that makes it true. For a logic that has the finite model property, the problems of satisfiability and finite satisfiability coincide, as a formula of that logic has a model if and only if it has a finite model. This question is important in the mathematical field of finite model theory. Finite satisfiability and satisfiability need not coincide in general.
For instance, consider the first-order logic formula obtained as the conjunction of the following sentences, where a0 and a1 are constants: The resulting formula has the infinite model R(a0, a1), R(a1, a2), …, but it can be shown that it has no finite model (starting at the fact R(a0, a1) and following the chain of R atoms that must exist by the second axiom, the finiteness of a model would require the existence of a loop, which would violate the third and fourth axioms, whether it loops back on a0 or on a different element). The computational complexity of deciding satisfiability for an input formula in a given logic may differ from that of deciding finite satisfiability; in fact, for some logics, only one of them is decidable. For classical first-order logic, finite satisfiability is recursively enumerable (in class RE) and undecidable by Trakhtenbrot's theorem applied to the negation of the formula. Numerical constraints often appear in the field of mathematical optimization, where one usually wants to maximize (or minimize) an objective function subject to some constraints. However, leaving aside the objective function, the basic issue of simply deciding whether the constraints are satisfiable can be challenging or undecidable in some settings. The following table summarizes the main cases. Table source: Bockmayr and Weispfenning.[5]: 754 For linear constraints, a fuller picture is provided by the following table. Table source: Bockmayr and Weispfenning.[5]: 755
https://en.wikipedia.org/wiki/Satisfiability_problem
Constraint satisfaction problems (CSPs) are mathematical questions defined as a set of objects whose state must satisfy a number of constraints or limitations. CSPs represent the entities in a problem as a homogeneous collection of finite constraints over variables, which is solved by constraint satisfaction methods. CSPs are the subject of research in both artificial intelligence and operations research, since the regularity in their formulation provides a common basis to analyze and solve problems of many seemingly unrelated families. CSPs often exhibit high complexity, requiring a combination of heuristics and combinatorial search methods to be solved in a reasonable time. Constraint programming (CP) is the field of research that specifically focuses on tackling these kinds of problems.[1][2] Additionally, the Boolean satisfiability problem (SAT), satisfiability modulo theories (SMT), mixed integer programming (MIP) and answer set programming (ASP) are all fields of research focusing on the resolution of particular forms of the constraint satisfaction problem. Examples of problems that can be modeled as constraint satisfaction problems are often provided with tutorials of CP, ASP, Boolean SAT and SMT solvers. In the general case, constraint problems can be much harder, and may not be expressible in some of these simpler systems. "Real life" examples include automated planning,[6][7] lexical disambiguation,[8][9] musicology,[10] product configuration[11] and resource allocation.[12] The existence of a solution to a CSP can be viewed as a decision problem. This can be decided by finding a solution, or failing to find a solution after exhaustive search (stochastic algorithms typically never reach an exhaustive conclusion, while directed searches often do, on sufficiently small problems). In some cases the CSP might be known to have solutions beforehand, through some other mathematical inference process.
Formally, a constraint satisfaction problem is defined as a triple ⟨X, D, C⟩, where[13] X = {X1, …, Xn} is a set of variables, D = {D1, …, Dn} is a set of their respective domains of values, and C = {C1, …, Cm} is a set of constraints. Each variable Xi can take on the values in the nonempty domain Di. Every constraint Cj ∈ C is in turn a pair ⟨tj, Rj⟩, where tj ⊆ {1, 2, …, n} is a set of k indices and Rj is a k-ary relation on the corresponding product of domains ×_{i ∈ tj} Di, where the product is taken with indices in ascending order. An evaluation of the variables is a function from a subset of the variables to a particular set of values in the corresponding subset of domains. An evaluation v satisfies a constraint ⟨tj, Rj⟩ if the values assigned to the variables in tj satisfy the relation Rj. An evaluation is consistent if it does not violate any of the constraints. An evaluation is complete if it includes all variables. An evaluation is a solution if it is consistent and complete; such an evaluation is said to solve the constraint satisfaction problem. Constraint satisfaction problems on finite domains are typically solved using a form of search. The most used techniques are variants of backtracking, constraint propagation, and local search. These techniques are also often combined, as in the VLNS method, and current research involves other technologies such as linear programming.[14] Backtracking is a recursive algorithm. It maintains a partial assignment of the variables. Initially, all variables are unassigned. At each step, a variable is chosen, and all possible values are assigned to it in turn. For each value, the consistency of the partial assignment with the constraints is checked; in case of consistency, a recursive call is performed.
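The formal triple ⟨X, D, C⟩ and the notions of consistent, complete, and solution evaluations can be sketched directly; the instance ("x1 < x2" over a small domain) and the explicit-relation encoding are illustrative choices:

```python
# CSP triple <X, D, C>, following the formal definition above.
X = ["x1", "x2"]
D = {"x1": [0, 1, 2], "x2": [0, 1, 2]}
# One constraint: scope t (indices into X) and relation R given as the
# explicit set of allowed tuples, here encoding "x1 < x2".
C = [((0, 1), {(a, b) for a in D["x1"] for b in D["x2"] if a < b})]

def satisfies(evaluation, scope, relation):
    """An evaluation satisfies <t_j, R_j> if the values it assigns to the
    scope's variables form a tuple of the relation."""
    return tuple(evaluation[X[i]] for i in scope) in relation

def is_solution(evaluation):
    """A solution is consistent (violates no constraint) and complete
    (assigns every variable)."""
    return (len(evaluation) == len(X)
            and all(satisfies(evaluation, t, R) for t, R in C))

print(is_solution({"x1": 0, "x2": 2}))  # True
print(is_solution({"x1": 2, "x2": 1}))  # False
```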
When all values have been tried, the algorithm backtracks. In this basic backtracking algorithm, consistency is defined as the satisfaction of all constraints whose variables are all assigned. Several variants of backtracking exist. Backmarking improves the efficiency of checking consistency. Backjumping allows saving part of the search by backtracking "more than one variable" in some cases. Constraint learning infers and saves new constraints that can be later used to avoid part of the search. Look-ahead is also often used in backtracking to attempt to foresee the effects of choosing a variable or a value, thus sometimes determining in advance when a subproblem is satisfiable or unsatisfiable. Constraint propagation techniques are methods used to modify a constraint satisfaction problem. More precisely, they are methods that enforce a form of local consistency, which are conditions related to the consistency of a group of variables and/or constraints. Constraint propagation has various uses. First, it turns a problem into one that is equivalent but is usually simpler to solve. Second, it may prove satisfiability or unsatisfiability of problems. This is not guaranteed to happen in general; however, it always happens for some forms of constraint propagation and/or for certain kinds of problems. The best-known and most used forms of local consistency are arc consistency, hyper-arc consistency, and path consistency. The most popular constraint propagation method is the AC-3 algorithm, which enforces arc consistency. Local search methods are incomplete satisfiability algorithms. They may find a solution of a problem, but they may fail even if the problem is satisfiable. They work by iteratively improving a complete assignment over the variables. At each step, a small number of variables are changed in value, with the overall aim of increasing the number of constraints satisfied by this assignment.
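The basic backtracking algorithm can be sketched as follows; as in the text, consistency checks only the constraints whose variables are all assigned (the graph-coloring instance and all names are illustrative):

```python
def backtrack(variables, domains, constraints, assignment=None):
    """Basic backtracking search for a CSP. `constraints` is a list of
    (scope, predicate) pairs; a constraint is checked only once all the
    variables in its scope are assigned."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if all(not all(v in assignment for v in scope)
               or pred(*(assignment[v] for v in scope))
               for scope, pred in constraints):
            result = backtrack(variables, domains, constraints, assignment)
            if result is not None:
                return result
        del assignment[var]   # undo and try the next value
    return None               # all values tried: backtrack

# Color a triangle plus a pendant vertex with 3 colors.
vars_ = ["A", "B", "C", "D"]
doms = {v: ["r", "g", "b"] for v in vars_}
ne = lambda x, y: x != y
cons = [(["A", "B"], ne), (["B", "C"], ne),
        (["A", "C"], ne), (["C", "D"], ne)]
solution = backtrack(vars_, doms, cons)
```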
The min-conflicts algorithm is a local search algorithm specific to CSPs and is based on that principle. In practice, local search appears to work well when these changes are also affected by random choices. An integration of search with local search has been developed, leading to hybrid algorithms.

CSPs are also studied in computational complexity theory, finite model theory and universal algebra. It has turned out that questions about the complexity of CSPs translate into important universal-algebraic questions about the underlying algebras. This approach is known as the algebraic approach to CSPs.[15]

Since every computational decision problem is polynomial-time equivalent to a CSP with an infinite template,[16] general CSPs can have arbitrary complexity. In particular, there are also CSPs within the class of NP-intermediate problems, whose existence was demonstrated by Ladner, under the assumption that P ≠ NP. However, a large class of CSPs arising from natural applications satisfy a complexity dichotomy, meaning that every CSP within that class is either in P or NP-complete. These CSPs thus provide one of the largest known subsets of NP which avoid NP-intermediate problems. A complexity dichotomy was first proven by Schaefer for Boolean CSPs, i.e. CSPs over a 2-element domain and where all the available relations are Boolean operators. This result has been generalized for various classes of CSPs, most notably for all CSPs over finite domains.
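A sketch of min-conflicts on the classic n-queens benchmark may make the idea concrete. This is an illustrative assumption-laden sketch (the instance, step limit, and tie-breaking are my choices, not from the source); being a local search, it is incomplete and may return None within the step budget.

```python
# Min-conflicts local search for n-queens: start from a complete random
# assignment (one queen column per row) and repeatedly move a conflicted
# queen to a column minimizing its number of conflicts.
import random

def conflicts(cols, row, col):
    """Number of queens attacking a queen placed at (row, col)."""
    return sum(1 for r, c in enumerate(cols)
               if r != row and (c == col or abs(c - col) == abs(r - row)))

def min_conflicts(n, max_steps=10_000, seed=0):
    rng = random.Random(seed)
    cols = [rng.randrange(n) for _ in range(n)]
    for _ in range(max_steps):
        conflicted = [r for r in range(n) if conflicts(cols, r, cols[r]) > 0]
        if not conflicted:
            return cols                        # complete and consistent
        row = rng.choice(conflicted)
        # move this queen to a minimum-conflict column (random tie-break)
        cols[row] = min(range(n), key=lambda c: (conflicts(cols, row, c),
                                                 rng.random()))
    return None                                # incomplete: may fail

board = min_conflicts(8)
print(board)
```

Note how the randomized row choice and tie-breaking implement the remark above that local search works well "when these changes are also affected by random choices".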
This finite-domain dichotomy conjecture was first formulated by Tomás Feder and Moshe Vardi,[17] and finally proven independently by Andrei Bulatov[18] and Dmitriy Zhuk in 2017.[19] Other classes for which a complexity dichotomy has been confirmed are

Most classes of CSPs that are known to be tractable are those where the hypergraph of constraints has bounded treewidth,[27] or where the constraints have arbitrary form but there exist equationally non-trivial polymorphisms of the set of constraint relations.[28]

An infinite-domain dichotomy conjecture[29] has been formulated for all CSPs of reducts of finitely bounded homogeneous structures, stating that the CSP of such a structure is in P if and only if its polymorphism clone is equationally non-trivial, and NP-hard otherwise. The complexity of such infinite-domain CSPs, as well as of other generalisations (Valued CSPs, Quantified CSPs, Promise CSPs), is still an area of active research.[30][1][2] Every CSP can also be considered as a conjunctive query containment problem.[31]

A similar situation exists between the functional classes FP and #P. By a generalization of Ladner's theorem, there are also problems neither in FP nor #P-complete, as long as FP ≠ #P. As in the decision case, a problem in #CSP is defined by a set of relations. Each problem takes a Boolean formula as input and the task is to compute the number of satisfying assignments. This can be further generalized by using larger domain sizes and attaching a weight to each satisfying assignment and computing the sum of these weights. It is known that any complex-weighted #CSP problem is either in FP or #P-hard.[32]

The classic model of constraint satisfaction problem defines a model of static, inflexible constraints. This rigid model is a shortcoming that makes it difficult to represent problems easily.[33] Several modifications of the basic CSP definition have been proposed to adapt the model to a wide variety of problems.
Dynamic CSPs[34] (DCSPs) are useful when the original formulation of a problem is altered in some way, typically because the set of constraints to consider evolves because of the environment.[35] DCSPs are viewed as a sequence of static CSPs, each one a transformation of the previous one in which variables and constraints can be added (restriction) or removed (relaxation). Information found in the initial formulations of the problem can be used to refine the next ones. The solving method can be classified according to the way in which information is transferred:

Classic CSPs treat constraints as hard, meaning that they are imperative (each solution must satisfy all of them) and inflexible (in the sense that they must be completely satisfied or else they are completely violated). Flexible CSPs relax those assumptions, partially relaxing the constraints and allowing the solution to not comply with all of them. This is similar to preferences in preference-based planning. Some types of flexible CSPs include:

In distributed CSPs[36] each constraint variable is thought of as having a separate geographic location. Strong constraints are placed on information exchange between variables, requiring the use of fully distributed algorithms to solve the constraint satisfaction problem.
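One common flexible-CSP formulation, MAX-CSP, grades assignments by how many constraints they satisfy instead of rejecting violators outright. The following brute-force sketch is illustrative (the tiny cyclic instance and function names are my assumptions), meant only to show the scoring idea:

```python
# MAX-CSP-style scoring: rank complete assignments by the number of
# constraints they satisfy and keep the best one found (brute force).
from itertools import product

def best_assignment(variables, domains, constraints):
    best, best_score = None, -1
    for values in product(*(domains[v] for v in variables)):
        assignment = dict(zip(variables, values))
        score = sum(1 for check in constraints if check(assignment))
        if score > best_score:
            best, best_score = assignment, score
    return best, best_score

# A cyclic set of constraints: at most two of the three can hold at once,
# so a classic (hard) CSP would report the instance unsatisfiable.
cons = [lambda a: a["x"] < a["y"],
        lambda a: a["y"] < a["z"],
        lambda a: a["z"] < a["x"]]
best, score = best_assignment(["x", "y", "z"],
                              {v: [0, 1, 2] for v in "xyz"}, cons)
print(best, score)
```

A hard-constraint solver returns "no solution" here; the flexible view instead returns the best partial compliance, which is the behavior the paragraph above describes.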
https://en.wikipedia.org/wiki/Constraint_satisfaction_problem
In mathematics, a constraint is a condition of an optimization problem that the solution must satisfy. There are several types of constraints: primarily equality constraints, inequality constraints, and integer constraints. The set of candidate solutions that satisfy all constraints is called the feasible set.[1]

The following is a simple optimization problem:

minimize f(x) = x1² + x2⁴

subject to

x1 ≥ 1

and

x2 = 1,

where x denotes the vector (x1, x2).

In this example, the first line defines the function to be minimized (called the objective function, loss function, or cost function). The second and third lines define two constraints, the first of which is an inequality constraint and the second of which is an equality constraint. These two constraints are hard constraints, meaning that it is required that they be satisfied; they define the feasible set of candidate solutions.

Without the constraints, the solution would be (0, 0), where f(x) has the lowest value. But this solution does not satisfy the constraints. The solution of the constrained optimization problem stated above is x = (1, 1), which is the point with the smallest value of f(x) that satisfies the two constraints.

If the problem mandates that the constraints be satisfied, as in the above discussion, the constraints are sometimes referred to as hard constraints. However, in some problems, called flexible constraint satisfaction problems, it is preferred but not required that certain constraints be satisfied; such non-mandatory constraints are known as soft constraints. Soft constraints arise in, for example, preference-based planning. In a MAX-CSP problem, a number of constraints are allowed to be violated, and the quality of a solution is measured by the number of satisfied constraints.

Global constraints[2] are constraints representing a specific relation on a number of variables, taken altogether.
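Assuming the standard form of this example (minimize f(x1, x2) = x1² + x2⁴ subject to x1 ≥ 1 and x2 = 1; the grid and step size below are my illustrative choices), a brute-force search shows how the hard constraints move the minimizer away from the unconstrained optimum:

```python
# Hard constraints vs. the unconstrained optimum, on a coarse grid
# (assumed example: minimize x1**2 + x2**4 s.t. x1 >= 1 and x2 == 1).
def f(x1, x2):
    return x1**2 + x2**4

grid = [i * 0.5 for i in range(-4, 5)]          # -2.0, -1.5, ..., 2.0
unconstrained = min(((x1, x2) for x1 in grid for x2 in grid),
                    key=lambda p: f(*p))
feasible = [(x1, x2) for x1 in grid for x2 in grid
            if x1 >= 1 and x2 == 1]             # the hard constraints
constrained = min(feasible, key=lambda p: f(*p))
print(unconstrained, constrained)
```

The unconstrained minimizer is (0, 0), which is infeasible; restricting the search to the feasible set yields (1, 1), matching the discussion above.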
Some of them, such as the alldifferent constraint, can be rewritten as a conjunction of atomic constraints in a simpler language: the alldifferent constraint holds on n variables x1, …, xn, and is satisfied if the variables take values which are pairwise different. It is semantically equivalent to the conjunction of inequalities x1 ≠ x2, x1 ≠ x3, …, x2 ≠ x3, x2 ≠ x4, …, xn−1 ≠ xn. Other global constraints extend the expressivity of the constraint framework. In this case, they usually capture a typical structure of combinatorial problems. For instance, the regular constraint expresses that a sequence of variables is accepted by a deterministic finite automaton. Global constraints are used[3] to simplify the modeling of constraint satisfaction problems, to extend the expressivity of constraint languages, and also to improve constraint resolution: indeed, by considering the variables altogether, infeasible situations can be seen earlier in the solving process. Many of the global constraints are referenced in an online catalog.
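The semantic equivalence stated above (alldifferent versus the conjunction of pairwise inequalities) can be checked directly; this small sketch uses illustrative helper names of my own:

```python
# alldifferent as a global constraint vs. its decomposition into the
# conjunction of pairwise inequalities xi != xj for all i < j.
from itertools import combinations

def alldifferent(values):
    # global form: no value occurs twice
    return len(set(values)) == len(values)

def pairwise_decomposition(values):
    # decomposed form: every pair of variables takes different values
    return all(a != b for a, b in combinations(values, 2))

print(alldifferent([1, 3, 2]), pairwise_decomposition([1, 3, 2]))
print(alldifferent([1, 3, 1]), pairwise_decomposition([1, 3, 1]))
```

A propagator that sees the whole alldifferent constraint at once can prune earlier than one that sees only the individual inequalities, which is the "improve constraint resolution" point made above.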
https://en.wikipedia.org/wiki/Constraint_(mathematics)
In mathematical optimization and computer science, a feasible region, feasible set, or solution space is the set of all possible points (sets of values of the choice variables) of an optimization problem that satisfy the problem's constraints, potentially including inequalities, equalities, and integer constraints.[1] This is the initial set of candidate solutions to the problem, before the set of candidates has been narrowed down.

For example, consider the problem of minimizing the function x² + y⁴ with respect to the variables x and y, subject to 1 ≤ x ≤ 10 and 5 ≤ y ≤ 12. Here the feasible set is the set of pairs (x, y) in which the value of x is at least 1 and at most 10 and the value of y is at least 5 and at most 12. The feasible set of the problem is separate from the objective function, which states the criterion to be optimized and which in the above example is x² + y⁴.

In many problems, the feasible set reflects a constraint that one or more variables must be non-negative. In pure integer programming problems, the feasible set is the set of integers (or some subset thereof). In linear programming problems, the feasible set is a convex polytope: a region in multidimensional space whose boundaries are formed by hyperplanes and whose corners are vertices. Constraint satisfaction is the process of finding a point in the feasible region.

A convex feasible set is one in which a line segment connecting any two feasible points goes through only other feasible points, and not through any points outside the feasible set. Convex feasible sets arise in many types of problems, including linear programming problems, and they are of particular interest because, if the problem has a convex objective function that is to be minimized, it will generally be easier to solve in the presence of a convex feasible set and any local optimum will also be a global optimum.
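The example above separates the feasible set (a yes/no test on the constraints) from the objective (which only ranks feasible points). A small sketch of that separation, restricted to integer grid points purely for illustration:

```python
# Feasibility test and objective function for the example above:
# minimize x**2 + y**4 subject to 1 <= x <= 10 and 5 <= y <= 12.
def feasible(x, y):
    return 1 <= x <= 10 and 5 <= y <= 12

def objective(x, y):
    return x**2 + y**4

# Enumerate integer candidates, keep the feasible ones, rank by objective.
points = [(x, y) for x in range(0, 12) for y in range(0, 14)
          if feasible(x, y)]
best = min(points, key=lambda p: objective(*p))
print(len(points), best, objective(*best))
```

Note that `feasible` never mentions the objective and `objective` never mentions the constraints, mirroring the article's point that the two are separate ingredients of the problem.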
If the constraints of an optimization problem are mutually contradictory, there are no points that satisfy all the constraints and thus the feasible region is the empty set. In this case the problem has no solution and is said to be infeasible.

Feasible sets may be bounded or unbounded. For example, the feasible set defined by the constraint set {x ≥ 0, y ≥ 0} is unbounded because in some directions there is no limit on how far one can go and still be in the feasible region. In contrast, the feasible set formed by the constraint set {x ≥ 0, y ≥ 0, x + 2y ≤ 4} is bounded because the extent of movement in any direction is limited by the constraints. In linear programming problems with n variables, a necessary but insufficient condition for the feasible set to be bounded is that the number of constraints be at least n + 1 (as illustrated by the above example).

If the feasible set is unbounded, there may or may not be an optimum, depending on the specifics of the objective function. For example, if the feasible region is defined by the constraint set {x ≥ 0, y ≥ 0}, then the problem of maximizing x + y has no optimum since any candidate solution can be improved upon by increasing x or y; yet if the problem is to minimize x + y, then there is an optimum (specifically at (x, y) = (0, 0)).

In optimization and other branches of mathematics, and in search algorithms (a topic in computer science), a candidate solution is a member of the set of possible solutions in the feasible region of a given problem.[2] A candidate solution does not have to be a likely or reasonable solution to the problem—it is simply in the set that satisfies all constraints; that is, it is in the set of feasible solutions. Algorithms for solving various types of optimization problems often narrow the set of candidate solutions down to a subset of the feasible solutions, whose points remain as candidate solutions while the other feasible solutions are henceforth excluded as candidates.
The space of all candidate solutions, before any feasible points have been excluded, is called the feasible region, feasible set, search space, or solution space.[2] This is the set of all possible solutions that satisfy the problem's constraints. Constraint satisfaction is the process of finding a point in the feasible set.

In the case of the genetic algorithm, the candidate solutions are the individuals in the population being evolved by the algorithm.[3]

In calculus, an optimal solution is sought using the first derivative test: the first derivative of the function being optimized is equated to zero, and any values of the choice variable(s) that satisfy this equation are viewed as candidate solutions (while those that do not are ruled out as candidates). There are several ways in which a candidate solution might not be an actual solution. First, it might give a minimum when a maximum is being sought (or vice versa), and second, it might give neither a minimum nor a maximum but rather a saddle point or an inflection point, at which a temporary pause in the local rise or fall of the function occurs. Such candidate solutions may be able to be ruled out by use of the second derivative test, the satisfaction of which is sufficient for the candidate solution to be at least locally optimal. Third, a candidate solution may be a local optimum but not a global optimum.

In taking antiderivatives of monomials of the form xⁿ, the candidate solution using Cavalieri's quadrature formula would be xⁿ⁺¹/(n + 1) + C. This candidate solution is in fact correct except when n = −1.

In the simplex method for solving linear programming problems, a vertex of the feasible polytope is selected as the initial candidate solution and is tested for optimality; if it is rejected as the optimum, an adjacent vertex is considered as the next candidate solution. This process is continued until a candidate solution is found to be the optimum.
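The first- and second-derivative tests above can be sketched on a worked example. The function f(x) = x³ − 3x is my illustrative choice (not from the source); its derivative 3x² − 3 vanishes at x = ±1, giving two candidate solutions which the second derivative 6x then classifies:

```python
# Candidate solutions from the first derivative test, classified by the
# second derivative test, for the illustrative function f(x) = x**3 - 3*x.
def fprime(x):
    return 3 * x**2 - 3        # f'(x): zero exactly at the candidates

def fsecond(x):
    return 6 * x               # f''(x): sign classifies each candidate

# Screen integer points: only those with f'(x) == 0 remain candidates.
candidates = [x for x in [-2, -1, 0, 1, 2] if fprime(x) == 0]
classified = {x: ("local min" if fsecond(x) > 0 else
                  "local max" if fsecond(x) < 0 else "inconclusive")
              for x in candidates}
print(classified)
```

This mirrors the text: the first derivative test only nominates candidates; the second derivative test rules some in as local optima, and neither test by itself certifies a global optimum.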
https://en.wikipedia.org/wiki/Candidate_solution
Decision theory or the theory of rational choice is a branch of probability, economics, and analytic philosophy that uses expected utility and probability to model how individuals would behave rationally under uncertainty.[1][2] It differs from the cognitive and behavioral sciences in that it is mainly prescriptive and concerned with identifying optimal decisions for a rational agent, rather than describing how people actually make decisions. Despite this, the field is important to the study of real human behavior by social scientists, as it lays the foundations to mathematically model and analyze individuals in fields such as sociology, economics, criminology, cognitive science, moral philosophy and political science.[citation needed]

The roots of decision theory lie in probability theory, developed by Blaise Pascal and Pierre de Fermat in the 17th century, which was later refined by others like Christiaan Huygens. These developments provided a framework for understanding risk and uncertainty, which are central to decision-making. In the 18th century, Daniel Bernoulli introduced the concept of "expected utility" in the context of gambling, which was later formalized by John von Neumann and Oskar Morgenstern in the 1940s. Their work on game theory and expected utility theory helped establish a rational basis for decision-making under uncertainty. After World War II, decision theory expanded into economics, particularly with the work of economists like Milton Friedman and others, who applied it to market behavior and consumer choice theory. This era also saw the development of Bayesian decision theory, which incorporates Bayesian probability into decision-making models. By the late 20th century, scholars like Daniel Kahneman and Amos Tversky challenged the assumptions of rational decision-making.
Their work in behavioral economics highlighted cognitive biases and heuristics that influence real-world decisions, leading to the development of prospect theory, which modified expected utility theory by accounting for psychological factors.

Normative decision theory is concerned with identifying optimal decisions, where optimality is often determined by considering an ideal decision maker who is able to calculate with perfect accuracy and is in some sense fully rational. The practical application of this prescriptive approach (how people ought to make decisions) is called decision analysis and is aimed at finding tools, methodologies, and software (decision support systems) to help people make better decisions.[3][4]

In contrast, descriptive decision theory is concerned with describing observed behaviors, often under the assumption that those making decisions are behaving under some consistent rules. These rules may, for instance, have a procedural framework (e.g. Amos Tversky's elimination-by-aspects model) or an axiomatic framework (e.g. stochastic transitivity axioms), reconciling the Von Neumann–Morgenstern axioms with behavioral violations of the expected utility hypothesis, or they may explicitly give a functional form for time-inconsistent utility functions (e.g. Laibson's quasi-hyperbolic discounting).[3][4]

Prescriptive decision theory is concerned with predictions about behavior that positive decision theory produces, to allow for further tests of the kind of decision-making that occurs in practice. In recent decades, there has also been increasing interest in "behavioral decision theory", contributing to a re-evaluation of what useful decision-making requires.[5][6]

The area of choice under uncertainty represents the heart of decision theory.
Known from the 17th century (Blaise Pascal invoked it in his famous wager, which is contained in his Pensées, published in 1670), the idea of expected value is that, when faced with a number of actions, each of which could give rise to more than one possible outcome with different probabilities, the rational procedure is to identify all possible outcomes, determine their values (positive or negative) and the probabilities that will result from each course of action, and multiply the two to give an "expected value", or the average expectation for an outcome; the action to be chosen should be the one that gives rise to the highest total expected value.

In 1738, Daniel Bernoulli published an influential paper entitled Exposition of a New Theory on the Measurement of Risk, in which he uses the St. Petersburg paradox to show that expected value theory must be normatively wrong. He gives an example in which a Dutch merchant is trying to decide whether to insure a cargo being sent from Amsterdam to St. Petersburg in winter. In his solution, he defines a utility function and computes expected utility rather than expected financial value.[7]

In the 20th century, interest was reignited by Abraham Wald's 1939 paper pointing out that the two central procedures of sampling-distribution-based statistical theory, namely hypothesis testing and parameter estimation, are special cases of the general decision problem.[8] Wald's paper renewed and synthesized many concepts of statistical theory, including loss functions, risk functions, admissible decision rules, antecedent distributions, Bayesian procedures, and minimax procedures. The phrase "decision theory" itself was used in 1950 by E. L. Lehmann.[9]

The revival of subjective probability theory, from the work of Frank Ramsey, Bruno de Finetti, Leonard Savage and others, extended the scope of expected utility theory to situations where subjective probabilities can be used.
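The expected-value rule described above (sum probability times value over each action's outcomes, then pick the action with the highest total) is simple to sketch. The insurance-flavored numbers below are made up for illustration and are not Bernoulli's figures:

```python
# Expected-value decision rule: for each action, sum p * v over its
# outcomes, then choose the action with the highest expected value.
# The actions and numbers are illustrative, not from the source.
actions = {
    "insure":      [(1.0, -50)],                  # pay a fixed premium
    "dont_insure": [(0.95, 0), (0.05, -2000)],    # small chance of total loss
}

def expected_value(outcomes):
    return sum(p * v for p, v in outcomes)

evs = {a: expected_value(outs) for a, outs in actions.items()}
best = max(evs, key=evs.get)
print(evs, best)
```

Bernoulli's point, recounted above, is that for many decisions one should apply this same rule to a utility function of wealth rather than to raw monetary values.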
At the time, von Neumann and Morgenstern's theory of expected utility[10] proved that expected utility maximization followed from basic postulates about rational behavior. The work of Maurice Allais and Daniel Ellsberg showed that human behavior has systematic and sometimes important departures from expected-utility maximization (the Allais paradox and Ellsberg paradox).[11] The prospect theory of Daniel Kahneman and Amos Tversky renewed the empirical study of economic behavior with less emphasis on rationality presuppositions. It describes a way by which people make decisions when all of the outcomes carry a risk.[12] Kahneman and Tversky found three regularities in actual human decision-making: "losses loom larger than gains"; people focus more on changes in their utility states than on absolute utilities; and the estimation of subjective probabilities is severely biased by anchoring.

Intertemporal choice is concerned with the kind of choice where different actions lead to outcomes that are realized at different stages over time.[13] It is also described as cost-benefit decision making, since it involves choices between rewards that vary according to magnitude and time of arrival.[14] If someone received a windfall of several thousand dollars, they could spend it on an expensive holiday, giving them immediate pleasure, or they could invest it in a pension scheme, giving them an income at some time in the future. What is the optimal thing to do? The answer depends partly on factors such as the expected rates of interest and inflation, the person's life expectancy, and their confidence in the pensions industry.
However, even with all those factors taken into account, human behavior again deviates greatly from the predictions of prescriptive decision theory, leading to alternative models in which, for example, objective interest rates are replaced by subjective discount rates.[citation needed]

Some decisions are difficult because of the need to take into account how other people in the situation will respond to the decision that is taken. The analysis of such social decisions is often treated under the label of game theory rather than decision theory, though it involves the same mathematical methods. In the emerging field of socio-cognitive engineering, the research is especially focused on the different types of distributed decision-making in human organizations, in normal and abnormal/emergency/crisis situations.[15]

Other areas of decision theory are concerned with decisions that are difficult simply because of their complexity, or the complexity of the organization that has to make them. Individuals making decisions are limited in resources (i.e. time and intelligence) and are therefore boundedly rational; the issue is thus, more than the deviation between real and optimal behavior, the difficulty of determining the optimal behavior in the first place. Decisions are also affected by whether options are framed together or separately; this is known as the distinction bias.[citation needed]

Heuristics are procedures for making a decision without working out the consequences of every option. Heuristics decrease the amount of evaluative thinking required for decisions, focusing on some aspects of the decision while ignoring others.[16] While quicker than step-by-step processing, heuristic thinking is also more likely to involve fallacies or inaccuracies.[17]

One example of a common and erroneous thought process that arises through heuristic thinking is the gambler's fallacy: believing that an isolated random event is affected by previous isolated random events.
For example, if flips of a fair coin give repeated tails, the coin still has the same probability (i.e., 0.5) of tails in future turns, though intuitively it might seem that heads has become more likely.[18] In the long run, heads and tails should occur equally often; people commit the gambler's fallacy when they use this heuristic to predict that a result of heads is "due" after a run of tails.[19] Another example is that decision-makers may be biased towards preferring moderate alternatives to extreme ones. The compromise effect operates under a mindset that the most moderate option carries the most benefit. In an incomplete information scenario, as in most daily decisions, the moderate option will look more appealing than either extreme, independent of the context, based only on the fact that it has characteristics that can be found at either extreme.[20]

A highly controversial issue is whether one can replace the use of probability in decision theory with something else. Advocates for the use of probability theory point to:

The proponents of fuzzy logic, possibility theory, Dempster–Shafer theory, and info-gap decision theory maintain that probability is only one of many alternatives and point to many examples where non-standard alternatives have been implemented with apparent success. Notably, probabilistic decision theory can sometimes be sensitive to assumptions about the probabilities of various events, whereas non-probabilistic rules, such as minimax, are robust in that they do not make such assumptions.

A general criticism of decision theory based on a fixed universe of possibilities is that it considers the "known unknowns", not the "unknown unknowns":[21] it focuses on expected variations, not on unforeseen events, which some argue have outsized impact and must be considered – significant events may be "outside model".
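The independence claim behind the gambler's fallacy can be checked exactly, without simulation, by enumerating the equally likely fair-coin sequences (a small illustrative sketch of my own):

```python
# Exact check: among all equally likely fair-coin sequences whose first
# five flips are tails, exactly half continue with heads -- a run of tails
# does not make heads "due".
from itertools import product

sequences = list(product("HT", repeat=6))                 # all 64 outcomes
after_five_tails = [s for s in sequences if s[:5] == ("T",) * 5]
heads_next = sum(1 for s in after_five_tails if s[5] == "H")
print(len(after_five_tails), heads_next / len(after_five_tails))
```

Only two of the 64 sequences begin with five tails, and exactly one of them continues with heads, so the conditional probability of heads remains 0.5.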
This[which?] line of argument, called the ludic fallacy, is that there are inevitable imperfections in modeling the real world by particular models, and that unquestioning reliance on models blinds one to their limits.
https://en.wikipedia.org/wiki/Decision_theory
Knowledge-based configuration, also referred to as product configuration or product customization, is an activity of customising a product to meet the needs of a particular customer. The product in question may consist of mechanical parts, services, and software. Knowledge-based configuration is a major application area for artificial intelligence (AI), and it is based on modelling of the configurations in a manner that allows the utilisation of AI techniques for searching for a valid configuration to meet the needs of a particular customer.[A 1][A 2][A 3][A 4][A 5][B 1][B 2][B 3]

Knowledge-based configuration (of complex products and services) has a long history as an artificial intelligence application area; see, e.g.,[B 1][A 1][A 6][A 7][A 8][A 9][A 10][A 11] Informally, configuration can be defined as a "special case of design activity, where the artifact being configured is assembled from instances of a fixed set of well-defined component types which can be composed conforming to a set of constraints".[A 2] Such constraints[B 4] represent technical restrictions, restrictions related to economic aspects, and conditions related to production processes. The result of a configuration process is a product configuration (concrete configuration), i.e., a list of instances and in some cases also connections between these instances. Examples of such configurations are computers to be delivered or financial service portfolio offers (e.g., a combination of a loan and corresponding risk insurance).

Numerous practical configuration problems can be analyzed by the theoretical framework of Najmann and Stein,[A 12] an early axiomatic approach that does not presuppose any particular knowledge representation formalism. One important result of this methodology is that typical optimization problems (e.g. finding a cost-minimal configuration) are NP-complete.
Thus they require (potentially) excessive computation time, making heuristic configuration algorithms the preferred choice for complex artifacts (products, services).

Configuration systems,[B 1][A 1][A 2] also referred to as configurators or mass customization toolkits,[A 13] are one of the most successfully applied artificial intelligence technologies. Examples are the automotive industry,[A 9] the telecommunication industry,[A 7] the computer industry,[A 6][A 14] and power electric transformers.[A 8] Starting with rule-based approaches such as R1/XCON,[A 6] model-based representations of knowledge (in contrast to rule-based representations) have been developed that strictly separate product domain knowledge from problem-solving knowledge—examples thereof are the constraint satisfaction problem, the Boolean satisfiability problem, and different answer set programming (ASP) representations. There are two commonly cited conceptualizations of configuration knowledge.[A 3][A 4] The most important concepts in these are components, ports, resources and functions. This separation of product domain knowledge and problem-solving knowledge increased the effectiveness of configuration application development and maintenance,[A 7][A 9][A 10][A 15] since changes in the product domain knowledge do not affect search strategies and vice versa.

Configurators are also often considered as "open innovation toolkits", i.e., tools that support customers in the product identification phase.[A 16] In this context customers are innovators who articulate their requirements, leading to new innovative products.[A 16][A 17][A 18] "Mass confusion"[A 19] – the overwhelming of customers by a large number of possible solution alternatives (choices) – is a phenomenon that often comes with the application of configuration technologies.
This phenomenon motivated the creation of personalized configuration environments taking into account a customer's knowledge and preferences.[A 20][A 21]

Core configuration, i.e., guiding the user and checking the consistency of user requirements with the knowledge base, solution presentation, and translation of configuration results into a bill of materials (BOM) are major tasks to be supported by a configurator.[A 22][B 5][A 5][A 13][A 23] Configuration knowledge bases are often built using proprietary languages.[A 10][A 20][A 24] In most cases knowledge bases are developed by knowledge engineers who elicit product, marketing and sales knowledge from domain experts. Configuration knowledge bases are composed of a formal description of the structure of the product and further constraints restricting the possible feature and component combinations. Configurators known as characteristic-based product configurators use sets of discrete variables that are either binary or have one of several values, and these variables define every possible product variation.

Recently, knowledge-based configuration has been extended to service and software configuration. Modeling software configuration has been based on two main approaches: feature modeling[A 25][B 6] and component-connectors.[A 26] The Kumbang domain ontology combines the previous approaches, building on the tradition of knowledge-based configuration.[A 27]
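A characteristic-based configurator of the kind described above reduces to discrete option variables plus constraints, i.e., a small CSP. The following toy sketch is entirely illustrative (the component names and rules are invented, not taken from the cited systems):

```python
# Toy characteristic-based product configurator: discrete option variables
# plus constraints, enumerated to list all valid product variants.
from itertools import product

options = {
    "cpu": ["basic", "fast"],
    "gpu": ["none", "discrete"],
    "psu": ["300W", "500W"],
}

constraints = [
    # a discrete GPU needs the larger power supply
    lambda c: c["gpu"] != "discrete" or c["psu"] == "500W",
    # the fast CPU is not offered with the small power supply
    lambda c: c["cpu"] != "fast" or c["psu"] == "500W",
]

names = list(options)
valid = [cfg for combo in product(*(options[n] for n in names))
         for cfg in [dict(zip(names, combo))]
         if all(check(cfg) for check in constraints)]
print(len(valid))
```

Separating the product domain knowledge (`options`, `constraints`) from the solving strategy (here, plain enumeration) mirrors the model-based separation the text credits with easier maintenance: the rules can change without touching the search.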
https://en.wikipedia.org/wiki/Knowledge-based_configuration
In mathematical logic, computational complexity theory, and computer science, the existential theory of the reals is the set of all true sentences of the form

∃X1 ⋯ ∃Xn F(X1, …, Xn),

where the variables Xi are interpreted as having real number values, and where F(X1, …, Xn) is a quantifier-free formula involving equalities and inequalities of real polynomials. A sentence of this form is true if it is possible to find values for all of the variables that, when substituted into the formula F, make it become true.[1]

The decision problem for the existential theory of the reals is the problem of finding an algorithm that decides, for each such sentence, whether it is true or false. Equivalently, it is the problem of testing whether a given semialgebraic set is non-empty.[1] This decision problem is NP-hard and lies in PSPACE,[2] giving it significantly lower complexity than Alfred Tarski's quantifier elimination procedure for deciding statements in the first-order theory of the reals without the restriction to existential quantifiers.[1] However, in practice, general methods for the first-order theory remain the preferred choice for solving these problems.[3]

The complexity class ∃ℝ has been defined to describe the class of computational problems that may be translated into equivalent sentences of this form. In structural complexity theory, it lies between NP and PSPACE. Many natural problems in geometric graph theory, especially problems of recognizing geometric intersection graphs and straightening the edges of graph drawings with crossings, belong to ∃ℝ, and are complete for this class.
Here, completeness means that there exists a translation in the reverse direction, from an arbitrary sentence over the reals into an equivalent instance of the given problem.[4]

In mathematical logic, a theory is a formal language consisting of a set of sentences written using a fixed set of symbols. The first-order theory of real closed fields has the following symbols:[5]

A sequence of these symbols forms a sentence that belongs to the first-order theory of the reals if it is grammatically well formed, all its variables are properly quantified, and (when interpreted as a mathematical statement about the real numbers) it is a true statement. As Tarski showed, this theory can be described by an axiom schema and a decision procedure that is complete and effective: for every fully quantified and grammatical sentence, either the sentence or its negation (but not both) can be derived from the axioms. The same theory describes every real closed field, not just the real numbers.[6] However, there are other number systems that are not accurately described by these axioms; in particular, the theory defined in the same way for integers instead of real numbers is undecidable, even for existential sentences (Diophantine equations), by Matiyasevich's theorem.[5][7]

The existential theory of the reals is the fragment of the first-order theory consisting of sentences in which all the quantifiers are existential and they appear before any of the other symbols. That is, it is the set of all true sentences of the form

∃X1 ⋯ ∃Xn F(X1, …, Xn),

where F(X1, …, Xn) is a quantifier-free formula involving equalities and inequalities of real polynomials.
The decision problem for the existential theory of the reals is the algorithmic problem of testing whether a given sentence belongs to this theory; equivalently, for strings that pass the basic syntactic checks (they use the correct symbols with the correct syntax and have no unquantified variables), it is the problem of testing whether the sentence is a true statement about the real numbers. The set of $n$-tuples of real numbers $(X_1,\dots,X_n)$ for which $F(X_1,\dots,X_n)$ is true is called a semialgebraic set, so the decision problem for the existential theory of the reals can equivalently be rephrased as testing whether a given semialgebraic set is nonempty.[1]

In determining the time complexity of algorithms for the decision problem for the existential theory of the reals, it is important to have a measure of the size of the input. The simplest measure of this type is the length of a sentence: that is, the number of symbols it contains.[5] However, in order to achieve a more precise analysis of the behavior of algorithms for this problem, it is convenient to break down the input size into several variables, separating out the number of variables to be quantified, the number of polynomials within the sentence, and the degree of these polynomials.[8]

The golden ratio $\varphi$ may be defined as a root of the polynomial $x^2 - x - 1$. This polynomial has two roots, only one of which (the golden ratio) is greater than one. Thus, the existence of the golden ratio may be expressed by the sentence $\exists X_1\,(X_1 > 1 \wedge X_1 \times X_1 - X_1 - 1 = 0)$. Because a real number satisfying both conditions exists, this is a true sentence, and it belongs to the existential theory of the reals. The answer to the decision problem for the existential theory of the reals, given this sentence as input, is the Boolean value true.
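As a numeric illustration (not a decision procedure for the theory), the witness asserted by the golden-ratio sentence can be located by bisection, since $x^2 - x - 1$ changes sign between 1 and 2:

```python
# Locate the witness for: exists X1 (X1 > 1 and X1*X1 - X1 - 1 = 0).
# Bisection on [1, 2] converges to the unique root greater than one.

def f(x):
    return x * x - x - 1

# f(1) = -1 < 0 and f(2) = 1 > 0, so a root lies in the interval (1, 2)
lo, hi = 1.0, 2.0
for _ in range(60):
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid

witness = (lo + hi) / 2
print(witness)  # ~1.618033988749895, the golden ratio
```

Finding a witness this way confirms the truth of one particular sentence; the general decision problem must handle arbitrary systems of polynomial equalities and inequalities in many variables.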
The inequality of arithmetic and geometric means states that, for every two non-negative numbers $x$ and $y$, the following inequality holds: $\frac{x+y}{2} \geq \sqrt{xy}$. As stated above, it is a first-order sentence about the real numbers, but one with universal rather than existential quantifiers, and one that uses extra symbols for division, square roots, and the number 2 that are not allowed in the first-order theory of the reals. However, by squaring both sides it can be transformed into the following existential statement, which can be interpreted as asking whether the inequality has any counterexamples: $\exists X_1 \exists X_2\,(X_1 \geq 0 \wedge X_2 \geq 0 \wedge (X_1 + X_2)\times(X_1 + X_2) < (1+1+1+1)\times X_1 \times X_2).$

The answer to the decision problem for the existential theory of the reals, given this sentence as input, is the Boolean value false: there are no counterexamples. Therefore, this sentence does not belong to the existential theory of the reals, despite being of the correct grammatical form.

Alfred Tarski's method of quantifier elimination (1948) showed the existential theory of the reals (and more generally the first-order theory of the reals) to be algorithmically solvable, but without an elementary bound on its complexity.[9][6] The method of cylindrical algebraic decomposition, by George E.
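A brute-force sampling check is consistent with that answer. This is illustrative only: random search can never prove the absence of counterexamples, whereas a decision procedure for the theory does.

```python
import random

# Search for counterexamples to (x + y)^2 >= 4*x*y over non-negative reals.
# None exist, since (x + y)^2 - 4*x*y = (x - y)^2 >= 0.
random.seed(1)
counterexamples = []
for _ in range(10_000):
    x = random.uniform(0.0, 100.0)
    y = random.uniform(0.0, 100.0)
    if (x + y) ** 2 < 4 * x * y:
        counterexamples.append((x, y))

print(len(counterexamples))  # 0: no counterexamples found
```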
Collins (1975), improved the time dependence to doubly exponential,[9][10] of the form $L^3 (md)^{2^{O(n)}}$, where $L$ is the number of bits needed to represent the coefficients in the sentence whose value is to be determined, $m$ is the number of polynomials in the sentence, $d$ is their total degree, and $n$ is the number of variables.[8] By 1988, Dima Grigoriev and Nicolai Vorobjov had shown the complexity to be exponential in a polynomial of $n$,[8][11][12] of the form $L(md)^{n^2}$, and in a sequence of papers published in 1992 James Renegar improved this to a singly exponential dependence on $n$,[8][13][14][15] $L \log L \log\log L\,(md)^{O(n)}$. In the meantime, in 1988, John Canny described another algorithm that also has exponential time dependence, but only polynomial space complexity; that is, he showed that the problem could be solved in PSPACE.[2][9]

The asymptotic computational complexity of these algorithms may be misleading, because in practice they can only be run on inputs of very small size.
In a 1991 comparison, Hoon Hong estimated that Collins' doubly exponential procedure would be able to solve a problem whose size is described by setting all the above parameters to 2 in less than a second, whereas the algorithms of Grigoriev, Vorobjov, and Renegar would instead take more than a million years.[8] In 1993, Joos Heintz, Roy, and Solernó suggested that it should be possible to make small modifications to the exponential-time procedures to make them faster in practice than cylindrical algebraic decomposition, as well as faster in theory.[16] However, as of 2009, it was still the case that general methods for the first-order theory of the reals remained superior in practice to the singly exponential algorithms specialized to the existential theory of the reals.[3]

Several problems in computational complexity and geometric graph theory may be classified as complete for the existential theory of the reals. That is, every problem in the existential theory of the reals has a polynomial-time many-one reduction to an instance of one of these problems, and in turn these problems are reducible to the existential theory of the reals.[4][17]

A number of problems of this type concern the recognition of intersection graphs of a certain type. In these problems, the input is an undirected graph; the goal is to determine whether geometric shapes from a certain class of shapes can be associated with the vertices of the graph in such a way that two vertices are adjacent in the graph if and only if their associated shapes have a non-empty intersection.
Problems of this type that are complete for the existential theory of the reals include recognition of intersection graphs of line segments in the plane,[4][18][5] recognition of unit disk graphs,[19] and recognition of intersection graphs of convex sets in the plane.[4]

For graphs drawn in the plane without crossings, Fáry's theorem states that one gets the same class of planar graphs regardless of whether the edges of the graph are drawn as straight line segments or as arbitrary curves. But this equivalence does not hold for other types of drawing. For instance, although the crossing number of a graph (the minimum number of crossings in a drawing with arbitrarily curved edges) may be determined in NP, it is complete for the existential theory of the reals to determine whether there exists a drawing achieving a given bound on the rectilinear crossing number (the minimum number of pairs of edges that cross in any drawing with edges drawn as straight line segments in the plane).[4][20] It is also complete for the existential theory of the reals to test whether a given graph can be drawn in the plane with straight-line edges and with a given set of edge pairs as its crossings, or equivalently, whether a curved drawing with crossings can be straightened in a way that preserves its crossings.[21]

Other complete problems for the existential theory of the reals include:

Based on this, the complexity class $\exists\mathbb{R}$ has been defined as the set of problems having a polynomial-time many-one reduction to the existential theory of the reals.[4]
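The asymmetry in these recognition problems can be seen in a small sketch: given candidate disk centers, verifying that they realize a given graph as a unit disk graph is straightforward, while deciding whether any such centers exist at all is the $\exists\mathbb{R}$-complete task. The function name and the example graph below are illustrative.

```python
# Verification direction only: check whether closed unit disks around the
# given centers realize the given graph as their intersection graph.

def realizes_unit_disk_graph(n, edges, centers, radius=1.0):
    edge_set = {frozenset(e) for e in edges}
    for i in range(n):
        for j in range(i + 1, n):
            (xi, yi), (xj, yj) = centers[i], centers[j]
            # closed disks of equal radius intersect iff their centers
            # are within distance 2 * radius
            intersect = (xi - xj) ** 2 + (yi - yj) ** 2 <= (2 * radius) ** 2
            if intersect != (frozenset((i, j)) in edge_set):
                return False
    return True

# A 3-vertex path 0-1-2: consecutive disks touch, the end disks do not.
centers = [(0.0, 0.0), (2.0, 0.0), (4.0, 0.0)]
assert realizes_unit_disk_graph(3, [(0, 1), (1, 2)], centers)
# The same centers do not realize a triangle (disks 0 and 2 are disjoint).
assert not realizes_unit_disk_graph(3, [(0, 1), (1, 2), (0, 2)], centers)
```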
https://en.wikipedia.org/wiki/Existential_theory_of_the_reals#Complete_problems
Here are some of the more commonly known problems that are PSPACE-complete when expressed as decision problems. This list is in no way comprehensive.

Generalized versions of:

Type inhabitation problem for simply typed lambda calculus

Integer circuit evaluation[24]
https://en.wikipedia.org/wiki/List_of_PSPACE-complete_problems
This is a list of feature films and documentaries that include mathematicians, scientists who use math, or references to mathematicians.

Films where mathematics is central to the plot:

Biographical films based on real-life mathematicians:

Films where one or more of the main characters are mathematicians, but that are not otherwise about mathematics:

Films where one or more of the members of the main cast is a mathematician:
https://en.wikipedia.org/wiki/List_of_films_about_mathematicians
In computational complexity theory, P, also known as PTIME or DTIME($n^{O(1)}$), is a fundamental complexity class. It contains all decision problems that can be solved by a deterministic Turing machine using a polynomial amount of computation time, or polynomial time.

Cobham's thesis holds that P is the class of computational problems that are "efficiently solvable" or "tractable". This is inexact: in practice, some problems not known to be in P have practical solutions, and some that are in P do not, but this is a useful rule of thumb.

A language $L$ is in P if and only if there exists a deterministic Turing machine $M$ that runs in polynomial time on all inputs and decides $L$: for all $x \in L$, $M$ accepts $x$, and for all $x \notin L$, $M$ rejects $x$.

P can also be viewed as a uniform family of Boolean circuits. A language $L$ is in P if and only if there exists a polynomial-time uniform family of Boolean circuits $\{C_n : n \in \mathbb{N}\}$ such that, for all $x \in \{0,1\}^*$, $x \in L$ if and only if $C_{|x|}(x) = 1$. The circuit definition can be weakened to use only a logspace uniform family without changing the complexity class.

P is known to contain many natural problems, including the decision versions of linear programming and finding a maximum matching. In 2002, it was shown that the problem of determining if a number is prime is in P.[1] The related class of function problems is FP. Several natural problems are complete for P, including st-connectivity (or reachability) on alternating graphs.[2] The article on P-complete problems lists further relevant problems in P.

A generalization of P is NP, which is the class of decision problems decidable by a non-deterministic Turing machine that runs in polynomial time. Equivalently, it is the class of decision problems where each "yes" instance has a polynomial-size certificate, and certificates can be checked by a polynomial-time deterministic Turing machine. The class of problems for which this is true for the "no" instances is called co-NP.
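As a concrete illustration of a problem in P, plain st-connectivity in a directed graph can be decided by breadth-first search in time linear in the size of the graph. (The P-complete variant mentioned above is reachability on *alternating* graphs; this sketch shows only the plain case.)

```python
from collections import deque

# Decide whether vertex t is reachable from vertex s in a directed graph,
# given as a vertex count n and a list of (u, v) edges. BFS visits each
# vertex and edge at most once, so the running time is O(n + |edges|).

def reachable(n, edges, s, t):
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
    seen = [False] * n
    seen[s] = True
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                queue.append(v)
    return seen[t]

assert reachable(4, [(0, 1), (1, 2)], 0, 2)
assert not reachable(4, [(0, 1), (1, 2)], 2, 0)
```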
P is trivially a subset of NP and of co-NP; most experts believe it is a proper subset,[3] although this belief (the $\mathsf{P} \subsetneq \mathsf{NP}$ hypothesis) remains unproven. Another open problem is whether NP = co-NP; since P = co-P,[4] a negative answer would imply $\mathsf{P} \subsetneq \mathsf{NP}$.

P is also known to be at least as large as L, the class of problems decidable in a logarithmic amount of memory space. A decider using $O(\log n)$ space cannot use more than $2^{O(\log n)} = n^{O(1)}$ time, because this is the total number of possible configurations; thus, L is a subset of P. Another important problem is whether L = P. We do know that P = AL, the set of problems solvable in logarithmic memory by alternating Turing machines.

P is also known to be no larger than PSPACE, the class of problems decidable in polynomial space. PSPACE is equivalent to NPSPACE by Savitch's theorem. Again, whether P = PSPACE is an open problem. To summarize: $\mathsf{L} \subseteq \mathsf{AL} = \mathsf{P} \subseteq \mathsf{NP} \subseteq \mathsf{PSPACE} \subseteq \mathsf{EXPTIME}$.

Here, EXPTIME is the class of problems solvable in exponential time. Of all the classes shown above, only two strict containments are known: $\mathsf{P} \subsetneq \mathsf{EXPTIME}$ (by the time hierarchy theorem) and $\mathsf{L} \subsetneq \mathsf{PSPACE}$ (by the space hierarchy theorem).

The most difficult problems in P are P-complete problems.

Another generalization of P is P/poly, or Nonuniform Polynomial-Time. If a problem is in P/poly, then it can be solved in deterministic polynomial time provided that an advice string is given that depends only on the length of the input. Unlike for NP, however, the polynomial-time machine doesn't need to detect fraudulent advice strings; it is not a verifier. P/poly is a large class containing nearly all practical problems, including all of BPP. If it contains NP, then the polynomial hierarchy collapses to the second level. On the other hand, it also contains some impractical problems, including some undecidable problems such as the unary version of any undecidable problem.

In 1999, Jin-Yi Cai and D.
Sivakumar, building on work by Mitsunori Ogihara, showed that if there exists a sparse language that is P-complete, then L = P.[5]

P is contained in BQP; it is unknown whether this containment is strict.

Polynomial-time algorithms are closed under composition. Intuitively, this says that if one writes a function that is polynomial-time assuming that function calls are constant-time, and if those called functions themselves require polynomial time, then the entire algorithm takes polynomial time. One consequence of this is that P is low for itself. This is also one of the main reasons that P is considered to be a machine-independent class; any machine "feature", such as random access, that can be simulated in polynomial time can simply be composed with the main polynomial-time algorithm to reduce it to a polynomial-time algorithm on a more basic machine.

Languages in P are also closed under reversal, intersection, union, concatenation, Kleene closure, inverse homomorphism, and complementation.[6]

Some problems are known to be solvable in polynomial time, but no concrete algorithm is known for solving them. For example, the Robertson–Seymour theorem guarantees that there is a finite list of forbidden minors that characterizes (for example) the set of graphs that can be embedded on a torus; moreover, Robertson and Seymour showed that there is an $O(n^3)$ algorithm for determining whether a graph has a given graph as a minor. This yields a nonconstructive proof that there is a polynomial-time algorithm for determining if a given graph can be embedded on a torus, despite the fact that no concrete algorithm is known for this problem.

In descriptive complexity, P can be described as the problems expressible in FO(LFP), the first-order logic with a least fixed point operator added to it, on ordered structures.
In Immerman's 1999 textbook on descriptive complexity,[7] Immerman ascribes this result to Vardi[8] and to Immerman.[9]

It was published in 2001 that PTIME corresponds to (positive) range concatenation grammars.[10]

P can also be defined as an algorithmic complexity class for problems that are not decision problems[11] (even though, for example, finding the solution to a 2-satisfiability instance in polynomial time automatically gives a polynomial algorithm for the corresponding decision problem). In that case, P is not a subset of NP, but P ∩ DEC is, where DEC is the class of decision problems.

Kozen[12] states that Cobham and Edmonds are "generally credited with the invention of the notion of polynomial time," though Rabin also invented the notion independently and around the same time (Rabin's paper[13] was in a 1967 proceedings of a 1966 conference, while Cobham's[14] was in a 1965 proceedings of a 1964 conference and Edmonds's[15] was published in a journal in 1965, though Rabin makes no mention of either and was apparently unaware of them). Cobham invented the class as a robust way of characterizing efficient algorithms, leading to Cobham's thesis.

However, H. C. Pocklington, in a 1910 paper,[16][17] analyzed two algorithms for solving quadratic congruences, and observed that one took time "proportional to a power of the logarithm of the modulus" and contrasted this with one that took time proportional "to the modulus itself or its square root", thus explicitly drawing a distinction between an algorithm that ran in polynomial time versus one that ran in (moderately) exponential time.
https://en.wikipedia.org/wiki/P_(complexity)
In extractor theory, a randomness merger is a function which extracts randomness out of a set of random variables, provided that at least one of them is uniformly random. Its name stems from the fact that it can be seen as a procedure which "merges" all the variables into one, preserving at least some of the entropy contained in the uniformly random variable. Mergers are currently used in order to explicitly construct randomness extractors.

Consider a set of $k$ random variables $X_1, \ldots, X_k$, each distributed over $\{0,1\}^n$, at least one of which is uniformly random; but it is not known which one. Furthermore, the variables may be arbitrarily correlated: they may be functions of one another, they may be constant, and so on. However, since at least one of them is uniform, the set as a whole contains at least $n$ bits of entropy. The job of the merger is to output a new random variable, also distributed over $\{0,1\}^n$, that retains as much of that entropy as possible. Ideally, if it were known which of the variables is uniform, it could be used as the output, but that information is not known. The idea behind mergers is that by using a small additional random seed, it is possible to get a good result even without knowing which one is the uniform variable.

A naive idea would be to take the xor of all the variables. If one of them is uniformly distributed and independent of the other variables, then the output would be uniform. However, if, say, $X_1 = X_2$ and both of them are uniformly distributed, then the method fails: with $k = 2$, the two variables cancel and the output is identically zero.
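The failure of the naive xor idea is easy to demonstrate with a small simulation (illustrative, with $k = 2$): when $X_1 = X_2$ and both are uniform, every sample of the xor is the all-zero string, so the output has zero min-entropy despite the $n$ bits of entropy in the input.

```python
import random

# X1 is uniform over {0,1}^8; X2 = X1 is also uniform on its own, but the
# pair is fully correlated, so xor-ing the blocks destroys all entropy.
random.seed(0)
n = 8
outcomes = set()
for _ in range(1000):
    x1 = tuple(random.randrange(2) for _ in range(n))
    x2 = x1  # fully correlated copy
    z = tuple(a ^ b for a, b in zip(x1, x2))
    outcomes.add(z)

print(outcomes)  # {(0, 0, 0, 0, 0, 0, 0, 0)} -- a single constant value
```

A genuine merger avoids this by spending a short truly random seed, so that its output retains min-entropy no matter how the blocks are correlated.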
Definition (merger): A function $M : (\{0,1\}^n)^k \times \{0,1\}^d \rightarrow \{0,1\}^n$ is called an $(m, \varepsilon)$-merger if for every set of random variables $(X_1, \ldots, X_k)$ distributed over $\{0,1\}^n$, at least one of which is uniform, the distribution of $Z = M(X_1, \ldots, X_k, U_d)$ has smooth min-entropy $H_\infty^\varepsilon(Z) \geq m$. The variable $U_d$ denotes the uniform distribution over $d$ bits, and represents a truly random seed.

In other words, by using a small uniform seed of length $d$, the merger returns a string which is $\varepsilon$-close to having at least $m$ min-entropy; this means that its statistical distance from a string with $m$ min-entropy is no larger than $\varepsilon$.

Reminder: There are several notions of measuring the randomness of a distribution; the min-entropy of a random variable $Z$ is defined as the largest $k$ such that the most probable value of $Z$ occurs with probability no more than $2^{-k}$. The min-entropy of a string is an upper bound on the amount of randomness that can be extracted from it.[1]

There are three parameters to optimize when building mergers:

Explicit constructions for mergers are known with relatively good parameters. For example, Dvir and Wigderson's construction gives:[2] for every $\alpha > 0$ and integer $n$, if $k \leq 2^{o(n)}$, there exists an explicit $(m, \varepsilon)$-merger $M : (\{0,1\}^n)^k \times \{0,1\}^d \rightarrow \{0,1\}^n$ such that:

The proof is constructive and allows building such a merger in polynomial time in the given parameters.
It is possible to use mergers in order to produce randomness extractors with good parameters. Recall that an extractor is a function which takes a random variable that has high min-entropy, and returns a smaller random variable that is close to uniform. An arbitrary min-entropy extractor can be obtained using the following merger-based scheme:[2][3]

The essence of the scheme above is to use the merger in order to transform a string with arbitrary min-entropy into a smaller string, while not losing a lot of min-entropy in the process. This new string has very high min-entropy compared to its length, and it is then possible to use older, known extractors which only work for that type of string.
https://en.wikipedia.org/wiki/Randomness_merger
Fuzzy extractors are a method that allows biometric data to be used as inputs to standard cryptographic techniques, to enhance computer security. "Fuzzy", in this context, refers to the fact that the fixed values required for cryptography will be extracted from values close to, but not identical to, the original key, without compromising the security required. One application is to encrypt and authenticate user records, using the biometric inputs of the user as a key.

Fuzzy extractors are a biometric tool that allows for user authentication, using a biometric template constructed from the user's biometric data as the key, by extracting a uniform and random string $R$ from an input $w$, with a tolerance for noise. If the input changes to $w'$ but is still close to $w$, the same string $R$ will be re-constructed. To achieve this, during the initial computation of $R$ the process also outputs a helper string $P$ which will be stored to recover $R$ later, and which can be made public without compromising the security of $R$. The security of the process is also ensured when an adversary modifies $P$. Once the fixed string $R$ has been calculated, it can be used, for example, for key agreement between a user and a server based only on a biometric input.[1][2]

One precursor to fuzzy extractors was the so-called "fuzzy commitment", as designed by Juels and Wattenberg.[2] Here, the cryptographic key is decommitted using biometric data. Later, Juels and Sudan came up with fuzzy vault schemes. These are order-invariant for the fuzzy commitment scheme and use a Reed–Solomon error correction code. The code word is inserted as the coefficients of a polynomial, and this polynomial is then evaluated with respect to various properties of the biometric data.
Both fuzzy commitment and fuzzy vaults were precursors to fuzzy extractors.[citation needed]

In order for fuzzy extractors to generate strong keys from biometric and other noisy data, cryptographic paradigms are applied to this biometric data. These paradigms:

(1) Limit the number of assumptions about the content of the biometric data (this data comes from a variety of sources, so in order to avoid exploitation by an adversary, it is best to assume the input is unpredictable).

(2) Apply usual cryptographic techniques to the input. (Fuzzy extractors convert biometric data into secret, uniformly random, and reliably reproducible random strings.)

These techniques can also have broader applications for other types of noisy inputs, such as approximate data from human memory, images used as passwords, and keys from quantum channels.[2] Fuzzy extractors also have applications in the proof of impossibility of strong notions of privacy with regard to statistical databases.[3]

Predictability indicates the probability that an adversary can guess a secret key. Mathematically speaking, the predictability of a random variable $A$ is $\max_a P[A = a]$. For example, given a pair of random variables $A$ and $B$, if the adversary knows the value $b$ of $B$, then the predictability of $A$ becomes $\max_a P[A = a \mid B = b]$. So, an adversary can predict $A$ with probability $E_{b \leftarrow B}[\max_a P[A = a \mid B = b]]$. We take the average over $B$ because its value is not under the adversary's control, but since knowing $b$ makes the prediction of $A$ adversarial, we take the worst case over $A$.

Min-entropy indicates the worst-case entropy. Mathematically speaking, it is defined as $H_\infty(A) = -\log(\max_a P[A = a])$.
A random variable with a min-entropy of at least $m$ is called an $m$-source.

Statistical distance is a measure of distinguishability. Mathematically speaking, it is expressed for two probability distributions $A$ and $B$ as $SD[A,B] = \frac{1}{2}\sum_v |P[A = v] - P[B = v]|$. In any system, if $A$ is replaced by $B$, the system will behave like the original system with probability at least $1 - SD[A,B]$.

Let $M$ be the domain of a strong randomness extractor. The randomized function $\operatorname{Ext} : M \rightarrow \{0,1\}^l$, with randomness of length $r$, is an $(m, l, \epsilon)$ strong extractor if for all $m$-sources $W$ on $M$, $(\operatorname{Ext}(W; I), I) \approx_\epsilon (U_l, U_r)$, where $I = U_r$ is independent of $W$. The output of the extractor is a key generated from $w \leftarrow W$ with the seed $i \leftarrow I$. It behaves independently of other parts of the system, with probability $1 - \epsilon$. Strong extractors can extract at most $l = m - 2\log\frac{1}{\epsilon} + O(1)$ bits from an arbitrary $m$-source.

A secure sketch makes it possible to reconstruct noisy input: if the input is $w$ and the sketch is $s$, then given $s$ and a value $w'$ close to $w$, $w$ can be recovered. But the sketch $s$ must not reveal information about $w$, in order to keep it secure.
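For small explicit distributions, the quantities defined above can be computed directly. The sketch below (helper names are illustrative) represents a distribution as a dict from outcomes to probabilities:

```python
import math

# min_entropy: H_inf(A) = -log2(max_a P[A = a]).
# statistical_distance: SD[A, B] = (1/2) * sum_v |P[A = v] - P[B = v]|.

def min_entropy(dist):
    return -math.log2(max(dist.values()))

def statistical_distance(p, q):
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(v, 0.0) - q.get(v, 0.0)) for v in support)

uniform = {"00": 0.25, "01": 0.25, "10": 0.25, "11": 0.25}
skewed = {"00": 0.5, "01": 0.25, "10": 0.125, "11": 0.125}

print(min_entropy(uniform))                   # 2.0 bits
print(min_entropy(skewed))                    # 1.0 bit: at best a 1-source
print(statistical_distance(uniform, skewed))  # 0.25
```

The skewed distribution illustrates why min-entropy, not Shannon entropy, is the right measure here: an adversary guessing its most likely outcome succeeds with probability $2^{-1}$, regardless of how the remaining mass is spread.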
If $\mathbb{M}$ is a metric space, a secure sketch recovers the point $w \in \mathbb{M}$ from any point $w' \in \mathbb{M}$ close to $w$, without disclosing $w$ itself. An $(m, \tilde{m}, t)$ secure sketch is a pair of efficient randomized procedures (SS – Sketch; Rec – Recover) such that:

(1) The sketching procedure SS takes as input $w \in \mathbb{M}$ and returns a string $s \in \{0,1\}^*$.

(2) Correctness: If $\mathrm{dis}(w, w') \leq t$ then $\mathrm{Rec}(w', SS(w)) = w$.

(3) Security: For any $m$-source over $\mathbb{M}$, the min-entropy of $W$ given $s$ is high: $\tilde{H}_\infty(W \mid SS(W)) \geq \tilde{m}$.

Fuzzy extractors do not recover the original input but generate a string $R$ (which is close to uniform) from $w$ and allow its subsequent reproduction (using a helper string $P$) given any $w'$ close to $w$. Strong extractors are a special case of fuzzy extractors when $t = 0$ and $P = I$.

An $(m, l, t, \epsilon)$ fuzzy extractor is a pair of efficient randomized procedures (Gen – Generate; Rep – Reproduce) such that:

(1) Gen, given $w \in \mathbb{M}$, outputs an extracted string $R \in \{0,1\}^l$ and a helper string $P \in \{0,1\}^*$.

(2) Correctness: If $\mathrm{dis}(w, w') \leq t$ and $(R, P) \leftarrow \mathrm{Gen}(w)$, then $\mathrm{Rep}(w', P) = R$.

(3) Security: For all $m$-sources $W$ over $\mathbb{M}$, the string $R$ is nearly uniform, even given $P$. So, when $\tilde{H}_\infty(W \mid E) \geq m$, then $(R, P, E) \approx (U_l, P, E)$.
So fuzzy extractors output almost uniformly random sequences of bits, which are a prerequisite for use in cryptographic applications (as secret keys). Since the output bits are slightly non-uniform, there is a risk of decreased security; but the distance from a uniform distribution is no more than $\epsilon$, and as long as this distance is sufficiently small, the security remains adequate.

Secure sketches can be used to construct fuzzy extractors: for example, applying SS to $w$ to obtain $s$, and a strong extractor Ext, with randomness $x$, to $w$, to get $R$. The pair $(s, x)$ can be stored as the helper string $P$. $R$ can then be reproduced from $w'$ and $P = (s, x)$: $\mathrm{Rec}(w', s)$ recovers $w$, and $\mathrm{Ext}(w, x)$ reproduces $R$. The following lemma formalizes this.

Lemma: Assume (SS, Rec) is an $(\mathbb{M}, m, \tilde{m}, t)$ secure sketch and let Ext be an average-case $(n, \tilde{m}, l, \epsilon)$ strong extractor. Then the following (Gen, Rep) is an $(\mathbb{M}, m, l, t, \epsilon)$ fuzzy extractor:

(1) Gen$(w, r, x)$: set $P = (SS(w; r), x)$, $R = \mathrm{Ext}(w; x)$, and output $(R, P)$.

(2) Rep$(w', (s, x))$: recover $w = \mathrm{Rec}(w', s)$ and output $R = \mathrm{Ext}(w; x)$.

Proof: If (SS, Rec) is an $(\mathbb{M}, m, \tilde{m}, t)$ secure sketch and Ext is an $(n, \tilde{m} - \log(\frac{1}{\delta}), l, \epsilon)$ strong extractor, then the above construction (Gen, Rep) is an $(\mathbb{M}, m, l, t, \epsilon + \delta)$ fuzzy extractor.
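The sketch-then-extract construction of the lemma can be illustrated end to end. Everything here is a loudly labeled stand-in: the secure sketch is a 3-fold repetition code-offset (correcting one flipped bit per block), and a salted SHA-256 plays the role of the seeded strong extractor (a real construction would use, for example, a universal hash family); all names are illustrative.

```python
import hashlib
import secrets

R = 3  # repetition factor: corrects 1 flipped bit per 3-bit block

def ss(w):
    # SS(w; r): mask w with a uniformly random repetition codeword
    msg = [secrets.randbelow(2) for _ in range(len(w) // R)]
    c = [b for b in msg for _ in range(R)]
    return [wi ^ ci for wi, ci in zip(w, c)]

def rec(w_prime, s):
    # Rec(w', s): unmask, majority-decode each block, re-encode, re-mask
    shifted = [a ^ b for a, b in zip(w_prime, s)]
    msg = [int(sum(shifted[i:i + R]) > R // 2)
           for i in range(0, len(shifted), R)]
    c = [b for b in msg for _ in range(R)]
    return [si ^ ci for si, ci in zip(s, c)]

def ext(w, x):
    # stand-in for a seeded strong extractor: hash w under public seed x
    return hashlib.sha256(bytes(w) + x).hexdigest()[:16]

def gen(w):
    # Gen(w) -> (R, P): key plus public helper string P = (s, x)
    x = secrets.token_bytes(16)
    return ext(w, x), (ss(w), x)

def rep(w_prime, p):
    # Rep(w', P) -> R: recover w from the noisy reading, re-extract
    s, x = p
    return ext(rec(w_prime, s), x)

w = [1, 0, 1, 1, 0, 1] * 2        # a 12-bit "biometric" reading (4 blocks)
key, helper = gen(w)
noisy = list(w)
noisy[4] ^= 1                     # one bit flip within a single block
assert rep(noisy, helper) == key  # the same key is reproduced
```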
The cited paper includes many generic combinatorial bounds on secure sketches and fuzzy extractors.[2]

Due to their error-tolerant properties, secure sketches can be treated, analyzed, and constructed like a general $(n, k, d)_{\mathcal{F}}$ error-correcting code, or $[n, k, d]_{\mathcal{F}}$ for linear codes, where $n$ is the length of codewords, $k$ is the length of the message to be coded, $d$ is the distance between codewords, and $\mathcal{F}$ is the alphabet. If $\mathcal{F}^n$ is the universe of possible words, then it may be possible to find an error-correcting code $C \subset \mathcal{F}^n$ such that there exists a unique codeword $c \in C$ for every $w \in \mathcal{F}^n$ with a Hamming distance of $\mathrm{dis}_{Ham}(c, w) \leq (d-1)/2$.

The first step in constructing a secure sketch is determining the type of errors that will likely occur and then choosing a distance measure. When there is no risk of data being deleted, and only of its being corrupted, the best measurement to use for error correction is the Hamming distance. There are two common constructions for correcting Hamming errors, depending on whether the code is linear or not. Both constructions start with an error-correcting code that has distance $2t + 1$, where $t$ is the number of tolerated errors.

When using a general $(n, k, 2t+1)_{\mathcal{F}}$ code, assign a uniformly random codeword $c \in C$ to each $w$, then let $SS(w) = s = w - c$, which is the shift needed to change $c$ into $w$.
To fix errors in $w'$, subtract $s$ from $w'$, then correct the errors in the resulting incorrect codeword to get $c$, and finally add $s$ to $c$ to get $w$. This means $\mathrm{Rec}(w', s) = s + \mathrm{dec}(w' - s) = w$. This construction can achieve the best possible tradeoff between error tolerance and entropy loss when $|\mathcal{F}| \geq n$ and a Reed–Solomon code is used, resulting in an entropy loss of $2t \log(|\mathcal{F}|)$. The only way to improve upon this result would be to find a code better than Reed–Solomon.

When using a linear $[n, k, 2t+1]_{\mathcal{F}}$ code, let $SS(w) = s$ be the syndrome of $w$. To correct $w'$, find a vector $e$ such that $\mathrm{syn}(e) = \mathrm{syn}(w') - s$; then $w = w' - e$.

When working with a very large alphabet or very long strings, resulting in a very large universe $\mathcal{U}$, it may be more efficient to treat $w$ and $w'$ as sets and look at set differences to correct errors. To work with a large set $w$, it is useful to look at its characteristic vector $x_w$: a binary vector of length $n$ that has value 1 when an element $a \in \mathcal{U}$ is in $w$, and 0 when $a \notin w$. The best way to decrease the size of a secure sketch when $n$ is large is to make $k$ large, since the size is determined by $n - k$. A good code on which to base this construction is an $[n, n - t\alpha, 2t+1]_2$ BCH code, where $n = 2^\alpha - 1$ and $t \ll n$, so that $k \leq n - \log\binom{n}{t}$.
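The Hamming-distance code-offset construction $SS(w) = w - c$ can be sketched over bit strings, where subtraction and addition are both xor over GF(2). The toy below uses a 5-fold repetition code (correcting up to 2 flipped bits per block) purely for illustration; a real instantiation would use a stronger code such as BCH or Reed–Solomon.

```python
import secrets

R = 5  # repetition factor; majority vote corrects (R - 1) // 2 = 2
       # flipped bits within each length-R block

def encode(msg_bits):
    # repetition codeword: each message bit repeated R times
    return [b for b in msg_bits for _ in range(R)]

def decode(code_bits):
    # majority vote within each length-R block
    return [int(sum(code_bits[i:i + R]) > R // 2)
            for i in range(0, len(code_bits), R)]

def sketch(w):
    # SS(w) = w - c = w XOR c, for a uniformly random codeword c
    msg = [secrets.randbelow(2) for _ in range(len(w) // R)]
    return [wi ^ ci for wi, ci in zip(w, encode(msg))]

def recover(w_prime, s):
    # Rec(w', s) = s + dec(w' - s): unmask, decode, re-encode, re-mask
    shifted = [a ^ b for a, b in zip(w_prime, s)]
    c = encode(decode(shifted))
    return [si ^ ci for si, ci in zip(s, c)]

w = [1, 0, 1, 1, 0] * 4        # 20 bits = 4 blocks
s = sketch(w)
noisy = list(w)
noisy[0] ^= 1                  # one error in the first block
noisy[7] ^= 1                  # one error in the second block
assert recover(noisy, s) == w  # both errors are corrected
```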
It is useful that BCH codes can be decoded in sub-linear time. Let $SS(w) = s = syn(x_w)$. To correct $w'$, first compute $SS(w') = s' = syn(x_{w'})$, then find a set $v$ where $syn(x_v) = s' - s$, and finally compute the symmetric difference to get $Rec(w', s) = w' \triangle v = w$. While this is not the only construction that can be used for set difference, it is the easiest one. When data can be corrupted or deleted, the best measure to use is edit distance. The easiest way to build a construction based on edit distance is to start with a construction for set difference or Hamming distance as an intermediate correction step, and then build the edit-distance construction around it. There are many other types of errors and distances that can be used to model other situations, and most of the resulting constructions are built upon simpler ones, such as the edit-distance construction. It can be shown that the error tolerance of a secure sketch can be improved by applying a probabilistic method to error correction with a high probability of success. This allows the number of correctable errors to exceed the Plotkin bound, which limits correction to $n/4$ errors, and to approach Shannon's bound, which allows for nearly $n/2$ corrections. To achieve this enhanced error correction, a less restrictive error-distribution model must be used. For the most restrictive model, use a binary symmetric channel $BSC_p$ to create a $w'$ in which each bit is received incorrectly with probability $p$. 
This model can show that entropy loss is limited to $nH(p) - o(n)$, where $H$ is the binary entropy function. If the min-entropy satisfies $m \ge n\,H(\tfrac{1}{2} - \gamma) + \varepsilon$, then $n(\tfrac{1}{2} - \gamma)$ errors can be tolerated, for some constant $\gamma > 0$. In the next model, errors do not have a known distribution and can come from an adversary, the only constraints being $dis_{err} \le t$ and that a corrupted word depends only on the input $w$ and not on the secure sketch. It can be shown for this error model that there will never be more than $t$ errors; since the model can account for arbitrarily complex noise processes, Shannon's bound can be reached. To do this, a random permutation is prepended to the secure sketch, which reduces entropy loss. The final model differs from the input-dependent model in that errors may depend on both the input $w$ and the secure sketch, and the adversary is limited to polynomial-time algorithms for introducing errors. Since algorithms that run in better-than-polynomial time are not currently feasible in the real world, a positive result using this error model would guarantee that any errors can be fixed. This is the least restrictive model, where the only known way to approach Shannon's bound is to use list-decodable codes, although this may not always be useful in practice, since returning a list instead of a single codeword may not always be acceptable. In general, a secure system attempts to leak as little information as possible to an adversary. In the case of biometrics, if information about the biometric reading is leaked, the adversary may be able to learn personal information about a user; for example, an adversary might notice a certain pattern in the helper strings that implies the ethnicity of the user. 
We can consider this additional information to be a function $f(W)$. If an adversary learns a helper string, it must be ensured that they cannot infer from it any data about the person from whom the biometric reading was taken. Ideally the helper string $P$ would reveal no information about the biometric input $w$. This is only possible when every subsequent biometric reading $w'$ is identical to the original $w$; in that case there is no actual need for the helper string, so it is easy to generate a string that is in no way correlated to $w$. Since it is desirable to accept biometric inputs $w'$ similar to $w$, however, the helper string $P$ must be somehow correlated with $w$. The more different $w$ and $w'$ are allowed to be, the more correlation there will be between $P$ and $w$; and the more correlated they are, the more information $P$ reveals about $w$. The best possible solution is to make sure an adversary cannot learn anything useful from the helper string. A probabilistic map $Y()$ hides the results of functions with a small amount of leakage $\epsilon$. The leakage is the difference between the probabilities that two adversaries, one knowing the probabilistic map and one not, can guess some function. Formally: if the function $\operatorname{Gen}(W)$ is a probabilistic map, then even if an adversary knows both the helper string $P$ and the secret string $R$, they are only negligibly more likely to figure something out about the subject than if they knew nothing. 
The string $R$ is supposed to be kept secret, so even if it is leaked (which should be very unlikely), the adversary can still learn nothing useful about the subject, as long as $\epsilon$ is small. We can consider $f(W)$ to be any correlation between the biometric input and some physical characteristic of the person. Setting $Y = \operatorname{Gen}(W) = (R, P)$ in the above equation means that if one adversary $A_1$ has $(R, P)$ and a second adversary $A_2$ knows nothing, their best guesses at $f(W)$ are only $\epsilon$ apart. Uniform fuzzy extractors are a special case of fuzzy extractors in which the output $(R, P)$ of $Gen(W)$ is negligibly different from strings picked from the uniform distribution, i.e. $(R, P) \approx_{\epsilon} (U_{\ell}, U_{|P|})$. Since secure sketches imply fuzzy extractors, constructing a uniform secure sketch allows for the easy construction of a uniform fuzzy extractor. In a uniform secure sketch, the sketch procedure $SS(w)$ is a randomness extractor $Ext(w; i)$, where $w$ is the biometric input and $i$ is the random seed. Since randomness extractors output a string that appears to be drawn from a uniform distribution, they hide all information about their input. Extractor sketches can be used to construct $(m, t, \epsilon)$-fuzzy perfectly one-way hash functions. When used as a hash function, the input $w$ is the object to be hashed, and the pair $(P, R)$ that $Gen(w)$ outputs is the hash value. To verify that a $w'$ is within distance $t$ of the original $w$, one verifies that $Rep(w', P) = R$. 
Such fuzzy perfectly one-way hash functions are special hash functions that accept any input with at most $t$ errors, compared to traditional hash functions, which only accept an input that matches the original exactly. Traditional cryptographic hash functions attempt to guarantee that it is computationally infeasible to find two different inputs that hash to the same value. Fuzzy perfectly one-way hash functions make an analogous claim: they make it computationally infeasible to find two inputs that are more than Hamming distance $t$ apart and hash to the same value. In an active attack, an adversary can modify the helper string $P$. If the adversary is able to change $P$ to another string that is also acceptable to the reproduce function $Rep(W, P)$, it causes $Rep(W, P)$ to output an incorrect secret string $\tilde{R}$. Robust fuzzy extractors solve this problem by allowing the reproduce function to fail if a modified helper string is provided as input. One method of constructing robust fuzzy extractors is to use hash functions. This construction requires two hash functions, $H_1$ and $H_2$. The $Gen(W)$ function produces the helper string $P$ by appending to the output of a secure sketch $s = SS(w)$ the hash of both the reading $w$ and the sketch $s$. It generates the secret string $R$ by applying the second hash function to $w$ and $s$. Formally:

$Gen(w)$: $s = SS(w)$; return $P = (s, H_1(w, s))$, $R = H_2(w, s)$.

The reproduce function $Rep(W, P)$ also makes use of the hash functions $H_1$ and $H_2$. 
In addition to verifying that the biometric input is similar enough to the one recovered using the $Rec(W, S)$ function, it also verifies that the hash in the second part of $P$ was actually derived from $w$ and $s$. If both conditions are met, it returns $R$, which is itself the second hash function applied to $w$ and $s$. Formally:

$Rep(w', \tilde{P})$: get $\tilde{s}$ and $\tilde{h}$ from $\tilde{P}$; $\tilde{w} = Rec(w', \tilde{s})$. If $\Delta(\tilde{w}, w') \le t$ and $\tilde{h} = H_1(\tilde{w}, \tilde{s})$, then return $H_2(\tilde{w}, \tilde{s})$; else return fail.

If $P$ has been tampered with, it will be obvious, because $Rep$ will fail with very high probability. To cause the algorithm to accept a different $P$, an adversary would have to find a $\tilde{w}$ such that $H_1(w, s) = H_1(\tilde{w}, \tilde{s})$. Since hash functions are believed to be one-way functions, it is computationally infeasible to find such a $\tilde{w}$. Seeing $P$ would provide an adversary with no useful information: since hash functions are one-way, it is computationally infeasible for the adversary to reverse the hash function and figure out $w$, and although part of $P$ is the secure sketch, by definition the sketch reveals negligible information about its input. Similarly, seeing $R$ (even though an adversary should never see it) would provide no useful information, as the adversary would not be able to reverse the hash function and recover the biometric input.
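Putting the pieces together, the robust construction can be sketched in Python. This is a toy illustration under stated assumptions: the secure sketch is a code-offset sketch built from a 3-fold repetition code (each 3-bit block tolerating one flipped bit), and SHA-256 with a domain-separation byte stands in for the two hash functions $H_1$ and $H_2$; none of the names below come from a particular library.

```python
import hashlib
import secrets

N = 3  # repetition factor of the toy code underlying the secure sketch

def enc(m):
    # repetition encode: each message bit is written N times
    return [b for b in m for _ in range(N)]

def dec(c):
    # majority vote per block corrects up to 1 flipped bit per block
    return [int(sum(c[i:i + N]) > N // 2) for i in range(0, len(c), N)]

def SS(w):
    # code-offset secure sketch: s = w XOR (random codeword)
    c = enc([secrets.randbelow(2) for _ in range(len(w) // N)])
    return [wi ^ ci for wi, ci in zip(w, c)]

def Rec(w_noisy, s):
    # shift by s, decode, shift back
    c = enc(dec([wi ^ si for wi, si in zip(w_noisy, s)]))
    return [ci ^ si for ci, si in zip(c, s)]

def H(tag, w, s):
    # H1 (tag=1) and H2 (tag=2) modeled as SHA-256 with a domain-separation byte
    return hashlib.sha256(bytes([tag] + w + s)).hexdigest()

def Gen(w):
    s = SS(w)
    return (s, H(1, w, s)), H(2, w, s)      # P = (s, H1(w, s)),  R = H2(w, s)

def Rep(w_noisy, P, t):
    s, h = P
    w_rec = Rec(w_noisy, s)                 # candidate recovery of the reading
    dist = sum(a != b for a, b in zip(w_rec, w_noisy))
    if dist <= t and h == H(1, w_rec, s):   # both checks from the definition
        return H(2, w_rec, s)
    return None                             # fail: P tampered with, or w_noisy too far

# toy usage: enroll a 9-bit reading, then reproduce from a noisy re-reading
w = [1, 0, 1, 0, 0, 1, 1, 1, 0]
P, R = Gen(w)
w_noisy = list(w)
w_noisy[0] ^= 1
```

A legitimate noisy reading reproduces `R`, while flipping even one bit of the sketch inside `P` makes the $H_1$ check fail, so `Rep` returns `None`.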
https://en.wikipedia.org/wiki/Fuzzy_extractor
In statistics and data mining, affinity propagation (AP) is a clustering algorithm based on the concept of "message passing" between data points.[1] Unlike clustering algorithms such as k-means or k-medoids, affinity propagation does not require the number of clusters to be determined or estimated before running the algorithm. Similar to k-medoids, affinity propagation finds "exemplars," members of the input set that are representative of clusters.[1] Let $x_1$ through $x_n$ be a set of data points, with no assumptions made about their internal structure, and let $s$ be a function that quantifies the similarity between any two points, such that $s(i,j) > s(i,k)$ if and only if $x_i$ is more similar to $x_j$ than to $x_k$. For this example, the negative squared distance of two data points is used, i.e. for points $x_i$ and $x_k$, $s(i,k) = -\|x_i - x_k\|^2$.[1] The diagonal of $s$ (i.e. $s(i,i)$) is particularly important, as it represents the instance preference, meaning how likely a particular instance is to become an exemplar. When it is set to the same value for all inputs, it controls how many classes the algorithm produces: a value close to the minimum possible similarity produces fewer classes, while a value close to or larger than the maximum possible similarity produces many classes. It is typically initialized to the median similarity of all pairs of inputs. The algorithm proceeds by alternating between two message-passing steps, which update two matrices:[1] the "responsibility" matrix $R$, whose entry $r(i,k)$ quantifies how well-suited $x_k$ is to serve as the exemplar for $x_i$ relative to other candidate exemplars, and the "availability" matrix $A$, whose entry $a(i,k)$ reflects how appropriate it would be for $x_i$ to pick $x_k$ as its exemplar, taking into account other points' support for $x_k$ as an exemplar. Both matrices are initialized to all zeroes and can be viewed as log-probability tables. The algorithm then performs the following updates iteratively:

$r(i,k) \leftarrow s(i,k) - \max_{k' \neq k} \{ a(i,k') + s(i,k') \}$

$a(i,k) \leftarrow \min\left(0,\; r(k,k) + \sum_{i' \notin \{i,k\}} \max(0, r(i',k))\right)$ for $i \neq k$, and

$a(k,k) \leftarrow \sum_{i' \neq k} \max(0, r(i',k))$.

Iterations are performed until either the cluster boundaries remain unchanged over a number of iterations, or some predetermined number of iterations is reached. The exemplars are extracted from the final matrices as those points whose 'responsibility + availability' for themselves is positive, i.e. $r(i,i) + a(i,i) > 0$. 
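The two message-passing updates can be sketched directly in NumPy. This is a minimal illustration with damping and a fixed iteration count (both standard practical choices), and the toy two-cluster data at the end is invented for the example; the function and variable names are not from any particular library.

```python
import numpy as np

def affinity_propagation(S, damping=0.5, iters=200):
    """Minimal affinity propagation on an n x n similarity matrix S whose
    diagonal already holds the preferences. Returns (exemplars, labels)."""
    n = S.shape[0]
    R = np.zeros((n, n))   # responsibilities r(i, k)
    A = np.zeros((n, n))   # availabilities  a(i, k)
    for _ in range(iters):
        # r(i,k) = s(i,k) - max_{k' != k} (a(i,k') + s(i,k'))
        M = A + S
        top = np.argmax(M, axis=1)
        first = M[np.arange(n), top]          # largest a+s per row
        M[np.arange(n), top] = -np.inf
        second = M.max(axis=1)                # second largest, for k = argmax
        Rnew = S - first[:, None]
        Rnew[np.arange(n), top] = S[np.arange(n), top] - second
        R = damping * R + (1 - damping) * Rnew
        # a(i,k) = min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, R.diagonal())    # keep r(k,k) unclipped
        col = Rp.sum(axis=0)
        Anew = np.minimum(0, col[None, :] - Rp)
        np.fill_diagonal(Anew, col - Rp.diagonal())  # a(k,k) update
        A = damping * A + (1 - damping) * Anew
    exemplars = np.flatnonzero(R.diagonal() + A.diagonal() > 0)
    labels = exemplars[np.argmax(S[:, exemplars], axis=1)]
    labels[exemplars] = exemplars             # exemplars label themselves
    return exemplars, labels

# toy usage: two well-separated groups of three points each
pts = np.array([[0.0, 0.0], [0.2, 0.0], [0.0, 0.2],
                [5.0, 5.0], [5.2, 5.0], [5.0, 5.2]])
S = -((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)   # negative squared distance
np.fill_diagonal(S, np.median(S[~np.eye(len(pts), dtype=bool)]))  # median preference
exemplars, labels = affinity_propagation(S)
```

On this data the algorithm settles on one exemplar per group, without the number of clusters ever being specified.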
The inventors of affinity propagation showed it is better for certain computer vision and computational biology tasks, e.g. clustering of pictures of human faces and identifying regulated transcripts, than k-means,[1] even when k-means was allowed many random restarts and initialized using PCA.[2] A study comparing affinity propagation and Markov clustering on protein interaction graph partitioning found Markov clustering to work better for that problem.[3] A semi-supervised variant has been proposed for text mining applications.[4] Another recent application was in economics, where affinity propagation was used to find temporal patterns in the output multipliers of the US economy between 1997 and 2017.[5]
https://en.wikipedia.org/wiki/Affinity_propagation
A latent space, also known as a latent feature space or embedding space, is an embedding of a set of items within a manifold in which items resembling each other are positioned closer to one another. Position within the latent space can be viewed as being defined by a set of latent variables that emerge from the resemblances among the objects. In most cases, the dimensionality of the latent space is chosen to be lower than the dimensionality of the feature space from which the data points are drawn, making the construction of a latent space an example of dimensionality reduction, which can also be viewed as a form of data compression.[1] Latent spaces are usually fit via machine learning, and they can then be used as feature spaces in machine learning models, including classifiers and other supervised predictors. The interpretation of the latent spaces of machine learning models is an active field of study, but latent space interpretation is difficult to achieve. Due to the black-box nature of machine learning models, the latent space may be completely unintuitive; additionally, it may be high-dimensional, complex, and nonlinear, which adds to the difficulty of interpretation.[2] Some visualization techniques have been developed to connect the latent space to the visual world, but there is often not a direct connection between the latent space interpretation and the model itself. Such techniques include t-distributed stochastic neighbor embedding (t-SNE), where the latent space is mapped to two dimensions for visualization. Latent space distances lack physical units, so the interpretation of these distances may depend on the application.[3] Several embedding models have been developed to perform this transformation, creating latent space embeddings given a set of data items and a similarity function. These models learn the embeddings by leveraging statistical techniques and machine learning algorithms. 
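As a concrete toy illustration of dimensionality reduction into a latent space (not tied to any particular model from the literature), the sketch below uses PCA via the singular value decomposition to recover a 2-dimensional latent space from synthetic 10-dimensional data that in fact varies along only two hidden directions:

```python
import numpy as np

rng = np.random.default_rng(0)
# 100 points in a 10-D feature space that actually vary along only 2 directions
Z = rng.normal(size=(100, 2))                  # hidden latent variables
W = rng.normal(size=(2, 10))                   # mixing into the feature space
X = Z @ W + 0.01 * rng.normal(size=(100, 10))  # observed data, plus a little noise

# PCA via SVD: project onto the top-2 principal directions (the latent space)
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
latent = Xc @ Vt[:2].T                         # 2-D latent coordinates per point

# the top two singular values dominate: most variance lives in the latent space
explained = (s[:2] ** 2).sum() / (s ** 2).sum()
```

Here the learned 2-D coordinates capture essentially all the variance of the 10-D data, which is what makes the lower-dimensional latent space a usable stand-in for the original feature space.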
A number of embedding models are in common use. Multimodality refers to the integration and analysis of multiple modes or types of data within a single model or framework. Embedding multimodal data involves capturing relationships and interactions between different data types, such as images, text, audio, and structured data. Multimodal embedding models aim to learn joint representations that fuse information from multiple modalities, allowing for cross-modal analysis and tasks. These models enable applications like image captioning, visual question answering, and multimodal sentiment analysis. To embed multimodal data, specialized architectures such as deep multimodal networks or multimodal transformers are employed. These architectures combine different types of neural network modules to process and integrate information from various modalities. The resulting embeddings capture the complex relationships between different data types, facilitating multimodal analysis and understanding. Latent space embeddings and multimodal embedding models have found numerous applications across various domains.
https://en.wikipedia.org/wiki/Latent_space
In data analysis, the self-similarity matrix is a graphical representation of similar sequences in a data series. Similarity can be expressed by different measures, like spatial distance (distance matrix), correlation, or comparison of local histograms or spectral properties (e.g. IXEGRAM[1]). A similarity plot can be the starting point for dot plots or recurrence plots. To construct a self-similarity matrix, one first transforms a data series into an ordered sequence of feature vectors $V = (v_1, v_2, \ldots, v_n)$, where each vector $v_i$ describes the relevant features of the data series in a given local interval. Then the self-similarity matrix is formed by computing the similarity of every pair of feature vectors, $S(j,k) = s(v_j, v_k)$, where $s(v_j, v_k)$ is a function measuring the similarity of the two vectors, for instance the inner product $s(v_j, v_k) = v_j \cdot v_k$. Similar segments of feature vectors then show up as paths of high similarity along diagonals of the matrix.[2] Similarity plots are used for action recognition that is invariant to point of view[3] and for audio segmentation using spectral clustering of the self-similarity matrix.[4]
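A minimal NumPy sketch of this construction (the toy signal, window length, and hop size are assumptions made for the example) builds feature vectors from short local windows and uses the inner product as the similarity function; the repeated structure of the signal shows up as high values along a diagonal away from the main diagonal:

```python
import numpy as np

# toy data series: two exact repetitions of one sine period (64 samples per period)
x = np.sin(np.linspace(0, 4 * np.pi, 128, endpoint=False))

# feature vectors v_i: overlapping local windows of length 16, hop size 8
V = np.array([x[i:i + 16] for i in range(0, 113, 8)])

# self-similarity matrix S[j, k] = s(v_j, v_k), with the inner product as similarity
S = V @ V.T
```

Because the signal repeats after 64 samples (8 windows), entries such as `S[0, 8]` match the main-diagonal value `S[0, 0]`, tracing the diagonal path of high similarity described above.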
https://en.wikipedia.org/wiki/Self-similarity_matrix
Semantic similarity is a metric defined over a set of documents or terms, where the idea of distance between items is based on the likeness of their meaning or semantic content, as opposed to lexicographical similarity. Such metrics are mathematical tools used to estimate the strength of the semantic relationship between units of language, concepts, or instances, through a numerical description obtained by comparing information supporting their meaning or describing their nature.[1][2] The term semantic similarity is often confused with semantic relatedness. Semantic relatedness includes any relation between two terms, while semantic similarity only includes "is a" relations.[3] For example, "car" is similar to "bus", but it is also related to "road" and "driving". Computationally, semantic similarity can be estimated by defining a topological similarity, using ontologies to define the distance between terms or concepts. For example, a naive metric for comparing concepts ordered in a partially ordered set and represented as nodes of a directed acyclic graph (e.g., a taxonomy) would be the shortest path linking the two concept nodes. Based on text analyses, semantic relatedness between units of language (e.g., words, sentences) can also be estimated using statistical means such as a vector space model to correlate words and textual contexts from a suitable text corpus. Proposed semantic similarity and relatedness measures are evaluated in two main ways: the first is based on datasets designed by experts and composed of word pairs with semantic similarity or relatedness degree estimations; the second is based on integrating the measures into specific applications such as information retrieval, recommender systems, and natural language processing. 
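The naive shortest-path metric can be sketched in a few lines of Python over a hypothetical toy taxonomy; the concept names, the edge set, and the $1/(1 + \text{path length})$ scaling are illustrative choices for this example, not a standard from the literature:

```python
from collections import deque

# a tiny hypothetical "is a" taxonomy; edges point from a concept to its parents
taxonomy = {
    "car": ["vehicle"], "bus": ["vehicle"], "vehicle": ["artifact"],
    "road": ["artifact"], "artifact": ["entity"], "entity": [],
}

def path_similarity(a, b):
    """Naive metric: 1 / (1 + length of the shortest path in the
    undirected taxonomy graph linking the two concept nodes)."""
    # build an undirected neighbor map from the parent links
    neighbors = {c: set(ps) for c, ps in taxonomy.items()}
    for c, ps in taxonomy.items():
        for p in ps:
            neighbors.setdefault(p, set()).add(c)
    # breadth-first search from a to b
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        node, d = queue.popleft()
        if node == b:
            return 1 / (1 + d)
        for nxt in neighbors.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return 0.0  # no connecting path
```

This reproduces the intuition from the article's example: "car" and "bus" (siblings under "vehicle") come out more similar than "car" and "road", which only meet higher up the taxonomy.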
The concept of semantic similarity is more specific than semantic relatedness, as the latter includes concepts such as antonymy and meronymy, while similarity does not.[4] However, much of the literature uses these terms interchangeably, along with terms like semantic distance. In essence, semantic similarity, semantic distance, and semantic relatedness all ask, "How much does term A have to do with term B?" The answer is usually a number between −1 and 1, or between 0 and 1, where 1 signifies extremely high similarity. An intuitive way of visualizing the semantic similarity of terms is to group together terms which are closely related and space wider apart the ones which are distantly related; this is also common in practice for mind maps and concept maps. A more direct way of visualizing the semantic similarity of two linguistic items can be seen with the Semantic Folding approach. In this approach a linguistic item such as a term or a text can be represented by generating a pixel for each of its active semantic features in, e.g., a 128 x 128 grid. This allows for a direct visual comparison of the semantics of two items by comparing the image representations of their respective feature sets. Semantic similarity measures have been applied and developed in biomedical ontologies.[5][6] They are mainly used to compare genes and proteins based on the similarity of their functions[7] rather than on their sequence similarity, but they are also being extended to other bioentities, such as diseases.[8] These comparisons can be done using tools freely available on the web. Similarity is also applied in geoinformatics to find similar geographic features or feature types.[12] Several metrics use WordNet, a manually constructed lexical database of English words. 
Despite the advantage of having human supervision in constructing the database, since the words are not automatically learned, the database cannot measure relatedness between multi-word terms or non-incremental vocabulary.[4][18] Natural language processing (NLP) is a field of computer science and linguistics. Sentiment analysis, natural language understanding, and machine translation (automatically translating text from one human language to another) are a few of the major areas where it is used. For example, knowing one information resource on the internet, it is often of immediate interest to find similar resources. The Semantic Web provides semantic extensions to find similar data by content and not just by arbitrary descriptors.[19][20][21][22][23][24][25][26][27] Deep learning methods have become an accurate way to gauge semantic similarity between two text passages, in which each passage is first embedded into a continuous vector representation.[28][29][30] Semantic similarity plays a crucial role in ontology alignment, which aims to establish correspondences between entities from different ontologies. It involves quantifying the degree of similarity between concepts or terms using the information present in the ontology for each entity, such as labels, descriptions, and hierarchical relations to other entities. Traditional metrics used in ontology matching are based on lexical similarity between features of the entities, such as using the Levenshtein distance to measure the edit distance between entity labels.[31] However, it is difficult to capture the semantic similarity between entities using these metrics. For example, when comparing two ontologies describing conferences, the entities "Contribution" and "Paper" may have high semantic similarity since they share the same meaning. Nonetheless, due to their lexical differences, lexicographical similarity alone cannot establish this alignment. 
To capture these semantic similarities, embeddings are being adopted in ontology matching.[32] By encoding semantic relationships and contextual information, embeddings enable the calculation of similarity scores between entities based on the proximity of their vector representations in the embedding space. This approach allows for efficient and accurate matching of ontologies, since embeddings can model semantic differences in entity naming, such as homonymy, by assigning different embeddings to the same word based on different contexts.[32] There are essentially two types of approaches that calculate topological similarity between ontological concepts: edge-based approaches, which use the edges and their types as the data source, and node-based approaches, in which the main data sources are the nodes and their properties. Other measures calculate the similarity between ontological instances. Statistical similarity approaches can be learned from data or predefined; similarity learning can often outperform predefined similarity measures. Broadly speaking, these approaches build a statistical model of documents and use it to estimate similarity. Researchers have collected datasets with similarity judgements on pairs of words, which are used to evaluate the cognitive plausibility of computational measures. The gold standard up to today is an old 65-word list on which humans have judged word similarity.[57][58]
https://en.wikipedia.org/wiki/Semantic_similarity
Similarity in network analysis occurs when two nodes (or other more elaborate structures) fall in the same equivalence class. There are three fundamental approaches to constructing measures of network similarity: structural equivalence, automorphic equivalence, and regular equivalence.[1] There is a hierarchy of the three equivalence concepts: any set of structural equivalences are also automorphic and regular equivalences, and any set of automorphic equivalences are also regular equivalences. Not all regular equivalences are necessarily automorphic or structural, and not all automorphic equivalences are necessarily structural.[2] Agglomerative hierarchical clustering of nodes on the basis of the similarity of their profiles of ties to other nodes provides a joining tree or dendrogram that visualizes the degree of similarity among cases, and can be used to find approximate equivalence classes.[2] Usually, the goal of equivalence analysis is to identify and visualize "classes" or clusters of cases. In using cluster analysis, we implicitly assume that the similarity or distance among cases reflects a single underlying dimension. It is possible, however, that there are multiple "aspects" or "dimensions" underlying the observed similarities of cases. Factor or components analysis could be applied to correlations or covariances among cases; alternatively, multi-dimensional scaling could be used (non-metric for data that are inherently nominal or ordinal; metric for valued data).[2] MDS represents the patterns of similarity or dissimilarity in the tie profiles among the actors (when applied to adjacency or distances) as a "map" in multi-dimensional space. This map lets us see how "close" actors are, whether they "cluster" in multi-dimensional space, and how much variation there is along each dimension.[2] Two vertices of a network are structurally equivalent if they share many of the same neighbors. 
There is no actor who has exactly the same set of ties as actor A, so actor A is in a class by itself. The same is true for actors B, C, D and G: each of these nodes has a unique set of edges to other nodes. E and F, however, fall in the same structural equivalence class. Each has only one edge, and that tie is to B. Since E and F have exactly the same pattern of edges with all the vertices, they are structurally equivalent. The same is true of H and I.[2] Structural equivalence is the strongest form of similarity. In many real networks, exact equivalence may be rare, and it can be useful to relax the criteria and measure approximate equivalence. A closely related concept is institutional equivalence: two actors (e.g., firms) are institutionally equivalent if they operate in the same set of institutional fields.[3] While structurally equivalent actors have identical relational patterns or network positions, institutional equivalence captures the similarity of institutional influences that actors experience from being in the same fields, regardless of how similar their network positions are. For example, two banks in Chicago might have very different patterns of ties (e.g., one may be a central node and the other in a peripheral position), such that they are not structural equivalents, but because they both operate in the field of finance and banking and in the same geographically defined field (Chicago), they will be subject to some of the same institutional influences.[3] A simple count of common neighbors for two vertices is not on its own a very good measure: one should also account for the degrees of the vertices and for how many common neighbors other pairs of vertices have. Cosine similarity takes these considerations into account and also allows for varying degrees of the vertices. Salton proposed that we regard the i-th and j-th rows/columns of the adjacency matrix as two vectors and use the cosine of the angle between them as a similarity measure. 
The cosine similarity of i and j is the number of common neighbors divided by the geometric mean of their degrees.[4] Its value lies in the range from 0 to 1: a value of 1 indicates that the two vertices have exactly the same neighbors, while a value of 0 means that they have no common neighbors. Cosine similarity is technically undefined if one or both of the vertices has degree zero, but by convention we say that the cosine similarity is 0 in these cases.[1] The Pearson product-moment correlation coefficient is an alternative way to normalize the count of common neighbors: it compares the number of common neighbors with the expected value that count would take in a network where vertices are connected at random. This quantity lies strictly in the range from −1 to 1.[1] Euclidean distance is equal to the number of neighbors that differ between two vertices. It is really a dissimilarity measure, since it is larger for vertices that differ more. It can be normalized by dividing by its maximum value, which is attained when there are no common neighbors, in which case the distance is equal to the sum of the degrees of the vertices.[1] Formally, "two vertices are automorphically equivalent if all the vertices can be re-labeled to form an isomorphic graph with the labels of u and v interchanged. Two automorphically equivalent vertices share exactly the same label-independent properties."[5] More intuitively, actors are automorphically equivalent if we can permute the graph in such a way that exchanging the two actors has no effect on the distances among all actors in the graph. Suppose the graph describes the organizational structure of a company: actor A is the central headquarters, actors B, C, and D are managers, actors E, F and H, I are workers at smaller stores, and G is the lone worker at another store. 
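Salton's cosine measure can be computed directly from the adjacency matrix, since the square of the adjacency matrix counts common neighbors. A minimal NumPy sketch on a small hypothetical undirected graph (the graph itself is invented for this example):

```python
import numpy as np

# adjacency matrix of a small undirected example graph (5 vertices)
A = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 1, 1, 0, 1],
    [0, 0, 0, 1, 0],
])

deg = A.sum(axis=1)                   # vertex degrees
common = A @ A                        # (A^2)[i, j] = number of common neighbors
denom = np.sqrt(np.outer(deg, deg))   # geometric mean of the two degrees
# cosine similarity, with the degree-zero convention of 0 built in
cos = np.divide(common, denom, out=np.zeros_like(denom), where=denom > 0)
```

For instance, vertices 1 and 2 each have degree 3 and share the two common neighbors {0, 3}, giving a cosine similarity of 2/3.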
Even though actor B and actor D are not structurally equivalent (they have the same boss, but not the same workers), they do seem to be "equivalent" in a different sense. Both manager B and manager D have a boss (in this case, the same boss), and each has two workers. If we swapped them, and also swapped the four workers, all of the distances among all the actors in the network would be exactly identical. There are actually five automorphic equivalence classes: {A}, {B, D}, {C}, {E, F, H, I}, and {G}. Note that the less strict definition of "equivalence" has reduced the number of classes.[2] Formally, "two actors are regularly equivalent if they are equally related to equivalent others." In other words, regularly equivalent vertices are vertices that, while they do not necessarily share neighbors, have neighbors who are themselves similar.[5] Two mothers, for example, are equivalent, because each has a similar pattern of connections with a husband, children, and so on. The two mothers do not have ties to the same husband or the same children, so they are not structurally equivalent; and because different mothers may have different numbers of husbands and children, they will not be automorphically equivalent. But they are similar because they have the same relationships with some member or members of another set of actors (who are themselves regarded as equivalent because of the similarity of their ties to a member of the set "mother").[2] In the graph there are three regular equivalence classes: the first is actor A; the second is composed of the three actors B, C, and D; the third consists of the remaining five actors E, F, G, H, and I. The easiest class to see is the five actors across the bottom of the diagram (E, F, G, H, and I). These actors are regularly equivalent to one another: each of the five actors has an identical pattern of ties with actors in the other classes. Actors B, C, and D form a class similarly. 
B and D actually have ties with two members of the third class, whereas actor C has a tie to only one member of the third class, but this doesn't matter: each has a tie to some member of the third class. Actor A is in a class by itself, defined by its ties to members of the second class and the absence of ties to the third.
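The swap described above can be checked mechanically: build the example company graph, compute all shortest-path distances by breadth-first search, and verify that relabeling B↔D together with their workers (E↔H, F↔I) leaves every distance unchanged. This is a toy check of automorphic equivalence using the node names from the example.

```python
edges = [("A", "B"), ("A", "C"), ("A", "D"),
         ("B", "E"), ("B", "F"), ("C", "G"), ("D", "H"), ("D", "I")]
nodes = sorted({n for e in edges for n in e})
adj = {n: set() for n in nodes}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

def distances():
    # All-pairs shortest paths via BFS from every node.
    dist = {}
    for s in nodes:
        d = {s: 0}
        frontier = [s]
        while frontier:
            nxt = []
            for u in frontier:
                for w in adj[u]:
                    if w not in d:
                        d[w] = d[u] + 1
                        nxt.append(w)
            frontier = nxt
        dist[s] = d
    return dist

base = distances()

# Permutation swapping B<->D and their workers E<->H, F<->I; A, C, G fixed.
sigma = {"A": "A", "B": "D", "D": "B", "C": "C",
         "E": "H", "H": "E", "F": "I", "I": "F", "G": "G"}
same = all(base[u][v] == base[sigma[u]][sigma[v]] for u in nodes for v in nodes)
print(same)  # True: the relabeling preserves all distances
```

Because sigma maps every edge onto another edge of the graph, it is a graph automorphism, which is exactly why all pairwise distances survive the relabeling.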
https://en.wikipedia.org/wiki/Similarity_(network_science)
In philosophy, similarity or resemblance is a relation between objects that constitutes how much these objects are alike. Similarity comes in degrees: e.g. oranges are more similar to apples than to the moon. It is traditionally seen as an internal relation and analyzed in terms of shared properties: two things are similar because they have a property in common.[1] The more properties they share, the more similar they are. They resemble each other exactly if they share all their properties. So an orange is similar to the moon because they both share the property of being round, but it is even more similar to an apple because, additionally, they both share various other properties, like the property of being a fruit. On a formal level, similarity is usually considered to be a relation that is reflexive (everything resembles itself), symmetric (if a is similar to b then b is similar to a) and non-transitive (a need not resemble c despite a resembling b and b resembling c).[2] Similarity comes in two forms: respective similarity, which is relative to one respect or feature, and overall similarity, which expresses the degree of resemblance between two objects all things considered. There is no general consensus whether similarity is an objective, mind-independent feature of reality, and, if so, whether it is a fundamental feature or reducible to other features.[3][4] Resemblance is central to human cognition since it provides the basis for the categorization of entities into kinds and for various other cognitive processes like analogical reasoning.[3][5] Similarity has played a central role in various philosophical theories, e.g. as a solution to the problem of universals through resemblance nominalism or in the analysis of counterfactuals in terms of similarity between possible worlds.[6][7] Conceptions of similarity give an account of similarity and its degrees on a metaphysical level.
The simplest view, though not very popular, sees resemblance as a fundamental aspect of reality that cannot be reduced to other aspects.[3][8] The more common view is that the similarity between two things is determined by other facts, for example, by the properties they share, by their qualitative distance or by the existence of certain transformations between them.[5][9] These conceptions analyze resemblance in terms of other aspects instead of treating it as a fundamental relation. The numerical conception holds that the degree of similarity between objects is determined by the number of properties they have in common.[10] On the most basic version of this view, the degree of similarity is identical to this number. For example, "[i]f the properties of peas in a pod were just greenness, roundness and yuckiness ... then their degree of similarity would be three".[11] Two things need to share at least one property to be considered similar. They resemble each other exactly if they have all their properties in common. This is also known as qualitative identity or indiscernibility. For the numerical conception of similarity to work, it is important that only properties relevant to resemblance are taken into account, sometimes referred to as sparse properties in contrast to abundant properties.[11][12] Quantitative properties, like temperature or mass, which occur in degrees, pose another problem for the numerical conception.[3] The reason for this is that e.g. a body at 40 °C resembles another body at 41 °C even though the two bodies do not have their temperature in common. The problem of quantitative properties is better handled by the metric conception of similarity, which posits that there are certain dimensions of similarity concerning different respects, e.g.
color, shape or weight, which constitute the axes of one unified metric space.[11][3] This can be visualized in analogy to three-dimensional physical space, the axes of which are usually labeled with x, y and z.[13] In both the qualitative and the physical metric space, the total distance is determined by the relative distances within each axis. The metric space thus constitutes a manner of aggregating various respective degrees of similarity into one overall degree of similarity.[14][13] The corresponding function is sometimes referred to as a similarity measure. One problem with this outlook is that it is questionable whether the different respects are commensurable with each other in the sense that an increase in one type can make up for a lack in another type.[14] Even if this should be allowed, there is still the question of how to determine the factor of correlation between degrees of different respects.[3] Any such factor would seem to be artificial,[13] as can be seen, for example, when considering possible responses to the following case: "[l]et one person resemble you more closely, overall, than someone else does. And let him become a bit less like you in respect of his weight by gaining a little. Now answer these questions: How much warmer or cooler should he become to restore the original overall comparison? How much more similar in respect of his height?"[14] This problem does not arise for physical distance, which involves commensurable dimensions and which can be kept constant, for example, by moving the right amount north or south after having moved a certain distance to the west.[14][13] Another objection to the metric conception of similarity comes from empirical research suggesting that similarity judgments do not obey the axioms of metric space.
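A minimal sketch of the two conceptions just described, with hypothetical property sets and an arbitrary weighting; the functions and weights are illustrative, not from the source. The arbitrary `weights` argument in the second function is precisely the "artificial" correlation factor the text objects to.

```python
def degree_of_similarity(props_a, props_b):
    """Numerical conception: degree of similarity = number of shared (sparse) properties."""
    return len(props_a & props_b)

pea1 = {"green", "round", "yucky"}
pea2 = {"green", "round", "yucky"}
print(degree_of_similarity(pea1, pea2))  # 3, as in the peas-in-a-pod example

def overall_distance(x, y, weights):
    """Metric conception: aggregate respective distances along each axis
    into one overall distance, using an (artificial) weighting factor per axis."""
    return sum(w * abs(a - b) for w, a, b in zip(weights, x, y))

# Single temperature axis: the 40 deg C and 41 deg C bodies are close but not identical.
print(overall_distance((40.0,), (41.0,), (1.0,)))  # 1.0
```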
For example, people are more likely to accept that "North Korea is similar to China" than that "China is similar to North Korea", thereby denying the axiom of symmetry.[10][3] Another way to define similarity, best known from geometry, is in terms of transformations. According to this definition, two objects are similar if there exists a certain type of transformation that translates one object into the other object while leaving certain properties essential for similarity intact.[9][5] For example, in geometry, two triangles are similar if there is a transformation, involving nothing but scaling, rotating, displacement and reflection, which maps one triangle onto the other. The property kept intact by these transformations concerns the angles of the two triangles.[9] Judgments of similarity come in two forms: referring to respective similarity, which is relative to one respect or feature, or to overall similarity, which expresses the degree of resemblance between two objects all things considered.[3][4][13] For example, a basketball resembles the sun with respect to its round shape but they are not very similar overall. It is usually assumed that overall similarity depends on respective similarity, e.g. that an orange is overall similar to an apple because they are similar in respect to size, shape, color, etc. This means that two objects cannot differ in overall similarity without differing in respective similarity.[3] But there is no general agreement whether overall similarity can be fully analyzed by aggregating similarity in all respects.[14][13] If this were true then it should be possible to keep the degree of similarity between the apple and the orange constant despite a change to the size of the apple by making up for it through a change in color, for example. But that this is possible, i.e.
that increasing the similarity in another respect can make up for the lack of similarity in one respect, has been denied by some philosophers.[14] One special form of respective resemblance is perfect respective resemblance, which is given when two objects share exactly the same property, like being an electron or being made entirely of iron.[3] A weaker version of respective resemblance is possible for quantitative properties, like mass or temperature, which involve a degree. Close degrees resemble each other without constituting shared properties.[3][4] In this way, a pack of rice weighing 1000 grams resembles a honey melon weighing 1010 grams in respect to mass but not in virtue of sharing a property. This type of respective resemblance and its impact on overall similarity gets further complicated for multi-dimensional quantities, like colors or shapes.[3] Identity is the relation each thing bears only to itself.[15] Both identity and exact similarity or indiscernibility are expressed by the word "same".[16][17] For example, consider two children with the same bicycles engaged in a race while their mother is watching. The two children have the same bicycle in one sense (exact similarity) and the same mother in another sense (identity).[16] The two senses of sameness are linked by two principles: the principle of indiscernibility of identicals and the principle of identity of indiscernibles.
The principle of indiscernibility of identicals is uncontroversial and states that if two entities are identical with each other then they exactly resemble each other.[17] The principle of identity of indiscernibles, on the other hand, is more controversial in making the converse claim that if two entities exactly resemble each other then they must be identical.[17] This entails that "no two distinct things exactly resemble each other".[18] A well-known counterexample comes from Max Black, who describes a symmetrical universe consisting of only two spheres with the same features.[19] Black argues that the two spheres are indiscernible but not identical, thereby constituting a violation of the principle of identity of indiscernibles.[20] The problem of universals is the problem of explaining how different objects can have a feature in common and thereby resemble each other in this respect, for example, how water and oil can share the feature of being liquid.[21][22] The realist solution posits an underlying universal that is instantiated by both objects and thus grounds their similarity.[16] This is rejected by nominalists, who deny the existence of universals. Of special interest to the concept of similarity is the position known as resemblance nominalism, which treats resemblance between objects as a fundamental fact.[22][16] So on this view, two objects have a feature in common because they resemble each other, not the other way round, as is commonly held.[23] This way, the problem of universals is solved without the need of positing shared universals.[22] One objection to this solution is that it fails to distinguish between coextensive properties. Coextensive properties are different properties that always come together, like having a heart and having a kidney.
But in resemblance nominalism, they are treated as one property since all their bearers belong to the same resemblance class.[24] Another counter-argument is that this approach does not fully solve the problem of universals since it seemingly introduces a new universal: resemblance itself.[22][3] Counterfactuals are sentences that express what would have been true under different circumstances, for example, "[i]f Richard Nixon had pushed the button, there would have been a nuclear war".[25] Theories of counterfactuals try to determine the conditions under which counterfactuals are true or false. The most well-known approach, due to Robert Stalnaker and David Lewis, proposes to analyze counterfactuals in terms of similarity between possible worlds.[7][26] A possible world is a way things could have been. According to the Stalnaker–Lewis account, the antecedent or if-clause picks out one possible world, in the example above, the world in which Nixon pushed the button. The counterfactual is true if the consequent or then-clause is true in the selected possible world.[26][7] The problem with the account sketched so far is that there are various possible worlds that could be picked out by the antecedent.
Lewis proposes that the problem is solved through overall similarity: only the possible world most similar to the actual world is selected.[25] A "system of weights" in the form of a set of criteria is to guide us in assessing the degree of similarity between possible worlds.[7] For example, avoiding widespread violations of the laws of nature ("big miracles") is considered an important factor for similarity while proximity in particular facts has little impact.[7] One objection to Lewis's approach is that the proposed system of weights captures not so much our intuitions concerning similarity between worlds but instead aims to be consonant with our counterfactual intuitions.[27] But considered purely in terms of similarity, the most similar world in the example above is arguably the world in which Nixon pushes the button, nothing happens and history continues just as it actually did.[27] Depiction is the relation that pictures bear to the things they represent, for example, the relation between a photograph of Albert Einstein and Einstein himself. Theories of depiction aim to explain how pictures are able to refer.[28] The traditional account, originally suggested by Plato, explains depiction in terms of mimesis or similarity.[29][30] So the photograph depicts Einstein because it resembles him in respect to shape and color. In this regard, pictures are different from linguistic signs, which are for the most part arbitrarily related to their referents.[28][30] Pictures can indirectly represent abstract concepts, like God or love, by resembling concrete things, like a bearded man or a heart, which we associate with the abstract concept in question.[29] Despite their intuitive appeal, resemblance accounts of depiction face various problems. One problem comes from the fact that similarity is a symmetric relation, so if a is similar to b then b has to be similar to a.[28] But Einstein does not depict his photograph despite being similar to it.
Another problem comes from the fact that non-existing things, like dragons, can be depicted. So a picture of a dragon shows a dragon even though there are no dragons that could be similar to the picture.[28][30] Defenders of resemblance theories try to avoid these counterexamples by moving to more sophisticated formulations involving other concepts besides resemblance.[29] An analogy is a comparison between two objects based on similarity.[31] Arguments from analogy involve inferences from information about a known object (the source) to the features of an unknown object (the target) based on similarity between the two objects.[32] Arguments from analogy have the following form: a is similar to b and a has feature F, therefore b probably also has feature F.[31][33] Using this scheme, it is possible to infer from the similarity between rats (a) and humans (b) and from the fact that birth control pills affect the brain development (F) of rats that they may also affect the brain development of humans.[34] Arguments from analogy are defeasible: they make their conclusion rationally compelling but do not ensure its truth.[35] The strength of such arguments depends, among other things, on the degree of similarity between the source and the target and on the relevance of this similarity to the inferred feature.[34] Important arguments from analogy within philosophy include the argument from design (the universe resembles a machine and machines have intelligent designers, therefore the universe has an intelligent designer) and the argument from analogy concerning the existence of other minds (my body is similar to other human bodies and I have a mind, therefore they also have minds).[32][36][37][38] The term family resemblance refers to Ludwig Wittgenstein's idea that certain concepts cannot be defined in terms of necessary and sufficient conditions which refer to essential features shared by all examples.[39][40] Instead, the use of one concept for all its cases is justified by resemblance relations based on
their shared features. These relations form "a network of overlapping but discontinuous similarities, like the fibres in a rope".[40] One of Wittgenstein's favorite examples is the concept of games, which includes card games, board games, ball games, etc. Different games share various features with each other, like being amusing, involving winning and losing, depending on skill or luck, etc.[41] According to Wittgenstein, to be a game is to be sufficiently similar to other games even though there are no properties essential to every game.[39] These considerations threaten to render futile traditional attempts to discover analytic definitions, such as for concepts like proposition, name, number, proof or language.[40] Prototype theory is formulated based on these insights. It holds that whether an entity belongs to a conceptual category is determined by how close or similar this entity is to the prototype or exemplar of this concept.[42][43]
https://en.wikipedia.org/wiki/Similarity_(philosophy)
In descriptive statistics and chaos theory, a recurrence plot (RP) is a plot showing, for each moment j in time, the times i at which the state of a dynamical system returns to its previous state, i.e., when the phase space trajectory visits roughly the same area in the phase space as at time j. In other words, it is a plot showing i on a horizontal axis and j on a vertical axis, where x(i) is the state of the system (or its phase space trajectory) at time i. Natural processes can have a distinct recurrent behaviour, e.g. periodicities (such as seasonal or Milankovich cycles), but also irregular cyclicities (such as the El Niño Southern Oscillation or heart beat intervals). Moreover, the recurrence of states, in the sense that states become arbitrarily close again after some time of divergence, is a fundamental property of deterministic dynamical systems and is typical for nonlinear or chaotic systems (cf. the Poincaré recurrence theorem). The recurrence of states in nature has been known for a long time and was discussed in early work (e.g. Henri Poincaré 1890). One way to visualize the recurring nature of states by their trajectory through a phase space is the recurrence plot, introduced by Eckmann et al. (1987).[1] Often, the phase space does not have a low enough dimension (two or three) to be pictured, since higher-dimensional phase spaces can only be visualized by projection into two- or three-dimensional sub-spaces. One frequently used tool to study the behaviour of such phase space trajectories is the Poincaré map. Another tool is the recurrence plot, which enables us to investigate many aspects of the m-dimensional phase space trajectory through a two-dimensional representation. At a recurrence the trajectory returns to a location (state) in phase space it has visited before, up to a small error ε.
The recurrence plot represents the collection of pairs of times of such recurrences, i.e., the set of (i, j) with x(i) ≈ x(j), with i and j discrete points of time and x(i) the state of the system at time i (the location of the trajectory at time i). Mathematically, this is expressed by the binary recurrence matrix

R(i, j) = 1 if ‖x(i) − x(j)‖ ≤ ε, and 0 otherwise,

where ‖·‖ is a norm and ε the recurrence threshold. An alternative, more formal expression uses the Heaviside step function, R(i, j) = Θ(ε − D(i, j)), with D(i, j) = ‖x(i) − x(j)‖ the norm of the distance vector between x(i) and x(j). Alternative recurrence definitions consider different distances D(i, j), e.g., angular distance, fuzzy distance, or edit distance.[2] The recurrence plot visualises R with a coloured (mostly black) dot at coordinates (i, j) if R(i, j) = 1, with time on the x- and y-axes. If only a univariate time series u(t) is available, the phase space can be reconstructed, e.g., by using a time delay embedding (see Takens' theorem):

x(i) = (u(i), u(i + τ), …, u(i + (m − 1)τ)),

where u(i) is the time series (with t = iΔt and Δt the sampling time), m the embedding dimension and τ the time delay. However, phase space reconstruction is not an essential part of the recurrence plot (although often stated in the literature), because the plot is based on phase space trajectories, which could be derived from the system's variables directly (e.g., from the three variables of the Lorenz system) or from multivariate data.
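A minimal sketch of the recurrence matrix R(i, j) = Θ(ε − ‖x(i) − x(j)‖) together with a time-delay embedding, using the Euclidean norm. Function names and parameter values are illustrative, not a reference implementation.

```python
import math

def delay_embed(u, m, tau):
    """Time-delay embedding: x(i) = (u(i), u(i+tau), ..., u(i+(m-1)*tau))."""
    n = len(u) - (m - 1) * tau
    return [[u[i + k * tau] for k in range(m)] for i in range(n)]

def recurrence_matrix(states, eps):
    """R(i,j) = 1 if ||x(i) - x(j)|| <= eps, else 0 (Euclidean norm)."""
    n = len(states)
    def dist(a, b):
        return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))
    return [[1 if dist(states[i], states[j]) <= eps else 0
             for j in range(n)] for i in range(n)]

# A strictly periodic signal with period 20: recurrences appear as diagonal
# lines separated by multiples of the period.
u = [math.sin(2 * math.pi * i / 20) for i in range(100)]
X = delay_embed(u, m=2, tau=5)
R = recurrence_matrix(X, eps=0.1)
print(R[0][0], R[0][20], R[0][40])  # 1 1 1: main diagonal and period-20 recurrences
```

With this periodic input, R(0, j) = 1 exactly when j is a multiple of the period, which is the diagonal-line structure the text describes.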
The visual appearance of a recurrence plot gives hints about the dynamics of the system. Caused by characteristic behaviour of the phase space trajectory, a recurrence plot contains typical small-scale structures, such as single dots, diagonal lines and vertical/horizontal lines (or a mixture of the latter, which combine into extended clusters). The large-scale structure, also called texture, can be visually characterised as homogeneous, periodic, drift or disrupted. For example, the plot can show whether the trajectory is strictly periodic with period T: in that case, all such pairs of times are separated by a multiple of T and visible as diagonal lines. The small-scale structures in recurrence plots contain information about certain characteristics of the dynamics of the underlying system. For example, the lengths of the diagonal lines visible in the recurrence plot are related to the divergence of phase space trajectories and can thus represent information about the chaoticity.[3] Therefore, recurrence quantification analysis quantifies the distribution of these small-scale structures.[4][5][6] This quantification can be used to describe recurrence plots in a quantitative way. Applications are classification, prediction, nonlinear parameter estimation, and transition analysis. In contrast to the heuristic approach of recurrence quantification analysis, which depends on the choice of the embedding parameters, some dynamical invariants such as the correlation dimension, the K2 entropy or the mutual information, which are independent of the embedding, can also be derived from recurrence plots. The basis for these dynamical invariants is the recurrence rate and the distribution of the lengths of the diagonal lines.[3] More recent applications use recurrence plots as a tool for time series imaging in machine learning approaches and for studying spatio-temporal recurrences.[2] Close returns plots are similar to recurrence plots.
The difference is that the relative time between recurrences is used for the y-axis (instead of absolute time).[6] The main advantage of recurrence plots is that they provide useful information even for short and non-stationary data, where other methods fail. Multivariate extensions of recurrence plots were developed as cross recurrence plots and joint recurrence plots. Cross recurrence plots consider the phase space trajectories of two different systems in the same phase space:[7]

CR(i, j) = Θ(ε − ‖x(i) − y(j)‖).

The dimension of both systems must be the same, but the number of considered states (i.e. the data length) can be different. Cross recurrence plots compare the occurrences of similar states of two systems. They can be used in order to analyse the similarity of the dynamical evolution of two different systems, to look for similar matching patterns in two systems, or to study the time relationship of two similar systems whose time scales differ.[8] Joint recurrence plots are the Hadamard product of the recurrence plots of the considered sub-systems,[9] e.g. for two systems x and y the joint recurrence plot is

JR(i, j) = Rx(i, j) · Ry(i, j).

In contrast to cross recurrence plots, joint recurrence plots compare the simultaneous occurrence of recurrences in two (or more) systems. Moreover, the dimensions of the considered phase spaces can be different, but the number of considered states has to be the same for all the sub-systems. Joint recurrence plots can be used in order to detect phase synchronisation.
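Cross and joint recurrence plots can be sketched in the same list-based style, assuming the two trajectories share a phase space; names and data are illustrative. The cross plot tests ‖x(i) − y(j)‖ ≤ ε across systems, and the joint plot multiplies the two individual recurrence matrices elementwise (the Hadamard product).

```python
import math

def dist(a, b):
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

def cross_recurrence(xs, ys, eps):
    """CR(i,j) = 1 if ||x(i) - y(j)|| <= eps; both systems share a phase space."""
    return [[1 if dist(x, y) <= eps else 0 for y in ys] for x in xs]

def joint_recurrence(Rx, Ry):
    """Hadamard (elementwise) product of the two recurrence matrices."""
    return [[a * b for a, b in zip(row_x, row_y)] for row_x, row_y in zip(Rx, Ry)]

# Two short one-dimensional trajectories of equal length.
xs = [(0.0,), (1.0,), (2.0,)]
ys = [(0.05,), (1.5,), (2.0,)]
CR = cross_recurrence(xs, ys, eps=0.1)
print(CR)  # [[1, 0, 0], [0, 0, 0], [0, 0, 1]]

# Joint recurrences: moments at which BOTH systems recur simultaneously.
JR = joint_recurrence(cross_recurrence(xs, xs, 0.1), cross_recurrence(ys, ys, 0.1))
print(JR)
```

Note that `cross_recurrence(xs, xs, eps)` is just the ordinary recurrence matrix of a single system, which is why it can be reused to build the joint plot.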
https://en.wikipedia.org/wiki/Recurrence_plot
In mathematical logic, the arithmetical hierarchy, arithmetic hierarchy or Kleene–Mostowski hierarchy (after mathematicians Stephen Cole Kleene and Andrzej Mostowski) classifies certain sets based on the complexity of formulas that define them. Any set that receives a classification is called arithmetical. The arithmetical hierarchy was invented independently by Kleene (1943) and Mostowski (1946).[1] The arithmetical hierarchy is important in computability theory, effective descriptive set theory, and the study of formal theories such as Peano arithmetic. The Tarski–Kuratowski algorithm provides an easy way to get an upper bound on the classifications assigned to a formula and the set it defines. The hyperarithmetical hierarchy and the analytical hierarchy extend the arithmetical hierarchy to classify additional formulas and sets. The arithmetical hierarchy assigns classifications to the formulas in the language of first-order arithmetic. The classifications are denoted Σ_n^0 and Π_n^0 for natural numbers n (including 0). The Greek letters here are lightface symbols, indicating that the formulas do not contain set parameters. If a formula φ is logically equivalent to a formula having no unbounded quantifiers, i.e. in which all quantifiers are bounded quantifiers, then φ is assigned the classifications Σ_0^0 and Π_0^0. The classifications Σ_n^0 and Π_n^0 are defined inductively for every natural number n using the following rules: if φ is logically equivalent to a formula of the form ∃m_1 ∃m_2 ⋯ ∃m_k ψ, where ψ is Π_n^0, then φ is assigned the classification Σ_{n+1}^0; if φ is logically equivalent to a formula of the form ∀m_1 ∀m_2 ⋯ ∀m_k ψ, where ψ is Σ_n^0, then φ is assigned the classification Π_{n+1}^0. A Σ_n^0 formula is thus equivalent to a formula that begins with some existential quantifiers and alternates n − 1 times between series of existential and universal quantifiers, while a Π_n^0 formula is equivalent to a formula that begins with some universal quantifiers and alternates analogously.
Because every first-order formula has a prenex normal form, every formula is assigned at least one classification. Because redundant quantifiers can be added to any formula, once a formula is assigned the classification Σ_n^0 or Π_n^0 it will be assigned the classifications Σ_m^0 and Π_m^0 for every m > n. The only relevant classification assigned to a formula is thus the one with the least n; all the other classifications can be determined from it. A set X of natural numbers is defined by a formula φ in the language of Peano arithmetic (the first-order language with symbols "0" for zero, "S" for the successor function, "+" for addition, "×" for multiplication, and "=" for equality) if the elements of X are exactly the numbers that satisfy φ. That is, for all natural numbers n, n ∈ X if and only if φ(n̲) holds, where n̲ is the numeral in the language of arithmetic corresponding to n. A set is definable in first-order arithmetic if it is defined by some formula in the language of Peano arithmetic. Each set X of natural numbers that is definable in first-order arithmetic is assigned classifications of the form Σ_n^0, Π_n^0, and Δ_n^0, where n is a natural number, as follows. If X is definable by a Σ_n^0 formula then X is assigned the classification Σ_n^0. If X is definable by a Π_n^0 formula then X is assigned the classification Π_n^0. If X is both Σ_n^0 and Π_n^0 then X is assigned the additional classification Δ_n^0. Note that it rarely makes sense to speak of Δ_n^0 formulas; the first quantifier of a formula is either existential or universal.
So a Δ_n^0 set is not necessarily defined by a Δ_n^0 formula in the sense of a formula that is both Σ_n^0 and Π_n^0; rather, there are both Σ_n^0 and Π_n^0 formulas that define the set. For example, the set of odd natural numbers n is definable by either ∀k (n ≠ 2×k) or ∃k (n = 2×k + 1). A parallel definition is used to define the arithmetical hierarchy on finite Cartesian powers of the set of natural numbers. Instead of formulas with one free variable, formulas with k free first-order variables are used to define the arithmetical hierarchy on sets of k-tuples of natural numbers. These are in fact related by the use of a pairing function. The following meanings can be attached to the notation for the arithmetical hierarchy on formulas. The subscript n in the symbols Σ_n^0 and Π_n^0 indicates the number of alternations of blocks of universal and existential first-order quantifiers that are used in a formula. Moreover, the outermost block is existential in Σ_n^0 formulas and universal in Π_n^0 formulas. The superscript 0 in the symbols Σ_n^0, Π_n^0, and Δ_n^0 indicates the type of the objects being quantified over. Type 0 objects are natural numbers, and objects of type i + 1 are functions that map the set of objects of type i to the natural numbers. Quantification over higher-type objects, such as functions from natural numbers to natural numbers, is described by a superscript greater than 0, as in the analytical hierarchy.
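The odd-numbers example above gives one existential (Σ-style) and one universal (Π-style) definition of the same set. As a toy illustration, the two definitions can be compared over a finite range by bounding both quantifiers; the true formulas are unbounded, and the bounding here is purely so the check terminates.

```python
def sigma_def(n, bound):
    """Bounded stand-in for the existential formula: exists k (n = 2*k + 1)."""
    return any(n == 2 * k + 1 for k in range(bound))

def pi_def(n, bound):
    """Bounded stand-in for the universal formula: for all k (n != 2*k)."""
    return all(n != 2 * k for k in range(bound))

# Over a finite range the two definitions pick out the same set of odd numbers,
# which is why the set earns the additional classification Delta_1^0.
bound = 100
agree = all(sigma_def(n, bound) == pi_def(n, bound) for n in range(bound))
print(agree)  # True
```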
The superscript 0 indicates quantifiers over numbers, the superscript 1 would indicate quantification over functions from numbers to numbers (type 1 objects), the superscript 2 would correspond to quantification over functions that take a type 1 object and return a number, and so on. Just as we can define what it means for a set X to be recursive relative to another set Y by allowing the computation defining X to consult Y as an oracle, we can extend this notion to the whole arithmetical hierarchy and define what it means for X to be Σ_n^0, Δ_n^0 or Π_n^0 in Y, denoted respectively Σ_n^{0,Y}, Δ_n^{0,Y} and Π_n^{0,Y}. To do so, fix a set of natural numbers Y and add a predicate for membership in Y to the language of Peano arithmetic. We then say that X is in Σ_n^{0,Y} if it is defined by a Σ_n^0 formula in this expanded language. In other words, X is Σ_n^{0,Y} if it is defined by a Σ_n^0 formula allowed to ask questions about membership in Y. Alternatively one can view the Σ_n^{0,Y} sets as those sets that can be built starting with sets recursive in Y and alternately taking unions and intersections of these sets up to n times. For example, let Y be a set of natural numbers. Let X be the set of numbers divisible by an element of Y. Then X is defined by the formula φ(n) = ∃m ∃t (Y(m) ∧ m×t = n), so X is in Σ_1^{0,Y} (actually it is in Δ_0^{0,Y} as well, since we could bound both quantifiers by n). Arithmetical reducibility is an intermediate notion between Turing reducibility and hyperarithmetic reducibility.
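The divisibility example can be sketched with Y supplied as an ordinary Python set playing the role of the oracle; as the text notes, both quantifiers can be bounded by n, which makes the membership test computable. Names are illustrative.

```python
def in_X(n, Y):
    """phi(n) = exists m exists t (Y(m) and m*t = n), both quantifiers bounded by n."""
    return any(m in Y and m * t == n
               for m in range(1, n + 1) for t in range(1, n + 1))

Y = {3, 5}  # the oracle set: X becomes the multiples of 3 or 5
print([n for n in range(1, 16) if in_X(n, Y)])  # [3, 5, 6, 9, 10, 12, 15]
```

Every question the formula asks about Y goes through the membership test `m in Y`, mirroring how a Σ_1^0 formula in the expanded language may query the oracle predicate but is otherwise ordinary arithmetic.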
A set is arithmetical (also arithmetic or arithmetically definable) if it is defined by some formula in the language of Peano arithmetic. Equivalently, X is arithmetical if X is Σ_n^0 or Π_n^0 for some natural number n. A set X is arithmetical in a set Y, denoted X ≤_A Y, if X is definable by some formula in the language of Peano arithmetic extended by a predicate for membership of Y. Equivalently, X is arithmetical in Y if X is in Σ_n^{0,Y} or Π_n^{0,Y} for some natural number n. A synonym for X ≤_A Y is: X is arithmetically reducible to Y. The relation X ≤_A Y is reflexive and transitive, and thus the relation ≡_A defined by the rule X ≡_A Y if and only if X ≤_A Y and Y ≤_A X is an equivalence relation. The equivalence classes of this relation are called the arithmetic degrees; they are partially ordered under ≤_A. The Cantor space, denoted 2^ω, is the set of all infinite sequences of 0s and 1s; the Baire space, denoted ω^ω or 𝒩, is the set of all infinite sequences of natural numbers. Note that elements of the Cantor space can be identified with sets of natural numbers, and elements of the Baire space with functions from natural numbers to natural numbers. The ordinary axiomatization of second-order arithmetic uses a set-based language in which the set quantifiers can naturally be viewed as quantifying over Cantor space. A subset of Cantor space is assigned the classification Σ_n^0 if it is definable by a Σ_n^0 formula. The set is assigned the classification Π_n^0 if it is definable by a Π_n^0 formula. 
If the set is both Σ_n^0 and Π_n^0, then it is given the additional classification Δ_n^0. For example, let O ⊆ 2^ω be the set of all infinite binary strings that are not all 0 (or equivalently the set of all non-empty sets of natural numbers). As O = {X ∈ 2^ω | ∃n (X(n) = 1)}, we see that O is defined by a Σ_1^0 formula and hence is a Σ_1^0 set. Note that while both the elements of the Cantor space (regarded as sets of natural numbers) and subsets of the Cantor space are classified in arithmetical hierarchies, these are not the same hierarchy. In fact the relationship between the two hierarchies is interesting and non-trivial. For instance, the Π_n^0 elements of the Cantor space are not (in general) the same as the elements X of the Cantor space such that {X} is a Π_n^0 subset of the Cantor space. However, many interesting results relate the two hierarchies. There are two ways that a subset of Baire space can be classified in the arithmetical hierarchy. A parallel definition is used to define the arithmetical hierarchy on finite Cartesian powers of Baire space or Cantor space, using formulas with several free variables. The arithmetical hierarchy can be defined on any effective Polish space; the definition is particularly simple for Cantor space and Baire space because they fit with the language of ordinary second-order arithmetic. Note that we can also define the arithmetical hierarchy of subsets of the Cantor and Baire spaces relative to some set of natural numbers. In fact the boldface Σ_n^0 is just the union of Σ_n^{0,Y} over all sets of natural numbers Y. 
Note that the boldface hierarchy is just the standard hierarchy of Borel sets. It is possible to define the arithmetical hierarchy of formulas using a language extended with a function symbol for each primitive recursive function. This variation slightly changes the classification of Σ_0^0 = Π_0^0 = Δ_0^0, since using primitive recursive functions in first-order Peano arithmetic requires, in general, an unbounded existential quantifier; thus some sets that are in Σ_0^0 by this definition are strictly in Σ_1^0 by the definition given at the beginning of this article. The class Σ_1^0, and thus all higher classes in the hierarchy, remain unaffected. A more semantic variation of the hierarchy can be defined on all finitary relations on the natural numbers; the following definition is used. Every computable relation is defined to be Σ_0^0 = Π_0^0 = Δ_0^0. The classifications Σ_n^0 and Π_n^0 are defined inductively with the following rules. This variation slightly changes the classification of some sets. In particular, Σ_0^0 = Π_0^0 = Δ_0^0, as a class of sets (definable by the relations in the class), is identical to Δ_1^0 as the latter was formerly defined. It can be extended to cover finitary relations on the natural numbers, Baire space, and Cantor space. The following properties hold for the arithmetical hierarchy of sets of natural numbers and the arithmetical hierarchy of subsets of Cantor or Baire space. If S is a Turing computable set, then both S and its complement are recursively enumerable (if T is a Turing machine giving 1 for inputs in S and 0 otherwise, we may build a Turing machine halting only on the former, and another halting only on the latter). 
By Post's theorem, both S and its complement are in Σ_1^0. This means that S is both in Σ_1^0 and in Π_1^0, and hence it is in Δ_1^0. Similarly, for every set S in Δ_1^0, both S and its complement are in Σ_1^0 and are therefore (by Post's theorem) recursively enumerable by some Turing machines T_1 and T_2, respectively. For every number n, exactly one of these halts. We may therefore construct a Turing machine T that alternates between T_1 and T_2, halting and returning 1 when the former halts and halting and returning 0 when the latter halts. Thus T halts on every n and returns whether it is in S, so S is computable. The Turing computable sets of natural numbers are exactly the sets at level Δ_1^0 of the arithmetical hierarchy, and the recursively enumerable sets are exactly the sets at level Σ_1^0. No oracle machine is capable of solving its own halting problem (a variation of Turing's proof applies). The halting problem for a Δ_n^{0,Y} oracle in fact sits in Σ_{n+1}^{0,Y}. Post's theorem establishes a close connection between the arithmetical hierarchy of sets of natural numbers and the Turing degrees. In particular, it establishes the following facts for all n ≥ 1: The polynomial hierarchy is a "feasible resource-bounded" version of the arithmetical hierarchy in which polynomial length bounds are placed on the numbers involved (or, equivalently, polynomial time bounds are placed on the Turing machines involved). It gives a finer classification of some sets of natural numbers that are at level Δ_1^0 of the arithmetical hierarchy.
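The two directions of the Δ_1^0 argument above can be sketched in miniature. This is an illustration only: Turing machines are replaced by Python predicates, and the "unbounded search" of a semi-decider is modeled by a fuel parameter; the names are all invented for the sketch. From a decider for S we obtain semi-deciders for S and its complement, and from the two semi-deciders we rebuild a decider by alternating between them with increasing fuel, mirroring the machine T in the text.

```python
def semi_decider_from(decider, want):
    # Returns a procedure that "halts" (returns True) only on inputs n
    # with decider(n) == want; otherwise it runs out of fuel (returns None).
    def semi(n, fuel=1):
        for _ in range(fuel):
            if decider(n) == want:
                return True
        return None  # still "running"
    return semi

def decider_from(semi_in, semi_out):
    # Alternates between the two semi-deciders; exactly one of them
    # eventually halts on each n, so this loop always terminates.
    def decide(n):
        fuel = 1
        while True:
            if semi_in(n, fuel=fuel):
                return True
            if semi_out(n, fuel=fuel):
                return False
            fuel += 1
    return decide

is_even = lambda n: n % 2 == 0
dec = decider_from(semi_decider_from(is_even, True),
                   semi_decider_from(is_even, False))
assert [dec(n) for n in range(5)] == [True, False, True, False, True]
```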
https://en.wikipedia.org/wiki/Arithmetical_hierarchy
In physics, the Bekenstein bound (named after Jacob Bekenstein) is an upper limit on the thermodynamic entropy S, or Shannon entropy H, that can be contained within a given finite region of space which has a finite amount of energy, or conversely, the maximum amount of information that is required to perfectly describe a given physical system down to the quantum level.[1] It implies that the information of a physical system, or the information necessary to perfectly describe that system, must be finite if the region of space and the energy are finite. The universal form of the bound was originally found by Jacob Bekenstein in 1981 as the inequality[1][2][3] S ≤ 2πkRE/(ħc), where S is the entropy, k is the Boltzmann constant, R is the radius of a sphere that can enclose the given system, E is the total mass–energy including any rest masses, ħ is the reduced Planck constant, and c is the speed of light. Note that while gravity plays a significant role in its enforcement, the expression for the bound does not contain the gravitational constant G, and so it ought to apply to quantum field theory in curved spacetime. The Bekenstein–Hawking boundary entropy of three-dimensional black holes exactly saturates the bound. The Schwarzschild radius is given by r_s = 2GM/c², so the two-dimensional area of the black hole's event horizon is A = 4πr_s² = 16πG²M²/c⁴, and using the Planck length, l_P² = ħG/c³, the Bekenstein–Hawking entropy is S = kA/(4l_P²) = 4πkGM²/(ħc). One interpretation of the bound makes use of the microcanonical formula for entropy, S = k log Ω, where Ω is the number of energy eigenstates accessible to the system. 
This is equivalent to saying that the dimension of the Hilbert space describing the system is[4][5] dim H = exp(2πRE/(ħc)). The bound is closely associated with black hole thermodynamics, the holographic principle, and the covariant entropy bound of quantum gravity, and can be derived from a conjectured strong form of the latter.[4] Bekenstein derived the bound from heuristic arguments involving black holes. If a system exists that violates the bound, i.e., by having too much entropy, Bekenstein argued that it would be possible to violate the second law of thermodynamics by lowering it into a black hole. In 1995, Ted Jacobson demonstrated that the Einstein field equations (i.e., general relativity) can be derived by assuming that the Bekenstein bound and the laws of thermodynamics are true.[6][7] However, while a number of arguments were devised which show that some form of the bound must exist in order for the laws of thermodynamics and general relativity to be mutually consistent, the precise formulation of the bound was a matter of debate until Casini's work in 2008.[2][3][8][9][10][11][12][13][14][15][16] The following is a heuristic derivation that shows S ≤ K·kRE/(ħc) for some constant K. Showing that K = 2π requires a more technical analysis. Suppose we have a black hole of mass M; then the Schwarzschild radius of the black hole is R_bh ~ GM/c², and the Bekenstein–Hawking entropy of the black hole is ~ kc³R_bh²/(ħG) ~ kGM²/(ħc). Now take a box of energy E, entropy S, and side length R. 
If we throw the box into the black hole, the mass of the black hole goes up to M + E/c², and the entropy goes up by kGME/(ħc³). Since entropy does not decrease, kGME/(ħc³) ≳ S. In order for the box to fit inside the black hole, R ≲ GM/c². If the two are comparable, R ~ GM/c², then we have derived the Bekenstein bound: S ≲ kRE/(ħc). A proof of the Bekenstein bound in the framework of quantum field theory was given in 2008 by Casini.[17] One of the crucial insights of the proof was to find a proper interpretation of the quantities appearing on both sides of the bound. Naive definitions of entropy and energy density in quantum field theory suffer from ultraviolet divergences. In the case of the Bekenstein bound, ultraviolet divergences can be avoided by taking differences between quantities computed in an excited state and the same quantities computed in the vacuum state. For example, given a spatial region V, Casini defines the entropy on the left-hand side of the Bekenstein bound as S_V = S(ρ_V) − S(ρ_V^0) = −tr(ρ_V log ρ_V) + tr(ρ_V^0 log ρ_V^0), where S(ρ_V) is the von Neumann entropy of the reduced density matrix ρ_V associated with V in the excited state ρ, and S(ρ_V^0) is the corresponding von Neumann entropy for the vacuum state ρ^0. On the right-hand side of the Bekenstein bound, a difficult point is to give a rigorous interpretation of the quantity 2πRE, where R is a characteristic length scale of the system and E is a characteristic energy. 
This product has the same units as the generator of a Lorentz boost, and the natural analog of a boost in this situation is the modular Hamiltonian of the vacuum state, K = −log ρ_V^0. Casini defines the right-hand side of the Bekenstein bound as the difference between the expectation value of the modular Hamiltonian in the excited state and in the vacuum state, K_V = tr(K ρ_V) − tr(K ρ_V^0). With these definitions, the bound reads S_V ≤ K_V, which can be rearranged to give tr(ρ_V log ρ_V) − tr(ρ_V log ρ_V^0) ≥ 0. This is simply the statement of positivity of quantum relative entropy, which proves the Bekenstein bound. However, the modular Hamiltonian can only be interpreted as a weighted form of energy for conformal field theories, and when V is a sphere. This construction allows us to make sense of the Casimir effect,[4] where the localized energy density is lower than that of the vacuum, i.e. a negative localized energy. The localized entropy of the vacuum is nonzero, and so the Casimir effect is possible for states with a lower localized entropy than that of the vacuum. Hawking radiation can be explained by dumping localized negative energy into a black hole.
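The saturation claim earlier in the article can be checked numerically: plugging R = r_s and E = Mc² into the bound 2πkRE/(ħc) reproduces the Bekenstein–Hawking entropy 4πkGM²/(ħc) exactly. The sketch below uses approximate CODATA-style constant values and a solar mass; the variable names are invented for the illustration.

```python
import math

G    = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c    = 2.998e8     # speed of light, m/s
hbar = 1.055e-34   # reduced Planck constant, J s
k    = 1.381e-23   # Boltzmann constant, J/K

M   = 1.989e30            # one solar mass, kg
r_s = 2 * G * M / c**2    # Schwarzschild radius
E   = M * c**2            # total mass-energy

# Bekenstein bound with R = r_s and E = M c^2 ...
bekenstein_bound = 2 * math.pi * k * r_s * E / (hbar * c)
# ... equals the Bekenstein-Hawking entropy 4*pi*k*G*M^2/(hbar*c):
bh_entropy = 4 * math.pi * k * G * M**2 / (hbar * c)

assert math.isclose(bekenstein_bound, bh_entropy, rel_tol=1e-12)
```

The agreement is algebraic, not numerical coincidence: 2πk(2GM/c²)(Mc²)/(ħc) simplifies term by term to 4πkGM²/(ħc).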
https://en.wikipedia.org/wiki/Bekenstein_bound
BlooP and FlooP (Bounded loop and Free loop) are simple programming languages designed by Douglas Hofstadter to illustrate a point in his book Gödel, Escher, Bach.[1] BlooP is a Turing-incomplete programming language whose main control flow structure is a bounded loop (i.e. recursion is not permitted). All programs in the language must terminate, and this language can only express primitive recursive functions.[2] FlooP is identical to BlooP except that it supports unbounded loops; it is a Turing-complete language and can express all computable functions. For example, it can express the Ackermann function, which (not being primitive recursive) cannot be written in BlooP. Borrowing from standard terminology in mathematical logic,[3][4] Hofstadter calls FlooP's unbounded loops MU-loops. Like all Turing-complete programming languages, FlooP suffers from the halting problem: programs might not terminate, and it is not possible, in general, to decide which programs do. BlooP and FlooP can be regarded as models of computation, and have sometimes been used in teaching computability.[5] The only variables are OUTPUT (the return value of the procedure) and CELL(i) (an unbounded sequence of natural-number variables, indexed by constants, as in the Unlimited Register Machine[6]). The only operators are ⇐ (assignment), + (addition), × (multiplication), < (less-than), > (greater-than) and = (equals). Each program uses only a finite number of cells, but the numbers in the cells can be arbitrarily large. Data structures such as lists or stacks can be handled by interpreting the number in a cell in specific ways, that is, by Gödel numbering the possible structures. Control flow constructs include bounded loops, conditional statements, ABORT jumps out of loops, and QUIT jumps out of blocks. BlooP does not permit recursion, unrestricted jumps, or anything else that would have the same effect as the unbounded loops of FlooP. 
Named procedures can be defined, but these can call only previously defined procedures.[7] Subtraction is not a built-in operation and (being defined on natural numbers) never gives a negative result (e.g. 2 − 3 := 0). Note that OUTPUT starts at 0, like all the CELLs, and therefore requires no initialization. The example below, which implements the Ackermann function, relies on simulating a stack using Gödel numbering: that is, on previously defined numerical functions PUSH, POP, and TOP satisfying PUSH [N, S] > 0, TOP [PUSH [N, S]] = N, and POP [PUSH [N, S]] = S. Since an unbounded MU-LOOP is used, this is not a legal BlooP program. The QUIT BLOCK instructions in this case jump to the end of the block and repeat the loop, unlike ABORT, which exits the loop.[3]
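The shape of that FlooP example can be sketched in Python: the Ackermann function computed with an explicit stack and a single unbounded loop (the analogue of a MU-LOOP), with no recursion. This is an illustration only, not Hofstadter's code; the stack here is a Python list, whereas in FlooP it would be a Gödel-numbered natural number manipulated via PUSH, POP, and TOP.

```python
def ackermann(m, n):
    # Pending "outer arguments" live on an explicit stack, so the whole
    # computation is one unbounded loop -- expressible in FlooP, not BlooP.
    stack = [m]
    while stack:
        m = stack.pop()
        if m == 0:
            n += 1                 # A(0, n) = n + 1
        elif n == 0:
            stack.append(m - 1)    # A(m, 0) = A(m - 1, 1)
            n = 1
        else:
            stack.append(m - 1)    # A(m, n) = A(m - 1, A(m, n - 1))
            stack.append(m)
            n -= 1
    return n

assert ackermann(2, 3) == 9
assert ackermann(3, 3) == 61
```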
https://en.wikipedia.org/wiki/BlooP_and_FlooP
In the computer science subfield of algorithmic information theory, a Chaitin constant (Chaitin omega number)[1] or halting probability is a real number that, informally speaking, represents the probability that a randomly constructed program will halt. These numbers are formed from a construction due to Gregory Chaitin. Although there are infinitely many halting probabilities, one for each (universal, see below) method of encoding programs, it is common to use the letter Ω to refer to them as if there were only one. Because Ω depends on the program encoding used, it is sometimes called Chaitin's construction when not referring to any specific encoding. Each halting probability is a normal and transcendental real number that is not computable, which means that there is no algorithm to compute its digits. Each halting probability is Martin-Löf random, meaning there is not even any algorithm which can reliably guess its digits. The definition of a halting probability relies on the existence of a prefix-free universal computable function. Such a function, intuitively, represents a program in a programming language with the property that no valid program can be obtained as a proper extension of another valid program. Suppose that F is a partial function that takes one argument, a finite binary string, and possibly returns a single binary string as output. The function F is called computable if there is a Turing machine that computes it, in the sense that for any finite binary strings x and y, F(x) = y if and only if the Turing machine halts with y on its tape when given the input x. The function F is called universal if for every computable function f of a single variable there is a string w such that for all x, F(wx) = f(x); here wx represents the concatenation of the two strings w and x. This means that F can be used to simulate any computable function of one variable. 
Informally, w represents a "script" for the computable function f, and F represents an "interpreter" that parses the script as a prefix of its input and then executes it on the remainder of the input. The domain of F is the set of all inputs p on which it is defined. For F that are universal, such a p can generally be seen both as the concatenation of a program part and a data part, and as a single program for the function F. The function F is called prefix-free if there are no two elements p, p′ in its domain such that p′ is a proper extension of p. This can be rephrased as: the domain of F is a prefix-free code (instantaneous code) on the set of finite binary strings. A simple way to enforce prefix-freeness is to use machines whose means of input is a binary stream from which bits can be read one at a time. There is no end-of-stream marker; the end of input is determined by when the universal machine decides to stop reading more bits, and the remaining bits are not considered part of the accepted string. Here, the difference between the two notions of program mentioned in the last paragraph becomes clear: one is easily recognized by some grammar, while the other requires arbitrary computation to recognize. The domain of any universal computable function is a computably enumerable set but never a computable set. The domain is always Turing equivalent to the halting problem. Let P_F be the domain of a prefix-free universal computable function F. The constant Ω_F is then defined as Ω_F = Σ_{p ∈ P_F} 2^(−|p|), where |p| denotes the length of a string p. This is an infinite sum which has one summand for every p in the domain of F. The requirement that the domain be prefix-free, together with Kraft's inequality, ensures that this sum converges to a real number between 0 and 1. If F is clear from context then Ω_F may be denoted simply Ω, although different prefix-free universal computable functions lead to different values of Ω. 
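The definition of Ω_F can be illustrated on a toy scale. The sketch below uses a small hand-picked prefix-free set of "halting programs" rather than a real universal machine, so the resulting number is merely Ω-like; it shows how prefix-freeness plus Kraft's inequality keeps the sum Σ 2^(−|p|) in (0, 1].

```python
def is_prefix_free(strings):
    # No string in the set may be a proper prefix of another.
    return not any(a != b and b.startswith(a)
                   for a in strings for b in strings)

# A hand-picked stand-in for the domain of a prefix-free machine.
domain = ["0", "10", "110", "1110"]
assert is_prefix_free(domain)

omega = sum(2 ** -len(p) for p in domain)
assert omega == 0.9375        # 1/2 + 1/4 + 1/8 + 1/16
assert 0 < omega <= 1         # guaranteed by Kraft's inequality
```

Adding any string that extends an existing element (say "01") would break prefix-freeness, and without that restriction the sum could exceed 1.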
Knowing the first N bits of Ω, one could calculate the halting problem for all programs of size up to N. Let the program p for which the halting problem is to be solved be N bits long. In dovetailing fashion, all programs of all lengths are run, until enough have halted to jointly contribute enough probability to match these first N bits. If the program p has not halted yet, then it never will, since its contribution to the halting probability would affect the first N bits. Thus, the halting problem would be solved for p. Because many outstanding problems in number theory, such as Goldbach's conjecture, are equivalent to solving the halting problem for special programs (which would basically search for counter-examples and halt if one is found), knowing enough bits of Chaitin's constant would also imply knowing the answer to these problems. But as the halting problem is not generally solvable, calculating any but the first few bits of Chaitin's constant is not possible for a universal language. This reduces hard problems to impossible ones, much like trying to build an oracle machine for the halting problem would be. The Cantor space is the collection of all infinite sequences of 0s and 1s. A halting probability can be interpreted as the measure of a certain subset of Cantor space under the usual probability measure on Cantor space. It is from this interpretation that halting probabilities take their name. The probability measure on Cantor space, sometimes called the fair-coin measure, is defined so that for any binary string x the set of sequences that begin with x has measure 2^(−|x|). This implies that for each natural number n, the set of sequences f in Cantor space such that f(n) = 1 has measure 1/2, and the set of sequences whose nth element is 0 also has measure 1/2. Let F be a prefix-free universal computable function. 
The domain P of F consists of an infinite set of binary strings P = {p_1, p_2, …}. Each of these strings p_i determines a subset S_i of Cantor space; the set S_i contains all sequences in Cantor space that begin with p_i. These sets are disjoint because P is a prefix-free set. The sum Σ_{p ∈ P} 2^(−|p|) represents the measure of the set ⋃_{i ∈ ℕ} S_i. In this way, Ω_F represents the probability that a randomly selected infinite sequence of 0s and 1s begins with a bit string (of some finite length) that is in the domain of F. It is for this reason that Ω_F is called a halting probability. Each Chaitin constant Ω has the following properties: Not every set that is Turing equivalent to the halting problem is a halting probability. A finer equivalence relation, Solovay equivalence, can be used to characterize the halting probabilities among the left-c.e. reals.[4] One can show that a real number in [0,1] is a Chaitin constant (i.e. the halting probability of some prefix-free universal computable function) if and only if it is left-c.e. and algorithmically random.[4] Ω is among the few definable algorithmically random numbers and is the best-known algorithmically random number, but it is not at all typical of all algorithmically random numbers.[5] A real number is called computable if there is an algorithm which, given n, returns the first n digits of the number. This is equivalent to the existence of a program that enumerates the digits of the real number. No halting probability is computable. The proof of this fact relies on an algorithm which, given the first n digits of Ω, solves Turing's halting problem for programs of length up to n. Since the halting problem is undecidable, Ω cannot be computed. The algorithm proceeds as follows. 
Given the first n digits of Ω and a k ≤ n, the algorithm enumerates the domain of F until enough elements of the domain have been found so that the probability they represent is within 2^(−(k+1)) of Ω. After this point, no additional program of length k can be in the domain, because each such program would add 2^(−k) to the measure, which is impossible. Thus the set of strings of length k in the domain is exactly the set of such strings already enumerated. A real number is random if the binary sequence representing it is an algorithmically random sequence. Calude, Hertling, Khoussainov, and Wang showed[6] that a recursively enumerable real number is an algorithmically random sequence if and only if it is a Chaitin Ω number. For each specific consistent effectively represented axiomatic system for the natural numbers, such as Peano arithmetic, there exists a constant N such that no bit of Ω after the Nth can be proven to be 1 or 0 within that system. The constant N depends on how the formal system is effectively represented, and thus does not directly reflect the complexity of the axiomatic system. This incompleteness result is similar to Gödel's incompleteness theorem in that it shows that no consistent formal theory for arithmetic can be complete. The first n bits of Gregory Chaitin's constant Ω are random or incompressible in the sense that they cannot be computed by a halting algorithm with fewer than n − O(1) bits. However, consider the short but never-halting algorithm which systematically lists and runs all possible programs; whenever one of them halts, its probability gets added to the output (initialized to zero). After finite time the first n bits of the output will never change any more (it does not matter that this time itself is not computable by a halting program). So there is a short non-halting algorithm whose output converges (after finite time) onto the first n bits of Ω. 
In other words, the enumerable first n bits of Ω are highly compressible in the sense that they are limit-computable by a very short algorithm; they are not random with respect to the set of enumerating algorithms. Jürgen Schmidhuber constructed a limit-computable "Super Ω" which in a sense is much more random than the original limit-computable Ω, as one cannot significantly compress the Super Ω by any enumerating non-halting algorithm.[7] For an alternative "Super Ω", the universality probability of a prefix-free universal Turing machine (UTM) – namely, the probability that it remains universal even when every input of it (as a binary string) is prefixed by a random binary string – can be seen as the non-halting probability of a machine with oracle the third iteration of the halting problem (i.e., O(3) using Turing jump notation).[8]
https://en.wikipedia.org/wiki/Omega_(computer_science)
The Chinese room argument holds that a computer executing a program cannot have a mind, understanding, or consciousness,[a] regardless of how intelligently or human-like the program may make the computer behave. The argument was presented in a 1980 paper by the philosopher John Searle entitled "Minds, Brains, and Programs" and published in the journal Behavioral and Brain Sciences.[1] Before Searle, similar arguments had been presented by figures including Gottfried Wilhelm Leibniz (1714), Anatoly Dneprov (1961), Lawrence Davis (1974) and Ned Block (1978). Searle's version has been widely discussed in the years since.[2] The centerpiece of Searle's argument is a thought experiment known as the Chinese room.[3] In the thought experiment, Searle imagines a person who does not understand Chinese isolated in a room with a book containing detailed instructions for manipulating Chinese symbols. When Chinese text is passed into the room, the person follows the book's instructions to produce Chinese symbols that, to fluent Chinese speakers outside the room, appear to be appropriate responses. According to Searle, the person is just following syntactic rules without semantic comprehension, and neither the human nor the room as a whole understands Chinese. He contends that when computers execute programs, they are similarly just applying syntactic rules without any real understanding or thinking.[4] The argument is directed against the philosophical positions of functionalism and computationalism,[5] which hold that the mind may be viewed as an information-processing system operating on formal symbols, and that simulation of a given mental state is sufficient for its presence. 
Specifically, the argument is intended to refute a position Searle calls the strong AI hypothesis:[b] "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds."[c] Although its proponents originally presented the argument in reaction to statements of artificial intelligence (AI) researchers, it is not an argument against the goals of mainstream AI research, because it does not show a limit in the amount of intelligent behavior a machine can display.[6] The argument applies only to digital computers running programs and does not apply to machines in general.[4] While widely discussed, the argument has been subject to significant criticism and remains controversial among philosophers of mind and AI researchers.[7][8] Suppose that artificial intelligence research has succeeded in programming a computer to behave as if it understands Chinese. The machine accepts Chinese characters as input, carries out each instruction of the program step by step, and then produces Chinese characters as output. The machine does this so perfectly that no one can tell that they are communicating with a machine and not a hidden Chinese speaker.[4] The questions at issue are these: does the machine actually understand the conversation, or is it just simulating the ability to understand the conversation? Does the machine have a mind in exactly the same sense that people do, or is it just acting as if it had a mind?[4] Now suppose that Searle is in a room with an English version of the program, along with sufficient pencils, paper, erasers and filing cabinets. Chinese characters are slipped in under the door; he follows the program step by step, which eventually instructs him to slide other Chinese characters back out under the door. 
If the computer had passed the Turing test this way, it follows that Searle would do so as well, simply by running the program by hand.[4] Searle asserts that there is no essential difference between the roles of the computer and himself in the experiment. Each simply follows a program, step by step, producing behavior that makes them appear to understand. However, Searle would not be able to understand the conversation. Therefore, he argues, it follows that the computer would not be able to understand the conversation either.[4] Searle argues that, without "understanding" (or "intentionality"), we cannot describe what the machine is doing as "thinking" and, since it does not think, it does not have a "mind" in the normal sense of the word. Therefore, he concludes that the strong AI hypothesis is false: a computer running a program that simulates a mind would not have a mind in the same sense that human beings have a mind.[4] Gottfried Leibniz made a similar argument in 1714 against mechanism (the idea that everything that makes up a human being could, in principle, be explained in mechanical terms; in other words, that a person, including their mind, is merely a very complex machine). Leibniz used the thought experiment of expanding the brain until it was the size of a mill.[9] Leibniz found it difficult to imagine that a "mind" capable of "perception" could be constructed using only mechanical processes.[d] Peter Winch made the same point in his book The Idea of a Social Science and its Relation to Philosophy (1958), where he provides an argument to show that "a man who understands Chinese is not a man who has a firm grasp of the statistical probabilities for the occurrence of the various words in the Chinese language" (p. 108). The Soviet cyberneticist Anatoly Dneprov made an essentially identical argument in 1961, in the form of the short story "The Game". 
In it, a stadium of people act as switches and memory cells implementing a program to translate a sentence of Portuguese, a language that none of them know.[10] The game was organized by a "Professor Zarubin" to answer the question "Can mathematical machines think?" Speaking through Zarubin, Dneprov writes "the only way to prove that machines can think is to turn yourself into a machine and examine your thinking process", and he concludes, as Searle does, "We've proven that even the most perfect simulation of machine thinking is not the thinking process itself." In 1974, Lawrence H. Davis imagined duplicating the brain using telephone lines and offices staffed by people, and in 1978 Ned Block envisioned the entire population of China involved in such a brain simulation. This thought experiment is called the China brain, also the "Chinese Nation" or the "Chinese Gym".[11] Searle's version appeared in his 1980 paper "Minds, Brains, and Programs", published in Behavioral and Brain Sciences.[1] It eventually became the journal's "most influential target article",[2] generating an enormous number of commentaries and responses in the ensuing decades, and Searle has continued to defend and refine the argument in many papers, popular articles and books. David Cole writes that "the Chinese Room argument has probably been the most widely discussed philosophical argument in cognitive science to appear in the past 25 years".[12] Most of the discussion consists of attempts to refute it.
"The overwhelming majority", notes Behavioral and Brain Sciences editor Stevan Harnad,[e] "still think that the Chinese Room Argument is dead wrong".[13] The sheer volume of the literature that has grown up around it inspired Pat Hayes to comment that the field of cognitive science ought to be redefined as "the ongoing research program of showing Searle's Chinese Room Argument to be false".[14] Searle's argument has become "something of a classic in cognitive science", according to Harnad.[13] Varol Akman agrees, and has described the original paper as "an exemplar of philosophical clarity and purity".[15] Although the Chinese Room argument was originally presented in reaction to the statements of artificial intelligence researchers, philosophers have come to consider it an important part of the philosophy of mind. It is a challenge to functionalism and the computational theory of mind,[f] and is related to such questions as the mind–body problem, the problem of other minds, the symbol grounding problem, and the hard problem of consciousness.[a] Searle identified a philosophical position he calls "strong AI": The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.[c] The definition depends on the distinction between simulating a mind and actually having one. Searle writes that "according to Strong AI, the correct simulation really is a mind. According to Weak AI, the correct simulation is a model of the mind."[22] The claim is implicit in some of the statements of early AI researchers and analysts. For example, in 1955, AI founder Herbert A.
Simon declared that "there are now in the world machines that think, that learn and create".[23] Simon, together with Allen Newell and Cliff Shaw, after having completed the first program that could do formal reasoning (the Logic Theorist), claimed that they had "solved the venerable mind–body problem, explaining how a system composed of matter can have the properties of mind."[24] John Haugeland wrote that "AI wants only the genuine article: machines with minds, in the full and literal sense. This is not science fiction, but real science, based on a theoretical conception as deep as it is daring: namely, we are, at root, computers ourselves."[25] Searle also ascribes several further claims to advocates of strong AI. In more recent presentations of the Chinese room argument, Searle has identified "strong AI" as "computer functionalism" (a term he attributes to Daniel Dennett).[5][30] Functionalism is a position in modern philosophy of mind that holds that we can define mental phenomena (such as beliefs, desires, and perceptions) by describing their functions in relation to each other and to the outside world. Because a computer program can accurately represent functional relationships as relationships between symbols, a computer can have mental phenomena if it runs the right program, according to functionalism. Stevan Harnad argues that Searle's depictions of strong AI can be reformulated as "recognizable tenets of computationalism, a position (unlike "strong AI") that is actually held by many thinkers, and hence one worth refuting."[31] Computationalism[i] is the position in the philosophy of mind which argues that the mind can be accurately described as an information-processing system. Harnad identifies several such "tenets" of computationalism.[34] Recent philosophical discussions have revisited the implications of computationalism for artificial intelligence.
Goldstein and Levinstein explore whether large language models (LLMs) like ChatGPT can possess minds, focusing on their ability to exhibit folk psychology, including beliefs, desires, and intentions. The authors argue that LLMs satisfy several philosophical theories of mental representation, such as informational, causal, and structural theories, by demonstrating robust internal representations of the world. However, they highlight that the evidence for LLMs having the action dispositions necessary for belief-desire psychology remains inconclusive. Additionally, they refute common skeptical challenges, such as the "stochastic parrots" argument and concerns over memorization, asserting that LLMs exhibit structured internal representations that align with these philosophical criteria.[35] David Chalmers suggests that while current LLMs lack features like recurrent processing and unified agency, advancements in AI could address these limitations within the next decade, potentially enabling systems to achieve consciousness. This perspective challenges Searle's original claim that purely "syntactic" processing cannot yield understanding or consciousness, arguing instead that such systems could have authentic mental states.[36] Searle holds a philosophical position he calls "biological naturalism": that consciousness[a] and understanding require specific biological machinery that is found in brains. He writes "brains cause minds"[37] and that "actual human mental phenomena [are] dependent on actual physical–chemical properties of actual human brains".[37] Searle argues that this machinery (known in neuroscience as the "neural correlates of consciousness") must have some causal powers that permit the human experience of consciousness.[38] Searle's belief in the existence of these powers has been criticized.
Searle does not disagree with the notion that machines can have consciousness and understanding, because, as he writes, "we are precisely such machines".[4] Searle holds that the brain is, in fact, a machine, but that the brain gives rise to consciousness and understanding using specific machinery. If neuroscience is able to isolate the mechanical process that gives rise to consciousness, then Searle grants that it may be possible to create machines that have consciousness and understanding. However, without the specific machinery required, Searle does not believe that consciousness can occur. Biological naturalism implies that one cannot determine whether the experience of consciousness is occurring merely by examining how a system functions, because the specific machinery of the brain is essential. Thus, biological naturalism is directly opposed to both behaviorism and functionalism (including "computer functionalism" or "strong AI").[39] Biological naturalism is similar to identity theory (the position that mental states are "identical to" or "composed of" neurological events); however, Searle has specific technical objections to identity theory.[40][j] Searle's biological naturalism and strong AI are both opposed to Cartesian dualism,[39] the classical idea that the brain and mind are made of different "substances". Indeed, Searle accuses strong AI of dualism, writing that "strong AI only makes sense given the dualistic assumption that, where the mind is concerned, the brain doesn't matter".[26] Searle's original presentation emphasized understanding (that is, mental states with intentionality) and did not directly address other closely related ideas such as "consciousness". However, in more recent presentations, Searle has included consciousness as the real target of the argument.[5] Computational models of consciousness are not sufficient by themselves for consciousness.
The computational model for consciousness stands to consciousness in the same way the computational model of anything stands to the domain being modelled. Nobody supposes that the computational model of rainstorms in London will leave us all wet. But they make the mistake of supposing that the computational model of consciousness is somehow conscious. It is the same mistake in both cases.[41] David Chalmers writes, "it is fairly clear that consciousness is at the root of the matter" of the Chinese room.[42] Colin McGinn argues that the Chinese room provides strong evidence that the hard problem of consciousness is fundamentally insoluble. The argument, to be clear, is not about whether a machine can be conscious, but about whether it (or anything else for that matter) can be shown to be conscious. It is plain that any other method of probing the occupant of a Chinese room has the same difficulties in principle as exchanging questions and answers in Chinese. It is simply not possible to divine whether a conscious agency or some clever simulation inhabits the room.[43] Searle argues that this is only true for an observer outside of the room. The whole point of the thought experiment is to put someone inside the room, where they can directly observe the operations of consciousness. Searle claims that from his vantage point within the room there is nothing he can see that could imaginably give rise to consciousness, other than himself, and clearly he does not have a mind that can speak Chinese. In Searle's words, "the computer has nothing more than I have in the case where I understand nothing".[44] Patrick Hew used the Chinese Room argument to deduce requirements for military command and control systems if they are to preserve a commander's moral agency. He drew an analogy between a commander in their command center and the person in the Chinese Room, and analyzed it under a reading of Aristotle's notions of "compulsory" and "ignorance".
Information could be "down converted" from meaning to symbols, and manipulated symbolically, but moral agency could be undermined if there was inadequate "up conversion" into meaning. Hew cited examples from the USS Vincennes incident.[45] The Chinese room argument is primarily an argument in the philosophy of mind, and both major computer scientists and artificial intelligence researchers consider it irrelevant to their fields.[6] However, several concepts developed by computer scientists are essential to understanding the argument, including symbol processing, Turing machines, Turing completeness, and the Turing test. Searle's arguments are not usually considered an issue for AI research. The primary mission of artificial intelligence research is only to create useful systems that act intelligently, and it does not matter if the intelligence is "merely" a simulation. AI researchers Stuart J. Russell and Peter Norvig wrote in 2021: "We are interested in programs that behave intelligently. Individual aspects of consciousness—awareness, self-awareness, attention—can be programmed and can be part of an intelligent machine. The additional project of making a machine conscious in exactly the way humans are is not one that we are equipped to take on."[6] Searle does not disagree that AI research can create machines that are capable of highly intelligent behavior. The Chinese room argument leaves open the possibility that a digital machine could be built that acts more intelligently than a person, but does not have a mind or intentionality in the same way that brains do. Searle's "strong AI hypothesis" should not be confused with "strong AI" as defined by Ray Kurzweil and other futurists,[46][21] who use the term to describe machine intelligence that rivals or exceeds human intelligence—that is, artificial general intelligence, human-level AI or superintelligence.
Kurzweil is referring primarily to the amount of intelligence displayed by the machine, whereas Searle's argument sets no limit on this. Searle argues that a superintelligent machine would not necessarily have a mind and consciousness. The Chinese room implements a version of the Turing test.[48] Alan Turing introduced the test in 1950 to help answer the question "can machines think?" In the standard version, a human judge engages in a natural language conversation with a human and a machine designed to generate performance indistinguishable from that of a human being. All participants are separated from one another. If the judge cannot reliably tell the machine from the human, the machine is said to have passed the test. Turing then considered each possible objection to the proposal "machines can think", and found that there are simple, obvious answers if the question is de-mystified in this way. He did not, however, intend for the test to measure for the presence of "consciousness" or "understanding". He did not believe this was relevant to the issues that he was addressing. He wrote: I do not wish to give the impression that I think there is no mystery about consciousness. There is, for instance, something of a paradox connected with any attempt to localise it. But I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned in this paper.[48] To Searle, as a philosopher investigating the nature of mind and consciousness, these are the relevant mysteries. The Chinese room is designed to show that the Turing test is insufficient to detect the presence of consciousness, even if the room can behave or function as a conscious mind would. Computers manipulate physical objects in order to carry out calculations and do simulations. AI researchers Allen Newell and Herbert A. Simon called this kind of machine a physical symbol system. It is also equivalent to the formal systems used in the field of mathematical logic.
Searle emphasizes the fact that this kind of symbol manipulation is syntactic (borrowing a term from the study of grammar). The computer manipulates the symbols using a form of syntax, without any knowledge of the symbols' semantics (that is, their meaning). Newell and Simon had conjectured that a physical symbol system (such as a digital computer) had all the necessary machinery for "general intelligent action", or, as it is known today, artificial general intelligence. They framed this as a philosophical position, the physical symbol system hypothesis: "A physical symbol system has the necessary and sufficient means for general intelligent action."[49][50] The Chinese room argument does not refute this, because it is framed in terms of "intelligent action", i.e. the external behavior of the machine, rather than the presence or absence of understanding, consciousness and mind. Twenty-first century AI programs (such as "deep learning") do mathematical operations on huge matrices of unidentified numbers and bear little resemblance to the symbolic processing used by AI programs at the time Searle wrote his critique in 1980. Nils Nilsson describes systems like these as "dynamic" rather than "symbolic". Nilsson notes that these are essentially digitized representations of dynamic systems—the individual numbers do not have a specific semantics, but are instead samples or data points from a dynamic signal, and it is the signal being approximated which would have semantics. Nilsson argues it is not reasonable to consider these signals as "symbol processing" in the same sense as the physical symbol systems hypothesis.[51] The Chinese room has a design analogous to that of a modern computer. It has a Von Neumann architecture, which consists of a program (the book of instructions), some memory (the papers and file cabinets), a machine that follows the instructions (the man), and a means to write symbols in memory (the pencil and eraser).
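The correspondence between the room and a step-by-step machine can be sketched as a minimal Turing-machine-style interpreter: a rule table (the book of instructions), a tape (the papers and file cabinets), and a head that blindly applies rules (the man). The particular rules below, which invert a string of bits, are an invented example; the point is only that each step is pure symbol manipulation.

```python
# Rule table: (state, symbol read) -> (next state, symbol to write, head move).
# Nothing in the table refers to what the bits "mean"; the machine only
# matches shapes and writes shapes, like the man in the room.
RULES = {
    ("invert", "0"): ("invert", "1", +1),
    ("invert", "1"): ("invert", "0", +1),
    ("invert", "_"): ("halt",   "_", 0),   # blank cell: stop
}

def run(tape_str: str) -> str:
    """Apply rules one at a time until the halt state is reached."""
    tape = list(tape_str) + ["_"]          # "_" marks the blank end of the tape
    state, head = "invert", 0
    while state != "halt":
        state, tape[head], move = RULES[(state, tape[head])]
        head += move
    return "".join(tape).rstrip("_")

print(run("1011"))  # → 0100
```

Given enough tape and rules, an interpreter of exactly this shape can simulate any digital computation, which is the sense in which the room (and the man inside it) is "Turing complete".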
A machine with this design is known in theoretical computer science as "Turing complete", because it has the necessary machinery to carry out any computation that a Turing machine can do, and therefore it is capable of doing a step-by-step simulation of any other digital machine, given enough memory and time. Turing writes, "all digital computers are in a sense equivalent."[52] The widely accepted Church–Turing thesis holds that any function computable by an effective procedure is computable by a Turing machine. The Turing completeness of the Chinese room implies that it can do whatever any other digital computer can do (albeit much, much more slowly). Thus, if the Chinese room does not or cannot contain a Chinese-speaking mind, then no other digital computer can contain a mind. Some replies to Searle begin by arguing that the room, as described, cannot have a Chinese-speaking mind. Arguments of this form, according to Stevan Harnad, are "no refutation (but rather an affirmation)"[53] of the Chinese room argument, because these arguments actually imply that no digital computers can have a mind.[28] There are some critics, such as Hanoch Ben-Yami, who argue that the Chinese room cannot simulate all the abilities of a digital computer, such as being able to determine the current time.[54] Searle has produced a more formal version of the argument of which the Chinese Room forms a part. He presented the first version in 1984. The version given below is from 1990.[55][k] The Chinese room thought experiment is intended to prove point A3.[l] He begins with three axioms: (A1) "Programs are formal (syntactic)." (A2) "Minds have mental contents (semantics)." (A3) "Syntax by itself is neither constitutive of nor sufficient for semantics." Searle posits that these lead directly to this conclusion: (C1) "Programs are neither constitutive of nor sufficient for minds." This much of the argument is intended to show that artificial intelligence can never produce a machine with a mind by writing programs that manipulate symbols. The remainder of the argument addresses a different issue. Is the human brain running a program?
In other words, is the computational theory of mind correct?[f] He begins with an axiom that is intended to express the basic modern scientific consensus about brains and minds: (A4) "Brains cause minds." Searle claims that we can derive "immediately" and "trivially"[56] that: (C2) "Any other system capable of causing minds would have to have causal powers (at least) equivalent to those of brains." And from this he derives the further conclusions: (C3) "Any artifact that produced mental phenomena, any artificial brain, would have to be able to duplicate the specific causal powers of brains, and it could not do that just by running a formal program." (C4) "The way that human brains actually produce mental phenomena cannot be solely by virtue of running a computer program." Refutations of Searle's argument take many different forms (see below). Computationalists and functionalists reject A3, arguing that "syntax" (as Searle describes it) can have "semantics" if the syntax has the right functional structure. Eliminative materialists reject A2, arguing that minds don't actually have "semantics"—that thoughts and other mental phenomena are inherently meaningless but nevertheless function as if they had meaning. Replies to Searle's argument may be classified according to what they claim to show.[m] Some of the arguments (robot and brain simulation, for example) fall into multiple categories. These replies attempt to answer the question: since the man in the room does not speak Chinese, where is the mind that does? These replies address the key ontological issues of mind versus body and simulation vs. reality. All of the replies that identify the mind in the room are versions of "the system reply". The basic version of the system reply argues that it is the "whole system" that understands Chinese.[61][n] While the man understands only English, when he is combined with the program, scratch paper, pencils and file cabinets, they form a system that can understand Chinese.
"Here, understanding is not being ascribed to the mere individual; rather it is being ascribed to this whole system of which he is a part", Searle explains.[29] Searle notes that (in this simple version of the reply) the "system" is nothing more than a collection of ordinary physical objects; it grants the power of understanding and consciousness to "the conjunction of that person and bits of paper"[29] without making any effort to explain how this pile of objects has become a conscious, thinking being. Searle argues that no reasonable person should be satisfied with the reply, unless they are "under the grip of an ideology".[29] For this reply to be remotely plausible, one must take it for granted that consciousness can be the product of an information-processing "system", and does not require anything resembling the actual biology of the brain. Searle then responds by simplifying the list of physical objects: he asks what happens if the man memorizes the rules and keeps track of everything in his head? Then the whole system consists of just one object: the man himself.
Searle argues that if the man does not understand Chinese, then the system does not understand Chinese either, because now "the system" and "the man" both describe exactly the same object.[29] Critics of Searle's response argue that the program has allowed the man to have two minds in one head.[who?] If we assume a "mind" is a form of information processing, then the theory of computation can account for two computations occurring at once, namely (1) the computation for universal programmability (which is the function instantiated by the person and note-taking materials independently from any particular program contents) and (2) the computation of the Turing machine that is described by the program (which is instantiated by everything including the specific program).[63] The theory of computation thus formally explains the open possibility that the second computation in the Chinese Room could entail a human-equivalent semantic understanding of the Chinese inputs. The focus belongs on the program's Turing machine rather than on the person's.[64] However, from Searle's perspective, this argument is circular. The question at issue is whether consciousness is a form of information processing, and this reply requires that we make that assumption. More sophisticated versions of the systems reply try to identify more precisely what "the system" is, and they differ in exactly how they describe it. According to these replies,[who?] the "mind that speaks Chinese" could be such things as: the "software", a "program", a "running program", a simulation of the "neural correlates of consciousness", the "functional system", a "simulated mind", an "emergent property", or "a virtual mind". Marvin Minsky suggested a version of the system reply known as the "virtual mind reply".[o] The term "virtual" is used in computer science to describe an object that appears to exist "in" a computer (or computer network) only because software makes it appear to exist.
The objects "inside" computers (including files, folders, and so on) are all "virtual", except for the computer's electronic components. Similarly, Minsky suggests that a computer may contain a "mind" that is virtual in the same sense as virtual machines, virtual communities and virtual reality. To clarify the distinction between the simple systems reply given above and the virtual mind reply, David Cole notes that two simulations could be running on one system at the same time: one speaking Chinese and one speaking Korean. While there is only one system, there can be multiple "virtual minds", thus the "system" cannot be the "mind".[68] Searle responds that such a mind is at best a simulation, and writes: "No one supposes that computer simulations of a five-alarm fire will burn the neighborhood down or that a computer simulation of a rainstorm will leave us all drenched."[69] Nicholas Fearn responds that, for some things, simulation is as good as the real thing. "When we call up the pocket calculator function on a desktop computer, the image of a pocket calculator appears on the screen. We don't complain that it isn't really a calculator, because the physical attributes of the device do not matter."[70] The question is: is the human mind like the pocket calculator, essentially composed of information, where a perfect simulation of the thing just is the thing? Or is the mind like the rainstorm, a thing in the world that is more than just its simulation, and not realizable in full by a computer simulation? For decades, this question of simulation has led AI researchers and philosophers to consider whether the term "synthetic intelligence" is more appropriate than the common description of such intelligences as "artificial". These replies provide an explanation of exactly who it is that understands Chinese.
If there is something besides the man in the room that can understand Chinese, Searle cannot argue that (1) the man does not understand Chinese, therefore (2) nothing in the room understands Chinese. This, according to those who make this reply, shows that Searle's argument fails to prove that "strong AI" is false.[p] These replies, by themselves, do not provide any evidence that strong AI is true, however. They do not show that the system (or the virtual mind) understands Chinese, beyond the hypothetical premise that it passes the Turing test. Searle argues that, if we are to consider strong AI remotely plausible, the Chinese Room is an example that requires explanation, and it is difficult or impossible to explain how consciousness might "emerge" from the room or how the system would have consciousness. As Searle writes, "the systems reply simply begs the question by insisting that the system must understand Chinese",[29] and thus is dodging the question or hopelessly circular. As far as the person in the room is concerned, the symbols are just meaningless "squiggles". But if the Chinese room really "understands" what it is saying, then the symbols must get their meaning from somewhere. These arguments attempt to connect the symbols to the things they symbolize. These replies address Searle's concerns about intentionality, symbol grounding and syntax vs. semantics. Suppose that instead of a room, the program was placed into a robot that could wander around and interact with its environment.
This would allow a "causal connection" between the symbols and the things they represent.[72][q] Hans Moravec comments: "If we could graft a robot to a reasoning program, we wouldn't need a person to provide the meaning anymore: it would come from the physical world."[74][r] Searle's reply is to suppose that, unbeknownst to the individual in the Chinese room, some of the inputs came directly from a camera mounted on a robot, and some of the outputs were used to manipulate the arms and legs of the robot. Nevertheless, the person in the room is still just following the rules, and does not know what the symbols mean. Searle writes "he doesn't see what comes into the robot's eyes."[76] Some respond that the room, as Searle describes it, is connected to the world: through the Chinese speakers that it is "talking" to and through the programmers who designed the knowledge base in his file cabinet. The symbols Searle manipulates are already meaningful, they are just not meaningful to him.[77][s] Searle says that the symbols only have a "derived" meaning, like the meaning of words in books. The meaning of the symbols depends on the conscious understanding of the Chinese speakers and the programmers outside the room. The room, like a book, has no understanding of its own.[t] Some have argued that the meanings of the symbols would come from a vast "background" of commonsense knowledge encoded in the program and the filing cabinets. This would provide a "context" that would give the symbols their meaning.[75][u] Searle agrees that this background exists, but he does not agree that it can be built into programs. Hubert Dreyfus has also criticized the idea that the "background" can be represented symbolically.[80] To each of these suggestions, Searle's response is the same: no matter how much knowledge is written into the program and no matter how the program is connected to the world, he is still in the room manipulating symbols according to rules.
His actions are syntactic, and this can never explain to him what the symbols stand for. Searle writes "syntax is insufficient for semantics."[81][v] However, for those who accept that Searle's actions simulate a mind separate from his own, the important question is not what the symbols mean to Searle; what is important is what they mean to the virtual mind. While Searle is trapped in the room, the virtual mind is not: it is connected to the outside world through the Chinese speakers it speaks to, through the programmers who gave it world knowledge, and through the cameras and other sensors that roboticists can supply. These arguments are all versions of the systems reply that identify a particular kind of system as being important; they identify some special technology that would create conscious understanding in a machine. (The "robot" and "commonsense knowledge" replies above also specify a certain kind of system as being important.) Suppose that the program simulated in fine detail the action of every neuron in the brain of a Chinese speaker.[83][w] This strengthens the intuition that there would be no significant difference between the operation of the program and the operation of a live human brain. Searle replies that such a simulation does not reproduce the important features of the brain—its causal and intentional states. He is adamant that "human mental phenomena [are] dependent on actual physical–chemical properties of actual human brains."[26] Moreover, he argues: [I]magine that instead of a monolingual man in a room shuffling symbols we have the man operate an elaborate set of water pipes with valves connecting them. When the man receives the Chinese symbols, he looks up in the program, written in English, which valves he has to turn on and off.
Each water connection corresponds to a synapse in the Chinese brain, and the whole system is rigged up so that after doing all the right firings, that is after turning on all the right faucets, the Chinese answers pop out at the output end of the series of pipes. Now where is the understanding in this system? It takes Chinese as input, it simulates the formal structure of the synapses of the Chinese brain, and it gives Chinese as output. But the man certainly doesn't understand Chinese, and neither do the water pipes, and if we are tempted to adopt what I think is the absurd view that somehow the conjunction of man and water pipes understands, remember that in principle the man can internalize the formal structure of the water pipes and do all the "neuron firings" in his imagination.[85] What if we ask each citizen of China to simulate one neuron, using the telephone system to simulate the connections between axons and dendrites? In this version, it seems obvious that no individual would have any understanding of what the brain might be saying.[86][x] It is also obvious that this system would be functionally equivalent to a brain, so if consciousness is a function, this system would be conscious. In another scenario, we are asked to imagine that engineers have invented a tiny computer that simulates the action of an individual neuron. What would happen if we replaced one neuron at a time? Replacing one would clearly do nothing to change conscious awareness. Replacing all of them would create a digital computer that simulates a brain. If Searle is right, then conscious awareness must disappear during the procedure (either gradually or all at once). Searle's critics argue that there would be no point during the procedure when he can claim that conscious awareness ends and mindless simulation begins.[88][y][z] (See Ship of Theseus for a similar thought experiment.)
These arguments (and the robot or commonsense knowledge replies) identify some special technology that would help create conscious understanding in a machine. They may be interpreted in two ways: either they claim (1) this technology is required for consciousness, the Chinese room does not or cannot implement this technology, and therefore the Chinese room cannot pass the Turing test or (even if it did) it would not have conscious understanding. Or they may be claiming that (2) it is easier to see that the Chinese room has a mind if we visualize this technology as being used to create it. In the first case, where features like a robot body or a connectionist architecture are required, Searle claims that strong AI (as he understands it) has been abandoned.[ac] The Chinese room has all the elements of a Turing complete machine, and thus is capable of simulating any digital computation whatsoever. If Searle's room cannot pass the Turing test, then there is no other digital technology that could pass the Turing test. If Searle's room could pass the Turing test, but still does not have a mind, then the Turing test is not sufficient to determine if the room has a "mind". Either way, it denies one or the other of the positions Searle thinks of as "strong AI", proving his argument. The brain arguments in particular deny strong AI if they assume that there is no simpler way to describe the mind than to create a program that is just as mysterious as the brain was. He writes "I thought the whole idea of strong AI was that we don't need to know how the brain works to know how the mind works."[27] If computation does not provide an explanation of the human mind, then strong AI has failed, according to Searle. Other critics hold that the room as Searle described it does, in fact, have a mind; however, they argue that it is difficult to see—Searle's description is correct, but misleading. By redesigning the room more realistically, they hope to make this more obvious.
In this case, these arguments are being used as appeals to intuition (see next section). In fact, the room can just as easily be redesigned to weaken our intuitions. Ned Block's Blockhead argument[94] suggests that the program could, in theory, be rewritten into a simple lookup table of rules of the form "if the user writes S, reply with P and goto X". At least in principle, any program can be rewritten (or "refactored") into this form, even a brain simulation.[ad] In the blockhead scenario, the entire mental state is hidden in the letter X, which represents a memory address—a number associated with the next rule. It is hard to visualize that an instant of one's conscious experience can be captured in a single large number, yet this is exactly what "strong AI" claims. On the other hand, such a lookup table would be ridiculously large (to the point of being physically impossible), and the states could therefore be overly specific. Searle argues that however the program is written or however the machine is connected to the world, the mind is being simulated by a simple step-by-step digital machine (or machines). These machines are always just like the man in the room: they understand nothing and do not speak Chinese. They are merely manipulating symbols without knowing what they mean. Searle writes: "I can have any formal program you like, but I still understand nothing."[95] The following arguments (and the intuitive interpretations of the arguments above) do not directly explain how a Chinese speaking mind could exist in Searle's room, or how the symbols he manipulates could become meaningful. However, by raising doubts about Searle's intuitions they support other positions, such as the system and robot replies. These arguments, if accepted, prevent Searle from claiming that his conclusion is obvious by undermining the intuitions that his certainty requires. Several critics believe that Searle's argument relies entirely on intuitions.
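Block's lookup-table program is easy to sketch concretely. The following toy dialogue table is invented purely for illustration (a table covering a real Turing-test conversation would be astronomically large): each rule has the form "if the user writes S, reply with P and goto X", and the single integer X carries the system's entire "mental state".

```python
# A toy Blockhead-style program: all behaviour is a lookup table of rules
# "if the user writes S, reply with P and goto X".  The rules below are
# invented for illustration only.
TABLE = {
    0: {"hello": ("hi there", 1)},
    1: {"how are you?": ("fine, thanks", 2)},
    2: {"goodbye": ("bye", 0)},
}

def blockhead(state, user_input):
    # The whole "mental state" is the single integer `state`.
    reply, next_state = TABLE[state][user_input]
    return reply, next_state
```

However large the table grows, each step remains a single lookup; Block's point is that nothing in such a device looks like understanding, even though in principle it could match the behaviour of any program.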
Block writes "Searle's argument depends for its force on intuitions that certain entities do not think."[96] Daniel Dennett describes the Chinese room argument as a misleading "intuition pump"[97] and writes "Searle's thought experiment depends, illicitly, on your imagining too simple a case, an irrelevant case, and drawing the obvious conclusion from it."[97] Some of the arguments above also function as appeals to intuition, especially those that are intended to make it seem more plausible that the Chinese room contains a mind, which can include the robot, commonsense knowledge, brain simulation and connectionist replies. Several of the replies above also address the specific issue of complexity. The connectionist reply emphasizes that a working artificial intelligence system would have to be as complex and as interconnected as the human brain. The commonsense knowledge reply emphasizes that any program that passed a Turing test would have to be "an extraordinarily supple, sophisticated, and multilayered system, brimming with 'world knowledge' and meta-knowledge and meta-meta-knowledge", as Daniel Dennett explains.[79] Many of these critiques emphasize the speed and complexity of the human brain,[ae] which processes information at 100 billion operations per second (by some estimates).[99] Several critics point out that the man in the room would probably take millions of years to respond to a simple question, and would require "filing cabinets" of astronomical proportions.[100] This brings the clarity of Searle's intuition into doubt. An especially vivid version of the speed and complexity reply is from Paul and Patricia Churchland. They propose this analogous thought experiment: "Consider a dark room containing a man holding a bar magnet or charged object. If the man pumps the magnet up and down, then, according to Maxwell's theory of artificial luminance (AL), it will initiate a spreading circle of electromagnetic waves and will thus be luminous.
But as all of us who have toyed with magnets or charged balls well know, their forces (or any other forces for that matter), even when set in motion, produce no luminance at all. It is inconceivable that you might constitute real luminance just by moving forces around!"[87] The Churchlands' point is that the only problem is one of scale: the man would have to wave the magnet up and down something like 450 trillion times per second in order to see anything.[101] Stevan Harnad is critical of speed and complexity replies when they stray beyond addressing our intuitions. He writes: "Some have made a cult of speed and timing, holding that, when accelerated to the right speed, the computational may make a phase transition into the mental. It should be clear that is not a counterargument but merely an ad hoc speculation (as is the view that it is all just a matter of ratcheting up to the right degree of 'complexity.')"[102][af] Searle argues that his critics are also relying on intuitions; however, his opponents' intuitions have no empirical basis. He writes that, in order to consider the "system reply" as remotely plausible, a person must be "under the grip of an ideology".[29] The system reply only makes sense (to Searle) if one assumes that any "system" can have consciousness, just by virtue of being a system with the right behavior and functional parts. This assumption, he argues, is not tenable given our experience of consciousness. Several replies argue that Searle's argument is irrelevant because his assumptions about the mind and consciousness are faulty. Searle believes that human beings directly experience their consciousness, intentionality and the nature of the mind every day, and that this experience of consciousness is not open to question. He writes that we must "presuppose the reality and knowability of the mental."[105] The replies below question whether Searle is justified in using his own experience of consciousness to determine that it is more than mechanical symbol processing.
In particular, the other minds reply argues that we cannot use our experience of consciousness to answer questions about other minds (even the mind of a computer); the epiphenomena replies question whether we can make any argument at all about something like consciousness which cannot, by definition, be detected by any experiment; and the eliminative materialist reply argues that Searle's own personal consciousness does not "exist" in the sense that Searle thinks it does. The "Other Minds Reply" points out that Searle's argument is a version of the problem of other minds, applied to machines. There is no way we can determine if other people's subjective experience is the same as our own. We can only study their behavior (i.e., by giving them our own Turing test). Critics of Searle argue that he is holding the Chinese room to a higher standard than we would hold an ordinary person.[106][ag] Nils Nilsson writes: "If a program behaves as if it were multiplying, most of us would say that it is, in fact, multiplying. For all I know, Searle may only be behaving as if he were thinking deeply about these matters. But, even though I disagree with him, his simulation is pretty good, so I'm willing to credit him with real thought."[108] Turing anticipated Searle's line of argument (which he called "The Argument from Consciousness") in 1950 and made the other minds reply.[109] He noted that people never consider the problem of other minds when dealing with each other. He writes that "instead of arguing continually over this point it is usual to have the polite convention that everyone thinks."[110] The Turing test simply extends this "polite convention" to machines. He did not intend to solve the problem of other minds (for machines or people) and he did not think we need to.[ah] If we accept Searle's description of intentionality, consciousness, and the mind, we are forced to accept that consciousness is epiphenomenal: that it "casts no shadow", i.e.
is undetectable in the outside world. Searle's "causal properties" cannot be detected by anyone outside the mind, otherwise the Chinese Room could not pass the Turing test—the people outside would be able to tell there was not a Chinese speaker in the room by detecting their causal properties. Since they cannot detect causal properties, they cannot detect the existence of the mental. Thus, Searle's "causal properties" and consciousness itself are undetectable, and anything that cannot be detected either does not exist or does not matter. Mike Alder calls this the "Newton's Flaming Laser Sword Reply". He argues that the entire argument is frivolous, because it is non-verificationist: not only is the distinction between simulating a mind and having a mind ill-defined, but it is also irrelevant because no experiments were, or even can be, proposed to distinguish between the two.[112] Daniel Dennett provides this illustration: suppose that, by some mutation, a human being is born that does not have Searle's "causal properties" but nevertheless acts exactly like a human being. This is a philosophical zombie, as formulated in the philosophy of mind. This new animal would reproduce just as any other human and eventually there would be more of these zombies. Natural selection would favor the zombies, since their design is (we could suppose) a bit simpler. Eventually the humans would die out. So therefore, if Searle is right, it is most likely that human beings (as we see them today) are actually "zombies", who nevertheless insist they are conscious. It is impossible to know whether we are all zombies or not.
Even if we are all zombies, we would still believe that we are not.[113] Several philosophers argue that consciousness, as Searle describes it, does not exist. Daniel Dennett describes consciousness as a "user illusion".[114] This position is sometimes referred to as eliminative materialism: the view that consciousness is not a concept that can "enjoy reduction" to a strictly mechanical description, but rather is a concept that will be simply eliminated once the way the material brain works is fully understood, in just the same way as the concept of a demon has already been eliminated from science rather than enjoying reduction to a strictly mechanical description. Other mental properties, such as original intentionality (also called "meaning", "content", and "semantic character"), are also commonly regarded as special properties related to beliefs and other propositional attitudes. Eliminative materialism maintains that propositional attitudes such as beliefs and desires, among other intentional mental states that have content, do not exist. If eliminative materialism is the correct scientific account of human cognition, then the assumption of the Chinese room argument that "minds have mental contents (semantics)" must be rejected.[115] Searle disagrees with this analysis and argues that "the study of the mind starts with such facts as that humans have beliefs, while thermostats, telephones, and adding machines don't ... what we wanted to know is what distinguishes the mind from thermostats and livers."[76] He takes it as obvious that we can detect the presence of consciousness and dismisses these replies as being off the point. Margaret Boden argued in her paper "Escaping from the Chinese Room" that even if the person in the room does not understand the Chinese, it does not mean there is no understanding in the room. The person in the room at least understands the rule book used to provide output responses.
She then points out that the same applies to machine languages: a natural language sentence is understood via the programming language code that instantiates it, which in turn is understood via the lower-level compiler code, and so on. This implies that the distinction between syntax and semantics is not fixed, as Searle presupposes, but relative: the semantics of natural language is realized in the syntax of a programming language, which in turn has a semantics that is realized in the syntax of compiler code. Searle's error, on this view, is to assume a binary notion of understanding (either present or absent) rather than a graded one, in which each system in the hierarchy understands a little less than the one above it.[116] Searle's conclusion that "human mental phenomena [are] dependent on actual physical–chemical properties of actual human brains"[26] has sometimes been described as a form of "carbon chauvinism".[117] Steven Pinker suggested that a response to that conclusion would be to make a counter thought experiment to the Chinese Room, where the incredulity goes the other way.[118] He gives as an example the short story They're Made Out of Meat, which depicts an alien race of electronic beings who, upon finding Earth, express disbelief that the meat brains of humans can experience consciousness and thought.[119] However, Searle himself denied being a "carbon chauvinist".[120] He said: "I have not tried to show that only biological based systems like our brains can think. [...] I regard this issue as up for grabs".[121] He said that even silicon machines could theoretically have human-like consciousness and thought, if the actual physical–chemical properties of silicon could be used in a way that can produce consciousness and thought, but "until we know how the brain does it we are not in a position to try to do it artificially".[122]
https://en.wikipedia.org/wiki/Chinese_room
The Game of Life, also known as Conway's Game of Life or simply Life, is a cellular automaton devised by the British mathematician John Horton Conway in 1970.[1] It is a zero-player game,[2][3] meaning that its evolution is determined by its initial state, requiring no further input. One interacts with the Game of Life by creating an initial configuration and observing how it evolves. It is Turing complete and can simulate a universal constructor or any other Turing machine. The universe of the Game of Life is an infinite, two-dimensional orthogonal grid of square cells, each of which is in one of two possible states, live or dead (or populated and unpopulated, respectively). Every cell interacts with its eight neighbours, which are the cells that are horizontally, vertically, or diagonally adjacent. At each step in time, the following transitions occur: any live cell with fewer than two live neighbours dies (underpopulation); any live cell with two or three live neighbours survives to the next generation; any live cell with more than three live neighbours dies (overpopulation); and any dead cell with exactly three live neighbours becomes a live cell (reproduction). The initial pattern constitutes the seed of the system. The first generation is created by applying the above rules simultaneously to every cell in the seed, live or dead; births and deaths occur simultaneously, and the discrete moment at which this happens is sometimes called a tick.[nb 1] Each generation is a pure function of the preceding one. The rules continue to be applied repeatedly to create further generations. Stanisław Ulam, while working at the Los Alamos National Laboratory in the 1940s, studied the growth of crystals, using a simple lattice network as his model.[7] At the same time, John von Neumann, Ulam's colleague at Los Alamos, was working on the problem of self-replicating systems.[8]: 1 Von Neumann's initial design was founded upon the notion of one robot building another robot. This design is known as the kinematic model.[9][10] As he developed this design, von Neumann came to realize the great difficulty of building a self-replicating robot, and of the great cost in providing the robot with a "sea of parts" from which to build its replicant.
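The transition rules can be expressed as a single pure function of a cell's current state and its live-neighbour count. The following Python sketch simply restates the rules above:

```python
def next_state(alive: bool, live_neighbours: int) -> bool:
    """Standard Game of Life (B3/S23) transition for a single cell."""
    if alive:
        # Survival: a live cell needs exactly two or three live neighbours.
        return live_neighbours in (2, 3)
    # Birth: a dead cell with exactly three live neighbours comes alive.
    return live_neighbours == 3
```

Applying this function simultaneously to every cell of the grid produces the next generation.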
Von Neumann wrote a paper entitled "The general and logical theory of automata" for the Hixon Symposium in 1948.[11] Ulam was the one who suggested using a discrete system for creating a reductionist model of self-replication.[8]: 3[12]: xxix Ulam and von Neumann created a method for calculating liquid motion in the late 1950s. The driving concept of the method was to consider a liquid as a group of discrete units and calculate the motion of each based on its neighbours' behaviours.[13]: 8 Thus was born the first system of cellular automata. Like Ulam's lattice network, von Neumann's cellular automata are two-dimensional, with his self-replicator implemented algorithmically. The result was a universal copier and constructor working within a cellular automaton with a small neighbourhood (only those cells that touch are neighbours; for von Neumann's cellular automata, only orthogonal cells), and with 29 states per cell. Von Neumann gave an existence proof that a particular pattern would make endless copies of itself within the given cellular universe by designing a 200,000 cell configuration that could do so. This design is known as the tessellation model, and is called a von Neumann universal constructor.[14] Motivated by questions in mathematical logic and in part by work on simulation games by Ulam, among others, John Conway began doing experiments in 1968 with a variety of different two-dimensional cellular automaton rules. Conway's initial goal was to define an interesting and unpredictable cellular automaton.[3] According to Martin Gardner, Conway experimented with different rules, aiming for rules that would allow patterns to "apparently" grow without limit, while keeping it difficult to prove that any given pattern would do so.
Moreover, some "simple initial patterns" should "grow and change for a considerable period of time" before settling into a static configuration or a repeating loop.[1] Conway later wrote that the basic motivation for Life was to create a "universal" cellular automaton.[15][better source needed] The game made its first public appearance in the October 1970 issue of Scientific American, in Martin Gardner's "Mathematical Games" column, which was based on personal conversations with Conway. Theoretically, the Game of Life has the power of a universal Turing machine: anything that can be computed algorithmically can be computed within the Game of Life.[16][2] Gardner wrote, "Because of Life's analogies with the rise, fall, and alterations of a society of living organisms, it belongs to a growing class of what are called 'simulation games' (games that resemble real-life processes)."[1] Since its publication, the Game of Life has attracted much interest because of the surprising ways in which the patterns can evolve. It provides an example of emergence and self-organization.[3] A version of Life that incorporates random fluctuations has been used in physics to study phase transitions and nonequilibrium dynamics.[17] The game can also serve as a didactic analogy, used to convey the somewhat counter-intuitive notion that design and organization can spontaneously emerge in the absence of a designer. For example, philosopher Daniel Dennett has used the analogy of the Game of Life "universe" extensively to illustrate the possible evolution of complex philosophical constructs, such as consciousness and free will, from the relatively simple set of deterministic physical laws which might govern our universe.[18][19][20] The popularity of the Game of Life was helped by its coming into being at the same time as increasingly inexpensive computer access. The game could be run for hours on these machines, which would otherwise have remained unused at night.
In this respect, it foreshadowed the later popularity of computer-generated fractals. For many, the Game of Life was simply a programming challenge: a fun way to use otherwise wasted CPU cycles. For some, however, the Game of Life had more philosophical connotations. It developed a cult following through the 1970s and beyond; current developments have gone so far as to create theoretic emulations of computer systems within the confines of a Game of Life board.[21][22] Many different types of patterns occur in the Game of Life, which are classified according to their behaviour. Common pattern types include: still lifes, which do not change from one generation to the next; oscillators, which return to their initial state after a finite number of generations; and spaceships, which translate themselves across the grid. The earliest interesting patterns in the Game of Life were discovered without the use of computers. The simplest still lifes and oscillators were discovered while tracking the fates of various small starting configurations using graph paper, blackboards, and physical game boards, such as those used in Go. During this early research, Conway discovered that the R-pentomino failed to stabilize in a small number of generations. In fact, it takes 1103 generations to stabilize, by which time it has a population of 116 and has generated six escaping gliders;[23] these were the first spaceships ever discovered.[24] Frequently occurring[25][26] examples (in that they emerge frequently from a random starting configuration of cells) of the three aforementioned pattern types are shown below, with live cells shown in black and dead cells in white. Period refers to the number of ticks a pattern must iterate through before returning to its initial configuration. The pulsar[27] is the most common period-3 oscillator.
The great majority of naturally occurring oscillators have a period of 2, like the blinker and the toad, but oscillators of all periods are known to exist,[28][29][30] and oscillators of periods 4, 8, 14, 15, 30, and a few others have been seen to arise from random initial conditions.[31] Patterns which evolve for long periods before stabilizing are called Methuselahs, the first-discovered of which was the R-pentomino. Diehard is a pattern that disappears after 130 generations. Starting patterns of eight or more cells can be made to die after an arbitrarily long time.[32] Acorn takes 5,206 generations to generate 633 cells, including 13 escaped gliders.[33] Conway originally conjectured that no pattern can grow indefinitely—i.e. that for any initial configuration with a finite number of living cells, the population cannot grow beyond some finite upper limit. In the game's original appearance in "Mathematical Games", Conway offered a prize of fifty dollars (equivalent to $400 in 2024) to the first person who could prove or disprove the conjecture before the end of 1970. The prize was won in November by a team from the Massachusetts Institute of Technology, led by Bill Gosper; the "Gosper glider gun" produces its first glider on the 15th generation, and another glider every 30th generation from then on. For many years, this glider gun was the smallest one known.[34] In 2015, a gun called the "Simkin glider gun", which releases a glider every 120th generation, was discovered; it has fewer live cells but is spread out across a larger bounding box at its extremities.[35] Smaller patterns were later found that also exhibit infinite growth. All three of the patterns shown below grow indefinitely. The first two create a single block-laying switch engine: a configuration that leaves behind two-by-two still life blocks as it translates itself across the game's universe.[36] The third configuration creates two such patterns.
The first has only ten live cells, which has been proven to be minimal.[37] The second fits in a five-by-five square, and the third is only one cell high. Later discoveries included other guns, which are stationary and produce gliders or other spaceships; puffer trains, which move along leaving behind a trail of debris; and rakes, which move and emit spaceships.[38] Gosper also constructed the first pattern with an asymptotically optimal quadratic growth rate, called a breeder or lobster, which worked by leaving behind a trail of guns. It is possible for gliders to interact with other objects in interesting ways. For example, if two gliders are shot at a block in a specific position, the block will move closer to the source of the gliders. If three gliders are shot in just the right way, the block will move farther away. This sliding block memory can be used to simulate a counter. It is possible to construct logic gates such as AND, OR, and NOT using gliders. It is possible to build a pattern that acts like a finite-state machine connected to two counters. This has the same computational power as a universal Turing machine, so the Game of Life is theoretically as powerful as any computer with unlimited memory and no time constraints; it is Turing complete.[16][2] In fact, several different programmable computer architectures[39][40] have been implemented in the Game of Life, including a pattern that simulates Tetris.[41] Until the 2010s, all known spaceships could only move orthogonally or diagonally. Spaceships which move neither orthogonally nor diagonally are commonly referred to as oblique spaceships.[42][43] On May 18, 2010, Andrew J. Wade announced the first oblique spaceship, dubbed "Gemini", which creates a copy of itself offset by (5,1) while destroying its parent.[44][43] This pattern replicates in 34 million generations, and uses an instruction tape made of gliders oscillating between two stable configurations made of Chapman–Greene construction arms.
These, in turn, create new copies of the pattern, and destroy the previous copy. In December 2015, diagonal versions of the Gemini were built.[45] A more specific case is a knightship, a spaceship that moves two squares left for every one square it moves down (like a knight in chess), whose existence had been predicted by Elwyn Berlekamp since 1982. The first elementary knightship, Sir Robin, was discovered in 2018 by Adam P. Goucher.[46] This was the first new movement pattern for an elementary spaceship found in forty-eight years. "Elementary" means that it cannot be decomposed into smaller interacting patterns such as gliders and still lifes.[47] A pattern can contain a collection of guns that fire gliders in such a way as to construct new objects, including copies of the original pattern. A universal constructor can be built which contains a Turing complete computer, and which can build many types of complex objects, including more copies of itself.[2] On November 23, 2013, Dave Greene built the first replicator in the Game of Life that creates a complete copy of itself, including the instruction tape.[48] In October 2018, Adam P. Goucher finished his construction of the 0E0P metacell, a metacell capable of self-replication. This differed from previous metacells, such as the OTCA metapixel by Brice Due, which only worked with already constructed copies near them.
The 0E0P metacell works by using construction arms to create copies that simulate the programmed rule.[49] The actual simulation of the Game of Life or other Moore neighbourhood rules is done by simulating an equivalent rule using the von Neumann neighbourhood with more states.[50] The name 0E0P is short for "Zero Encoded by Zero Population", which indicates that instead of a metacell being in an "off" state simulating empty space, the 0E0P metacell removes itself when the cell enters that state, leaving a blank space.[51] Many patterns in the Game of Life eventually become a combination of still lifes, oscillators, and spaceships; other patterns may be called chaotic. A pattern may stay chaotic for a very long time until it eventually settles to such a combination. The Game of Life is undecidable, which means that given an initial pattern and a later pattern, no algorithm exists that can tell whether the later pattern is ever going to appear. Given that the Game of Life is Turing-complete, this is a corollary of the halting problem: the problem of determining whether a given program will finish running or continue to run forever from an initial input.[2] From most random initial patterns of living cells on the grid, observers will find the population constantly changing as the generations tick by. The patterns that emerge from the simple rules may be considered a form of mathematical beauty. Small isolated subpatterns with no initial symmetry tend to become symmetrical. Once this happens, the symmetry may increase in richness, but it cannot be lost unless a nearby subpattern comes close enough to disturb it. In a very few cases, the society eventually dies out, with all living cells vanishing, though this may not happen for a great many generations.
Most initial patterns eventually burn out, producing either stable figures or patterns that oscillate forever between two or more states;[52][53] many also produce one or more gliders or spaceships that travel indefinitely away from the initial location. Because of the nearest-neighbour based rules, no information can travel through the grid at a greater rate than one cell per unit time, so this velocity is said to be the cellular automaton speed of light and denoted c. Early patterns with unknown futures, such as the R-pentomino, led computer programmers to write programs to track the evolution of patterns in the Game of Life. Most of the early algorithms were similar: they represented the patterns as two-dimensional arrays in computer memory. Typically, two arrays are used: one to hold the current generation, and one to calculate its successor. Often 0 and 1 represent dead and live cells, respectively. A nested for loop considers each element of the current array in turn, counting the live neighbours of each cell to decide whether the corresponding element of the successor array should be 0 or 1. The successor array is displayed. For the next iteration, the arrays may swap roles so that the successor array in the last iteration becomes the current array in the next iteration, or one may copy the values of the second array into the first array and then update the second array from the first array again. A variety of minor enhancements to this basic scheme are possible, and there are many ways to save unnecessary computation.
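The two-array, nested-loop scheme just described can be sketched in a few lines of Python (a minimal illustration, not an optimized implementation; cells outside the array are treated as dead):

```python
def step(grid):
    """One generation of the naive two-array algorithm.
    `grid` is a list of lists of 0/1; everything outside is treated as dead."""
    rows, cols = len(grid), len(grid[0])
    successor = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count the live cells among the (up to) eight neighbours.
            n = sum(grid[r + dr][c + dc]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0)
                    and 0 <= r + dr < rows and 0 <= c + dc < cols)
            # Birth on 3 neighbours; survival on 2 or 3.
            successor[r][c] = 1 if n == 3 or (grid[r][c] and n == 2) else 0
    return successor
```

Calling `step` repeatedly, swapping the roles of the two arrays between iterations, reproduces the generation-by-generation evolution.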
A cell that did not change at the last time step, and none of whose neighbours changed, is guaranteed not to change at the current time step as well, so a program that keeps track of which areas are active can save time by not updating inactive zones.[54] To avoid decisions and branches in the counting loop, the rules can be rearranged from an egocentric approach of the inner field regarding its neighbours to a scientific observer's viewpoint: if the sum of all nine fields in a given neighbourhood is three, the inner field state for the next generation will be life; if the all-field sum is four, the inner field retains its current state; and every other sum sets the inner field to death. To save memory, the storage can be reduced to one array plus two line buffers. One line buffer is used to calculate the successor state for a line, then the second line buffer is used to calculate the successor state for the next line. The first buffer is then written to its line and freed to hold the successor state for the third line. If a toroidal array is used, a third buffer is needed so that the original state of the first line in the array can be saved until the last line is computed. In principle, the Game of Life field is infinite, but computers have finite memory. This leads to problems when the active area encroaches on the border of the array. Programmers have used several strategies to address these problems. The simplest strategy is to assume that every cell outside the array is dead. This is easy to program but leads to inaccurate results when the active area crosses the boundary. A more sophisticated trick is to consider the left and right edges of the field to be stitched together, and the top and bottom edges also, yielding a toroidal array. The result is that active areas that move across a field edge reappear at the opposite edge. Inaccuracy can still result if the pattern grows too large, but there are no pathological edge effects.
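Two of these ideas—the branch-free nine-field sum rule and the toroidal edge-stitching—combine naturally in an array-language sketch. The following NumPy version (an illustration, assuming a 0/1 integer grid) uses wrap-around shifts so that edges are stitched together, and applies the observer's rule directly: a nine-cell sum of three gives life, a sum of four keeps the current state, and any other sum gives death.

```python
import numpy as np

def step(grid):
    """One toroidal Game of Life generation via the all-nine-fields sum rule:
    sum == 3 -> live, sum == 4 -> unchanged, any other sum -> dead."""
    # Sum the 3x3 neighbourhood (centre cell included), wrapping at the edges.
    s = sum(np.roll(np.roll(grid, dr, axis=0), dc, axis=1)
            for dr in (-1, 0, 1) for dc in (-1, 0, 1))
    return np.where(s == 3, 1, np.where(s == 4, grid, 0))
```

Note that the rule is expressed without any per-cell branching on the centre cell's state: a sum of three means either a dead cell with three live neighbours (birth) or a live cell with two (survival), and a sum of four means the centre keeps whatever state it has.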
Techniques of dynamic storage allocation may also be used, creating ever-larger arrays to hold growing patterns. The Game of Life on a finite field is sometimes explicitly studied; some implementations, such as Golly, support a choice of the standard infinite field, a field infinite only in one dimension, or a finite field, with a choice of topologies such as a cylinder, a torus, or a Möbius strip. Alternatively, programmers may abandon the notion of representing the Game of Life field with a two-dimensional array, and use a different data structure, such as a vector of coordinate pairs representing live cells. This allows the pattern to move about the field unhindered, as long as the population does not exceed the size of the live-coordinate array. The drawback is that counting live neighbours becomes a hash-table lookup or search operation, slowing down simulation speed. With more sophisticated data structures this problem can also be largely solved.[citation needed] For exploring large patterns at great time depths, sophisticated algorithms such as Hashlife may be useful. There is also a method for implementing the Game of Life and other cellular automata using arbitrary asynchronous updates while still exactly emulating the behaviour of the synchronous game.[55] Source code examples that implement the basic Game of Life scenario in various programming languages, including C, C++, Java and Python, can be found at Rosetta Code.[56] Since the Game of Life's inception, new, similar cellular automata have been developed. The standard Game of Life is symbolized in rule-string notation as B3/S23: a cell is born if it has exactly three neighbours, survives if it has two or three living neighbours, and dies otherwise. The first number, or list of numbers, is what is required for a dead cell to be born. The second set is the requirement for a live cell to survive to the next generation.
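This B/S notation can be parsed mechanically into two digit sets and applied to any cell. The following sketch is illustrative (the helper names are not from any particular implementation):

```python
def parse_rule(rule):
    """Parse a rule string like 'B3/S23' into (birth, survival) digit sets.
    For example, 'B36/S23' is HighLife and 'B2/S' is Seeds."""
    birth_part, survival_part = rule.upper().split("/")
    birth = {int(d) for d in birth_part[1:]}        # digits after 'B'
    survival = {int(d) for d in survival_part[1:]}  # digits after 'S'
    return birth, survival

def apply_rule(rule, alive, live_neighbours):
    """Next state of one cell under the given Life-like rule."""
    birth, survival = parse_rule(rule)
    return live_neighbours in (survival if alive else birth)
```

With `rule="B3/S23"` this reproduces the standard game; swapping in another rule string changes the automaton without touching the simulation loop.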
Hence B6/S16 means "a cell is born if there are six neighbours, and lives on if there are either one or six neighbours". Cellular automata on a two-dimensional grid that can be described in this way are known as Life-like cellular automata. Another common Life-like automaton, HighLife, is described by the rule B36/S23, because having six neighbours, in addition to the original game's B3/S23 rule, causes a birth. HighLife is best known for its frequently occurring replicators.[57][58] Additional Life-like cellular automata exist. The vast majority of these 2^18 different rules[59] produce universes that are either too chaotic or too desolate to be of interest, but a large subset do display interesting behaviour. A further generalization produces the isotropic rulespace, with 2^102 possible cellular automaton rules[60] (the Game of Life again being one of them). These are rules that use the same square grid as the Life-like rules and the same eight-cell neighbourhood, and are likewise invariant under rotation and reflection. However, in isotropic rules, the positions of neighbour cells relative to each other may be taken into account in determining a cell's future state, not just the total number of those neighbours. Some variations on the Game of Life modify the geometry of the universe as well as the rules. The above variations can be thought of as a two-dimensional square, because the world is two-dimensional and laid out in a square grid. One-dimensional square variations, known as elementary cellular automata,[61] and three-dimensional square variations have been developed, as have two-dimensional hexagonal and triangular variations. A variant using aperiodic tiling grids has also been made.[62] Conway's rules may also be generalized such that instead of two states, live and dead, there are three or more.
State transitions are then determined either by a weighting system or by a table specifying separate transition rules for each state; for example, Mirek's Cellebration's multi-coloured Rules Table and Weighted Life rule families each include sample rules equivalent to the Game of Life. Patterns relating to fractals and fractal systems may also be observed in certain Life-like variations. For example, the automaton B1/S12 generates four very close approximations to the Sierpinski triangle when applied to a single live cell. The Sierpinski triangle can also be observed in the Game of Life by examining the long-term growth of an infinitely long single-cell-thick line of live cells,[63] as well as in HighLife, Seeds (B2/S), and Stephen Wolfram's Rule 90.[64] Immigration is a variation that is very similar to the Game of Life, except that there are two on states, often expressed as two different colours. Whenever a new cell is born, it takes on the on state that is the majority in the three cells that gave it birth. This feature can be used to examine interactions between spaceships and other objects within the game.[65] Another similar variation, called QuadLife, involves four different on states. When a new cell is born from three different on neighbours, it takes the fourth value; otherwise, like Immigration, it takes the majority value.[66] Except for the variation among on cells, both of these variations act identically to the Game of Life. Various musical composition techniques use the Game of Life, especially in MIDI sequencing.[67] A variety of programs exist for creating sound from patterns generated in the Game of Life.[68][69][70] Computers have been used to follow and simulate the Game of Life since it was first publicized. When John Conway was first investigating how various starting configurations developed, he tracked them by hand using a go board with its black and white stones. This was tedious and prone to errors.
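The colour-assignment rules of the Immigration and QuadLife variants described earlier reduce to a small majority computation over the three parent cells. A sketch, assuming colours are encoded as the integers 1–4:

```python
from collections import Counter

def immigration_birth(parent_colours):
    """Colour of a newborn cell in the Immigration variant: the
    majority colour among its three live parents (two colours total)."""
    assert len(parent_colours) == 3
    return Counter(parent_colours).most_common(1)[0][0]

def quadlife_birth(parent_colours):
    """Colour of a newborn cell in QuadLife with colours 1..4: the
    majority of the parents, or the fourth colour if all three differ."""
    assert len(parent_colours) == 3
    if len(set(parent_colours)) == 3:
        # All three parents differ: the newborn takes the missing colour.
        return ({1, 2, 3, 4} - set(parent_colours)).pop()
    return Counter(parent_colours).most_common(1)[0][0]
```

Since births always have exactly three parents, a strict majority (or a three-way tie, in QuadLife) always exists, so no further tie-breaking is needed.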
The first interactive Game of Life program was written in an early version of ALGOL 68C for the PDP-7 by M. J. T. Guy and S. R. Bourne. The results were published in the October 1970 issue of Scientific American, along with the statement: "Without its help, some discoveries about the game would have been difficult to make."[1] A color version of the Game of Life was written by Ed Hall in 1976 for Cromemco microcomputers, and a display from that program filled the cover of the June 1976 issue of Byte.[71] The advent of microcomputer-based color graphics from Cromemco has been credited with a revival of interest in the game.[72] Two early implementations of the Game of Life on home computers were by Malcolm Banthorpe, written in BBC BASIC. The first was in the January 1984 issue of Acorn User magazine, and Banthorpe followed this with a three-dimensional version in the May 1984 issue.[73] Susan Stepney, Professor of Computer Science at the University of York, followed this up in 1988 with Life on the Line, a program that generated one-dimensional cellular automata.[74] There are now thousands of Game of Life programs online, so a full list will not be provided here. The following is a small selection of programs with some special claim to notability, such as popularity or unusual features. Most of these programs incorporate a graphical user interface for pattern editing and simulation, the capability for simulating multiple rules including the Game of Life, and a large library of interesting patterns in the Game of Life and other cellular automaton rules. Google implemented an easter egg of the Game of Life in 2012. Users who search for the term are shown an implementation of the game in the search results page.[77] The visual novel Anonymous;Code includes a basic implementation of the Game of Life, which is connected to the plot of the novel.
Near the end of Anonymous;Code, a certain pattern that appears throughout the game as a tattoo on the heroine Momo Aizaki has to be entered into the Game of Life to complete the game (Kok's galaxy, the same pattern used as the logo for the open-source Game of Life program Golly).
https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life
The Emperor's New Mind: Concerning Computers, Minds and The Laws of Physics is a 1989 book by the mathematical physicist Roger Penrose. Penrose argues that human consciousness is non-algorithmic, and thus is not capable of being modeled by a conventional Turing machine, which includes a digital computer. Penrose hypothesizes that quantum mechanics plays an essential role in the understanding of human consciousness. The collapse of the quantum wavefunction is seen as playing an important role in brain function. Most of the book is spent reviewing, for the scientifically minded lay reader, a plethora of interrelated subjects such as Newtonian physics, special and general relativity, the philosophy and limitations of mathematics, quantum physics, cosmology, and the nature of time. Penrose intermittently describes how each of these bears on his developing theme: that consciousness is not "algorithmic". Only the later portions of the book address the thesis directly. Penrose states that his ideas on the nature of consciousness are speculative, and his thesis is considered erroneous by some experts in the fields of philosophy, computer science, and robotics.[1][2][3] The Emperor's New Mind attacks the claims of artificial intelligence using the physics of computing: Penrose notes that the present home of computing lies more in the tangible world of classical mechanics than in the imponderable realm of quantum mechanics. The modern computer is a deterministic system that for the most part simply executes algorithms. Penrose shows that, by reconfiguring the boundaries of a billiard table, one might make a computer in which the billiard balls act as message carriers and their interactions act as logical decisions. The billiard-ball computer was first designed some years ago by Edward Fredkin and Tommaso Toffoli of the Massachusetts Institute of Technology.
Following the publication of the book, Penrose began to collaborate with Stuart Hameroff on a biological analog to quantum computation involving microtubules, which became the foundation for his subsequent book, Shadows of the Mind: A Search for the Missing Science of Consciousness. Penrose won the Science Book Prize in 1990 for The Emperor's New Mind.[4] According to an article in the American Journal of Physics, Penrose incorrectly claims that a barrier far away from a localized particle can affect the particle.[5]
https://en.wikipedia.org/wiki/The_Emperor%27s_New_Mind
An enumerator is a Turing machine with an attached printer. The Turing machine can use that printer as an output device to print strings. Every time the Turing machine wants to add a string to the list, it sends the string to the printer. An enumerator is a Turing machine variant, equivalent in power to the standard Turing machine. An enumerator E can be defined as a 2-tape Turing machine (a multitape Turing machine with k = 2) whose language is ∅. Initially, E receives no input, and all the tapes are blank (i.e., filled with blank symbols). A newly defined symbol # (with # ∈ Γ and # ∉ Σ) is the delimiter that marks the end of an element of S. The second tape can be regarded as the printer; strings on it are separated by #. The language enumerated by an enumerator E, denoted L(E), is defined as the set of the strings on the second tape (the printer). A language over a finite alphabet is Turing-recognizable if and only if it can be enumerated by an enumerator. This shows that Turing-recognizable languages are also recursively enumerable. Proof: A Turing-recognizable language can be enumerated by an enumerator. Consider a Turing machine M, and let the language accepted by it be L(M). Since the set of all possible strings over the input alphabet Σ, i.e. the Kleene closure Σ*, is a countable set, we can enumerate the strings in it as s1, s2, …, si, etc. Then the enumerator enumerating the language L(M) will, for each stage k = 1, 2, 3, …, run M for k steps on each of the inputs s1, s2, …, sk, and print every string that M accepts within those k steps. Now the question comes whether every string in the language L(M) will be printed by the enumerator we constructed.
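The dovetailing construction in the proof can be sketched directly. Here `accepts_within(s, k)` is a hypothetical stand-in for "machine M accepts s within k steps"; the toy language used below is chosen only for illustration:

```python
from itertools import count, islice, product

def all_strings(alphabet):
    """Enumerate Sigma* in length-then-lexicographic order."""
    yield ""
    for n in count(1):
        for chars in product(alphabet, repeat=n):
            yield "".join(chars)

def enumerate_language(accepts_within, alphabet):
    """Dovetailing enumerator for a Turing-recognizable language: at
    stage k, simulate k steps of the recognizer on the first k strings
    of Sigma*, printing each accepted string. As in the proof, a
    string may be printed many times."""
    for k in count(1):
        for s in islice(all_strings(alphabet), k):
            if accepts_within(s, k):
                yield s
```

Because no string is ever simulated for unboundedly many steps before the next one is started, a recognizer that loops on some input cannot block the enumeration of the rest of the language.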
For any string w in the language L(M), the TM M will run a finite number of steps (say k for w) before accepting it. Then in the k-th stage of the enumerator, w will be printed. Thus the enumerator will print every string M recognizes, though a single string may be printed several times. An enumerable language is Turing-recognizable: it is straightforward to construct a Turing machine M that recognizes the enumerable language L. We can have two tapes. On one tape we take the input string, and on the other tape we run the enumerator to enumerate the strings in the language one after another. Once a string is printed on the second tape, we compare it with the input on the first tape. If it is a match, then we accept the input; otherwise we continue. Note that if the string is not in the language, the Turing machine will never halt, and thus never accepts the string.
https://en.wikipedia.org/wiki/Enumerator_(in_theoretical_computer_science)
Genetix is a virtual machine created by theoretical physicist Bernard Hodson containing only 34 executable instructions.[1] It was inspired by the principles of Alan Turing[2] and allows an entire operating system, including a word processor and utilities, to run in 32 kilobytes.[3] "Genes" are sequences of 50 to 100 pointers that either point directly to one of the 34 basic instructions or to another gene. The 700 genes take up approximately 26 kilobytes altogether. The "gene pool" consists of a closed section and an open section where users can add their own genes. The upsides are security and efficiency.[4] Hodson suggested that a simple compiler could process any application, and that the rules were so simple that an application could be developed without the need for a compiler at all.[4] He also suggested that embedded systems might be a good market for Genetix.[4]
https://en.wikipedia.org/wiki/Genetix
In computability theory, the halting problem is the problem of determining, from a description of an arbitrary computer program and an input, whether the program will finish running, or continue to run forever. The halting problem is undecidable, meaning that no general algorithm exists that solves the halting problem for all possible program–input pairs. The problem comes up often in discussions of computability since it demonstrates that some functions are mathematically definable but not computable. A key part of the formal statement of the problem is a mathematical definition of a computer and program, usually via a Turing machine. The proof then shows, for any program f that might determine whether programs halt, that a "pathological" program g exists for which f makes an incorrect determination. Specifically, g is the program that, when called with some input, passes its own source and its input to f and does the opposite of what f predicts g will do. The behavior of f on g shows undecidability, as it means no program f will solve the halting problem in every possible case. The halting problem is a decision problem about properties of computer programs on a fixed Turing-complete model of computation, i.e., all programs that can be written in some given programming language that is general enough to be equivalent to a Turing machine. The problem is to determine, given a program and an input to the program, whether the program will eventually halt when run with that input. In this abstract framework, there are no resource limitations on the amount of memory or time required for the program's execution; it can take arbitrarily long and use an arbitrary amount of storage space before halting. The question is simply whether the given program will ever halt on a particular input. For example, in pseudocode, a program consisting only of an infinite loop does not halt; rather, it goes on forever. On the other hand, a program that simply prints a message and exits does halt.
While deciding whether these programs halt is simple, more complex programs prove problematic. One approach to the problem might be to run the program for some number of steps and check if it halts. However, as long as the program is running, it is unknown whether it will eventually halt or run forever. Turing proved no algorithm exists that always correctly decides whether, for a given arbitrary program and input, the program halts when run with that input. The essence of Turing's proof is that any such algorithm can be made to produce contradictory output and therefore cannot be correct. Some infinite loops can be quite useful. For instance, event loops are typically coded as infinite loops.[1] However, most subroutines are intended to finish.[2] In particular, in hard real-time computing, programmers attempt to write subroutines that are not only guaranteed to finish, but are also guaranteed to finish before a given deadline.[3] Sometimes these programmers use some general-purpose (Turing-complete) programming language, but attempt to write in a restricted style, such as MISRA C or SPARK, that makes it easy to prove that the resulting subroutines finish before the given deadline.[citation needed] Other times these programmers apply the rule of least power: they deliberately use a computer language that is not quite fully Turing-complete. Frequently, these are languages that guarantee all subroutines finish, such as Rocq.[citation needed] The difficulty in the halting problem lies in the requirement that the decision procedure must work for all programs and inputs. A particular program either halts on a given input or does not halt. Consider one algorithm that always answers "halts" and another that always answers "does not halt". For any specific program and input, one of these two algorithms answers correctly, even though nobody may know which one. Yet neither algorithm solves the halting problem generally.
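The "run it for some number of steps" approach mentioned above can be made concrete, and doing so shows exactly where it falls short: it can answer "halts" but never soundly answer "does not halt". A sketch, modelling programs as Python generators that yield once per simulated step:

```python
def halts_within(program, steps):
    """Step-bounded halting check: run `program` (a generator function)
    for at most `steps` steps. Returns True if it finished, or None
    ('unknown') if the budget ran out -- it can never soundly report
    'does not halt'."""
    it = program()
    for _ in range(steps):
        try:
            next(it)
        except StopIteration:
            return True       # the program halted within the budget
    return None               # still running: halts or loops, unknown

def countdown():              # a program that halts after 10 steps
    for _ in range(10):
        yield

def forever():                # a program that loops forever
    while True:
        yield
```

No matter how large the budget, `halts_within(forever, n)` only ever returns "unknown", which is precisely the gap Turing's proof shows cannot be closed.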
There are programs (interpreters) that simulate the execution of whatever source code they are given. Such programs can demonstrate that a program does halt if this is the case: the interpreter itself will eventually halt its simulation, which shows that the original program halted. However, an interpreter will not halt if its input program does not halt, so this approach cannot solve the halting problem as stated; it does not successfully answer "does not halt" for programs that do not halt. The halting problem is theoretically decidable for linear bounded automata (LBAs) or deterministic machines with finite memory. A machine with finite memory has a finite number of configurations, and thus any deterministic program on it must eventually either halt or repeat a previous configuration:[4] ...any finite-state machine, if left completely to itself, will fall eventually into a perfectly periodic repetitive pattern. The duration of this repeating pattern cannot exceed the number of internal states of the machine... However, a computer with a million small parts, each with two states, would have at least 2^1,000,000 possible states:[5] This is a 1 followed by about three hundred thousand zeroes ... Even if such a machine were to operate at the frequencies of cosmic rays, the aeons of galactic evolution would be as nothing compared to the time of a journey through such a cycle: Although a machine may be finite, and finite automata "have a number of theoretical limitations":[5] ...the magnitudes involved should lead one to suspect that theorems and arguments based chiefly on the mere finiteness [of] the state diagram may not carry a great deal of significance. It can also be decided automatically whether a nondeterministic machine with finite memory halts on none, some, or all of the possible sequences of nondeterministic decisions, by enumerating states after each possible decision.
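For a deterministic machine with finitely many configurations, the decidability argument above is constructive: record every configuration seen, and a repeat proves the machine loops forever. A minimal sketch, with a toy transition function for illustration:

```python
def halts_finite_state(step, state0):
    """Decide halting for a deterministic machine with finitely many
    configurations. `step` maps a configuration to the next one, or
    returns None to halt. A repeated configuration means the machine
    can never halt, since its future is fully determined."""
    seen = set()
    state = state0
    while state is not None:
        if state in seen:
            return False      # configuration repeated: runs forever
        seen.add(state)
        state = step(state)
    return True               # reached the halting configuration

# A toy machine over states 0..9: even starts count down to a halt,
# odd starts cycle among the odd states forever.
def toy_step(s):
    if s == 0:
        return None           # halt
    return s - 2 if s % 2 == 0 else (s + 2) % 10
```

The catch, as the quoted passage stresses, is the size of `seen`: a machine with 2^1,000,000 configurations makes this procedure decidable only in principle.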
In April 1936, Alonzo Church published his proof of the undecidability of a problem in the lambda calculus. Turing's proof was published later, in January 1937. Since then, many other undecidable problems have been described, including the halting problem, which emerged in the 1950s. Many papers and textbooks refer the definition and proof of undecidability of the halting problem to Turing's 1936 paper. However, this is not correct.[19][23] Turing did not use the terms "halt" or "halting" in any of his published works, including his 1936 paper.[24] A search of the academic literature from 1936 to 1958 showed that the first published material using the term "halting problem" was Rogers (1957). However, Rogers says he had a draft of Davis (1958) available to him,[19] and Martin Davis states in the introduction that "the expert will perhaps find some novelty in the arrangement and treatment of topics",[25] so the terminology must be attributed to Davis.[19][23] Davis stated in a letter that he had been referring to the halting problem since 1952.[22] The usage in Davis's book is as follows:[26] "[...] we wish to determine whether or not [a Turing machine] Z, if placed in a given initial state, will eventually halt. We call this problem the halting problem for Z. [...] Theorem 2.2 There exists a Turing machine whose halting problem is recursively unsolvable. A related problem is the printing problem for a simple Turing machine Z with respect to a symbol Si". A possible precursor to Davis's formulation is Kleene's 1952 statement, which differs only in wording:[19][20] there is no algorithm for deciding whether any given machine, when started from any given situation, eventually stops. The halting problem is Turing equivalent to both Davis's printing problem ("does a Turing machine starting from a given state ever print a given symbol?") and to the printing problem considered in Turing's 1936 paper ("does a Turing machine starting from a blank tape ever print a given symbol?").
However, Turing equivalence is rather loose and does not mean that the two problems are the same. There are machines which print but do not halt, and machines which halt but do not print. The printing and halting problems address different issues and exhibit important conceptual and technical differences. Thus, Davis was simply being modest when he said:[19] It might also be mentioned that the unsolvability of essentially these problems was first obtained by Turing. In his original proof Turing formalized the concept of algorithm by introducing Turing machines. However, the result is in no way specific to them; it applies equally to any other model of computation that is equivalent in its computational power to Turing machines, such as Markov algorithms, lambda calculus, Post systems, register machines, or tag systems. What is important is that the formalization allows a straightforward mapping of algorithms to some data type that the algorithm can operate upon. For example, if the formalism lets algorithms define functions over strings (such as Turing machines) then there should be a mapping of these algorithms to strings, and if the formalism lets algorithms define functions over natural numbers (such as computable functions) then there should be a mapping of algorithms to natural numbers. The mapping to strings is usually the most straightforward, but strings over an alphabet with n characters can also be mapped to numbers by interpreting them as numbers in an n-ary numeral system. The conventional representation of decision problems is the set of objects possessing the property in question. The halting set K = {(i, x) : program i halts when run on input x} represents the halting problem. This set is recursively enumerable, which means there is a computable function that lists all of the pairs (i, x) it contains. However, the complement of this set is not recursively enumerable.[27] There are many equivalent formulations of the halting problem; any set whose Turing degree equals that of the halting problem is such a formulation.
Examples of such sets include: Christopher Strachey outlined a proof by contradiction that the halting problem is not solvable.[28][29] The proof proceeds as follows: Suppose that there exists a total computable function halts(f) that returns true if the subroutine f halts (when run with no inputs) and returns false otherwise. Now consider a subroutine g that calls halts on itself and runs loop_forever if the answer is true, and otherwise returns. halts(g) must either return true or false, because halts was assumed to be total. If halts(g) returns true, then g will call loop_forever and never halt, which is a contradiction. If halts(g) returns false, then g will halt, because it will not call loop_forever; this is also a contradiction. Overall, g does the opposite of what halts says g should do, so halts(g) cannot return a truth value that is consistent with whether g halts. Therefore, the initial assumption that halts is a total computable function must be false. The concept above shows the general method of the proof, but the computable function halts does not directly take a subroutine as an argument; instead it takes the source code of a program. Moreover, the definition of g is self-referential. A rigorous proof addresses these issues. The overall goal is to show that there is no total computable function that decides whether an arbitrary program i halts on arbitrary input x; that is, the following function h (for "halts") is not computable:[30] h(i, x) = 1 if program i halts on input x, and h(i, x) = 0 otherwise. Here program i refers to the i-th program in an enumeration of all the programs of a fixed Turing-complete model of computation. (Figure caption: Possible values for a total computable function f arranged in a 2D array. The orange cells are the diagonal. The values of f(i,i) and g(i) are shown at the bottom; U indicates that the function g is undefined for a particular input value.) The proof proceeds by directly establishing that no total computable function with two arguments can be the required function h.
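Strachey's argument can be demonstrated mechanically: given any candidate `halts` oracle, build g and check that the oracle mispredicts it. This is only a model, with names chosen for illustration; "looping forever" is represented by a return value so the demonstration itself terminates:

```python
def make_g(halts):
    """Build Strachey's pathological subroutine g for a candidate
    halting oracle `halts`. g 'loops forever' exactly when halts
    predicts it halts; looping is modelled by a sentinel value so
    that the demonstration can actually run."""
    def g():
        if halts(g):
            return "loops forever"   # modelled, not actually looping
        return "halts"
    return g

def defeated(halts):
    """True iff the candidate oracle mispredicts its own g."""
    g = make_g(halts)
    prediction = halts(g)            # True means 'g halts'
    actually_halted = (g() == "halts")
    return prediction != actually_halted
```

Whatever strategy the candidate oracle uses, g inverts its verdict, so every candidate is defeated; that is the whole content of the contradiction.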
As in the sketch of the concept, given any total computable binary function f, the following partial function g is also computable by some program e: g(i) = 0 if f(i, i) = 0, and g(i) is undefined otherwise. Because g is partial computable, there must be a program e that computes g, by the assumption that the model of computation is Turing-complete. This program is one of all the programs on which the halting function h is defined. The next step of the proof shows that h(e, e) will not have the same value as f(e, e). It follows from the definition of g that exactly one of the following two cases must hold: either f(e, e) = 0, in which case g(e) = 0 is defined, so program e halts on input e and h(e, e) = 1; or f(e, e) ≠ 0, in which case g(e) is undefined, so program e does not halt on input e and h(e, e) = 0. In either case, f cannot be the same function as h. Because f was an arbitrary total computable function with two arguments, all such functions must differ from h. This proof is analogous to Cantor's diagonal argument. One may visualize a two-dimensional array with one column and one row for each natural number, as indicated in the table above. The value of f(i, j) is placed at column i, row j. Because f is assumed to be a total computable function, any element of the array can be calculated using f. The construction of the function g can be visualized using the main diagonal of this array. If the array has a 0 at position (i, i), then g(i) is 0. Otherwise, g(i) is undefined. The contradiction comes from the fact that there is some column e of the array corresponding to g itself. Now assume f was the halting function h: if g(e) is defined (g(e) = 0 in this case), g(e) halts, so f(e, e) = 1. But g(e) = 0 only when f(e, e) = 0, contradicting f(e, e) = 1. Similarly, if g(e) is not defined, then the halting function gives f(e, e) = 0, which leads to g(e) = 0 under g's construction. This contradicts the assumption of g(e) not being defined. In both cases a contradiction arises. Therefore an arbitrary total computable function f cannot be the halting function h.
A typical method of proving a problem P to be undecidable is to reduce the halting problem to P. For example, there cannot be a general algorithm that decides whether a given statement about natural numbers is true or false. The reason for this is that the proposition stating that a certain program will halt given a certain input can be converted into an equivalent statement about natural numbers. If an algorithm could find the truth value of every statement about natural numbers, it could certainly find the truth value of this one; but that would determine whether the original program halts. Rice's theorem generalizes the theorem that the halting problem is unsolvable. It states that for any non-trivial property, there is no general decision procedure that, for all programs, decides whether the partial function implemented by the input program has that property. (A partial function is a function which may not always produce a result, and so is used to model programs, which can either produce results or fail to halt.) For example, the property "halts for the input 0" is undecidable. Here, "non-trivial" means that the set of partial functions that satisfy the property is neither the empty set nor the set of all partial functions. For example, "halts or fails to halt on input 0" is clearly true of all partial functions, so it is a trivial property, and can be decided by an algorithm that simply reports "true". Also, this theorem holds only for properties of the partial function implemented by the program; Rice's theorem does not apply to properties of the program itself. For example, "halts on input 0 within 100 steps" is not a property of the partial function that is implemented by the program; it is a property of the program implementing the partial function, and is very much decidable.
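The reduction pattern mentioned above usually works through a small program transformation. As a sketch (names and the target property are illustrative): to show "halts on input 0" undecidable, wrap an arbitrary program–input pair in a new program that ignores its own input, so that the wrapper halts on 0 exactly when the original halts on its fixed input:

```python
def make_zero_input_program(program, x):
    """Reduction gadget: a program that ignores its own input and
    always runs `program` on the fixed input x. It halts on input 0
    (or any input) iff `program` halts on x."""
    def wrapped(_unused):
        return program(x)
    return wrapped

def halts_on(program, x, decides_halts_on_zero):
    """If a decider for 'halts on input 0' existed, this would solve
    the general halting problem -- so no such decider can exist."""
    return decides_halts_on_zero(make_zero_input_program(program, x))
```

The gadget is the whole trick: any decider for the special property would be forced to answer the general halting question through it.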
Gregory Chaitin has defined a halting probability, represented by the symbol Ω, a type of real number that informally is said to represent the probability that a randomly produced program halts. These numbers have the same Turing degree as the halting problem. Ω is a normal and transcendental number which can be defined but cannot be completely computed. This means one can prove that there is no algorithm which produces the digits of Ω, although its first few digits can be calculated in simple cases. Since the negative answer to the halting problem shows that there are problems that cannot be solved by a Turing machine, the Church–Turing thesis limits what can be accomplished by any machine that implements effective methods. However, not all machines conceivable to human imagination are subject to the Church–Turing thesis (e.g. oracle machines). It is an open question whether there can be actual deterministic physical processes that, in the long run, elude simulation by a Turing machine, and in particular whether any such hypothetical process could usefully be harnessed in the form of a calculating machine (a hypercomputer) that could solve the halting problem for a Turing machine, amongst other things. It is also an open question whether any such unknown physical processes are involved in the working of the human brain, and whether humans can solve the halting problem.[31] Turing's proof shows that there can be no mechanical, general method (i.e., a Turing machine or a program in some equivalent model of computation) to determine whether algorithms halt. However, each individual instance of the halting problem has a definitive answer, which may or may not be practically computable. Given a specific algorithm and input, one can often show that it halts or does not halt, and in fact computer scientists often do just that as part of a correctness proof. There are some heuristics that can be used in an automated fashion to attempt to construct a proof, which frequently succeed on typical programs.
This field of research is known as automated termination analysis. Some results have been established on the theoretical performance of halting problem heuristics, in particular the fraction of programs of a given size that may be correctly classified by a recursive algorithm. These results do not give precise numbers because the fractions are uncomputable and also highly dependent on the choice of program encoding used to determine "size". For example, consider classifying programs by their number of states and using a specific "Turing semi-infinite tape" model of computation that errors (without halting) if the program runs off the left side of the tape. Then lim_{n→∞} P(x halts is decidable | x has n states) = 1, over programs x chosen uniformly by number of states. But this result is in some sense "trivial" because these decidable programs are simply the ones that fall off the tape, and the heuristic is simply to predict not halting due to error. Thus a seemingly irrelevant detail, namely the treatment of programs with errors, can turn out to be the deciding factor in determining the fraction of programs.[32] To avoid these issues, several restricted notions of the "size" of a program have been developed. A dense Gödel numbering assigns numbers to programs such that each computable function occurs a positive fraction in each sequence of indices from 1 to n; i.e., a Gödelization φ is dense iff for all i, there exists a c > 0 such that liminf_{n→∞} #{j ∈ ℕ : 0 ≤ j < n, φ_i = φ_j} / n ≥ c.
For example, a numbering that assigns indices 2^n to nontrivial programs and all other indices the error state is not dense, but there exists a dense Gödel numbering of syntactically correct Brainfuck programs.[33] A dense Gödel numbering is called optimal if, for any other Gödel numbering α, there is a 1-1 total recursive function f and a constant c such that for all i, α_i = φ_{f(i)} and f(i) ≤ ci. This condition ensures that all programs have indices not much larger than their indices in any other Gödel numbering. Optimal Gödel numberings are constructed by numbering the inputs of a universal Turing machine.[34] A third notion of size uses universal machines operating on binary strings and measures the length of the string needed to describe the input program. A universal machine U is a machine for which, for every other machine V, there exists a total computable function h such that V(x) = U(h(x)). An optimal machine is a universal machine that achieves the Kolmogorov complexity invariance bound, i.e. for every machine V, there exists c such that for all outputs x, if a V-program of length n outputs x, then there exists a U-program of length at most n + c outputting x.[35] We consider partial computable functions (algorithms) A. For each n we consider the fraction ε_n(A) of errors among all programs of size metric at most n, counting each program x for which A fails to terminate, produces a "don't know" answer, or produces a wrong answer, i.e. x halts and A(x) outputs DOES_NOT_HALT, or x does not halt and A(x) outputs HALTS.
The behavior may be described as follows, for dense Gödelizations and optimal machines:[33][35] The complex nature of these bounds is due to the oscillatory behavior of ε_n(A). There are infrequently occurring new varieties of programs that come in arbitrarily large "blocks", and a constantly growing fraction of repeats. If the blocks of new varieties are fully included, the error rate is at least ε, but between blocks the fraction of correctly categorized repeats can be arbitrarily high. In particular, a "tally" heuristic that simply remembers the first N inputs and recognizes their equivalents allows reaching an arbitrarily low error rate infinitely often.[33]

The concepts raised by Gödel's incompleteness theorems are very similar to those raised by the halting problem, and the proofs are quite similar. In fact, a weaker form of the First Incompleteness Theorem is an easy consequence of the undecidability of the halting problem. This weaker form differs from the standard statement of the incompleteness theorem by asserting that an axiomatization of the natural numbers that is both complete and sound is impossible. The "sound" part is the weakening: it means that we require the axiomatic system in question to prove only true statements about natural numbers. Since soundness implies consistency, this weaker form can be seen as a corollary of the strong form. It is important to observe that the statement of the standard form of Gödel's First Incompleteness Theorem is completely unconcerned with the truth value of a statement, but only concerns the issue of whether it is possible to find it through a mathematical proof.

The weaker form of the theorem can be proved from the undecidability of the halting problem as follows.[36] Assume that we have a sound (and hence consistent) and complete axiomatization of all true first-order logic statements about natural numbers.
Then we can build an algorithm that enumerates all these statements. This means that there is an algorithm N(n) that, given a natural number n, computes a true first-order logic statement about natural numbers, and that for all true statements, there is at least one n such that N(n) yields that statement. Now suppose we want to decide whether the algorithm with representation a halts on input i. We know that this statement can be expressed with a first-order logic statement, say H(a, i). Since the axiomatization is complete, it follows that either there is an n such that N(n) = H(a, i) or there is an n′ such that N(n′) = ¬H(a, i). So if we iterate over all n until we either find H(a, i) or its negation, we will always halt, and furthermore, the answer it gives us will be true (by soundness). This means that this gives us an algorithm to decide the halting problem. Since we know that there cannot be such an algorithm, it follows that the assumption that there is a consistent and complete axiomatization of all true first-order logic statements about natural numbers must be false.

Many variants of the halting problem can be found in computability textbooks.[37] Typically, these problems are RE-complete and describe sets of complexity Σ⁰₁ in the arithmetical hierarchy, the same as the standard halting problem. The variants are thus undecidable, and the standard halting problem reduces to each variant and vice versa. However, some variants have a higher degree of unsolvability and cannot be reduced to the standard halting problem. The next two examples are common.

The universal halting problem, also known (in recursion theory) as totality, is the problem of determining whether a given computer program will halt for every input (the name totality comes from the equivalent question of whether the computed function is total). This problem is not only undecidable, as the halting problem is, but highly undecidable.
In terms of the arithmetical hierarchy, it is Π⁰₂-complete.[38] This means, in particular, that it cannot be decided even with an oracle for the halting problem.

There are many programs that, for some inputs, return a correct answer to the halting problem, while for other inputs they do not return an answer at all. However, the problem "given program p, is it a partial halting solver" (in the sense described) is at least as hard as the halting problem. To see this, assume that there is an algorithm PHSR ("partial halting solver recognizer") to do that. Then it can be used to solve the halting problem, as follows: to test whether input program x halts on y, construct a program p that on input (x, y) reports true and diverges on all other inputs. Then test p with PHSR. The above argument is a reduction of the halting problem to PHS recognition, and in the same manner, harder problems such as halting on all inputs can also be reduced, implying that PHS recognition is not only undecidable, but higher in the arithmetical hierarchy, specifically Π⁰₂-complete.

A lossy Turing machine is a Turing machine in which part of the tape may non-deterministically disappear. The halting problem is decidable for a lossy Turing machine but non-primitive recursive.[39] A machine with an oracle for the halting problem can determine whether particular Turing machines will halt on particular inputs, but it cannot determine, in general, whether machines equivalent to itself will halt.
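A partial halting solver of the kind described above can be sketched with a bounded-simulation heuristic. This is an illustrative encoding of our own, not a construction from the article: programs are modeled as Python generators that yield once per "step", and the heuristic answers HALTS only when it actually observes termination, so it is sound but incomplete.

```python
# A sound but partial halting heuristic: simulate for a bounded number of
# steps; answer HALTS if the program finishes, otherwise give no verdict.
def bounded_halting_heuristic(program, max_steps=1000):
    run = program()
    for _ in range(max_steps):
        try:
            next(run)
        except StopIteration:
            return "HALTS"      # the program really did halt
    return "DONT_KNOW"          # never wrongly claims DOES_NOT_HALT

def halts_quickly():            # terminates after 3 steps
    for _ in range(3):
        yield

def loops_forever():            # never terminates
    while True:
        yield

print(bounded_halting_heuristic(halts_quickly))   # HALTS
print(bounded_halting_heuristic(loops_forever))   # DONT_KNOW
```

For any fixed step budget there are halting programs the heuristic fails to classify, which is exactly why such solvers remain partial.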
https://en.wikipedia.org/wiki/Halting_problem
The Harvard architecture is a computer architecture with separate storage[1] and signal pathways for instructions and data. It is often contrasted with the von Neumann architecture, where program instructions and data share the same memory and pathways. This architecture is often used in real-time processing or low-power applications.[2][3]

The term is often stated as having originated from the Harvard Mark I[4] relay-based computer, which stored instructions on punched tape (24 bits wide) and data in electro-mechanical counters. These early machines had data storage entirely contained within the central processing unit, and provided no access to the instruction storage as data. Programs needed to be loaded by an operator; the processor could not initialize itself. However, in the only peer-reviewed paper on the topic, published in 2022, the author states that:[5][6]

Modern processors appear to the user to be systems with von Neumann architectures, with the program code stored in the same main memory as the data. For performance reasons, internally and largely invisibly to the user, most designs have separate processor caches for the instructions and data, with separate pathways into the processor for each. This is one form of what is known as the modified Harvard architecture.

Harvard architecture is historically, and traditionally, split into two address spaces, but having three, i.e. two extra (and all accessed in each cycle), is also done,[7] though rare. In a Harvard architecture, there is no need to make the two memories share characteristics. In particular, the word width, timing, implementation technology, and memory address structure can differ. In some systems, instructions for pre-programmed tasks can be stored in read-only memory while data memory generally requires read-write memory. In some systems, there is much more instruction memory than data memory, so instruction addresses are wider than data addresses.
In a system with a pure von Neumann architecture, instructions and data are stored in the same memory, so instructions are fetched over the same data path used to fetch data. This means that a CPU cannot simultaneously read an instruction and read or write data from or to the memory. In a computer using the Harvard architecture, the CPU can both read an instruction and perform a data memory access at the same time,[8] even without a cache. A Harvard architecture computer can thus be faster for a given circuit complexity because instruction fetches and data accesses do not contend for a single memory pathway.

Also, a Harvard architecture machine has distinct code and data address spaces: instruction address zero is not the same as data address zero. Instruction address zero might identify a twenty-four-bit value, while data address zero might indicate an eight-bit byte that is not part of that twenty-four-bit value.

A modified Harvard architecture machine is very much like a Harvard architecture machine, but it relaxes the strict separation between instruction and data while still letting the CPU concurrently access two (or more) memory buses. The most common modification includes separate instruction and data caches backed by a common address space. While the CPU executes from cache, it acts as a pure Harvard machine. When accessing backing memory, it acts like a von Neumann machine (where code can be moved around like data, which is a powerful technique). This modification is widespread in modern processors, such as the ARM architecture, Power ISA and x86 processors. It is sometimes loosely called a Harvard architecture, overlooking the fact that it is actually "modified".

Another modification provides a pathway between the instruction memory (such as ROM or flash memory) and the CPU to allow words from the instruction memory to be treated as read-only data. This technique is used in some microcontrollers, including the Atmel AVR.
This allows constant data, such as text strings or function tables, to be accessed without first having to be copied into data memory, preserving scarce (and power-hungry) data memory for read/write variables. Special machine language instructions are provided to read data from the instruction memory, or the instruction memory can be accessed using a peripheral interface.[a] (This is distinct from instructions which themselves embed constant data, although for individual constants the two mechanisms can substitute for each other.)

In recent years, the speed of the CPU has grown many times in comparison to the access speed of the main memory. Care needs to be taken to reduce the number of times main memory is accessed in order to maintain performance. If, for instance, every instruction run in the CPU requires an access to memory, the computer gains nothing from increased CPU speed, a problem referred to as being memory bound. It is possible to make extremely fast memory, but this is only practical for small amounts of memory, for cost, power and signal routing reasons. The solution is to provide a small amount of very fast memory known as a CPU cache which holds recently accessed data. As long as the data that the CPU needs is in the cache, the performance is much higher than it is when the CPU has to get the data from the main memory. On the other hand, the cache may still be limited to storing repetitive programs or data, still has a storage size limitation, and has other potential problems associated with it.[b]

Modern high-performance CPU chip designs incorporate aspects of both Harvard and von Neumann architecture. In particular, the "split cache" version of the modified Harvard architecture is very common. CPU cache memory is divided into an instruction cache and a data cache. Harvard architecture is used as the CPU accesses the cache.
In the case of a cache miss, however, the data is retrieved from the main memory, which is not formally divided into separate instruction and data sections, although it may well have separate memory controllers used for concurrent access to RAM, ROM and (NOR) flash memory. Thus, while a von Neumann architecture is visible in some contexts, such as when data and code come through the same memory controller, the hardware implementation gains the efficiencies of the Harvard architecture for cache accesses and at least some main memory accesses.

In addition, CPUs often have write buffers which let CPUs proceed after writes to non-cached regions. The von Neumann nature of memory is then visible when instructions are written as data by the CPU, and software must ensure that the caches (data and instruction) and write buffer are synchronized before trying to execute those just-written instructions.

The principal advantage of the pure Harvard architecture, simultaneous access to more than one memory system, has been reduced by modified Harvard processors using modern CPU cache systems. Relatively pure Harvard architecture machines are used mostly in applications where trade-offs, like the cost and power savings from omitting caches, outweigh the programming penalties from featuring distinct code and data address spaces. Even in these cases, it is common to employ special instructions in order to access program memory as though it were data for read-only tables, or for reprogramming; those processors are modified Harvard architecture processors.
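The distinct code and data address spaces described above can be sketched with a toy model. This is a hypothetical illustration of our own, not any real processor or instruction set: the point is simply that "address 0" in instruction memory and "address 0" in data memory name unrelated storage locations.

```python
# A toy Harvard-style machine: two independent memories, two address spaces.
class HarvardMachine:
    def __init__(self, program, data):
        self.imem = list(program)   # instruction memory, own address space
        self.dmem = list(data)      # data memory, own address space
        self.pc = 0                 # program counter indexes imem only

    def step(self):
        # In hardware, this instruction fetch and the data access below can
        # occur in the same cycle, since they use separate pathways.
        op, addr, val = self.imem[self.pc]
        if op == "STORE":
            self.dmem[addr] = val
        self.pc += 1

m = HarvardMachine(program=[("STORE", 0, 42)], data=[0, 0])
m.step()
# Data address 0 now holds 42, while instruction address 0 still holds the
# ("STORE", 0, 42) word: the two zeros identify different storage locations.
```

A von Neumann model, by contrast, would use a single memory list for both, so a store to address 0 could overwrite the instruction at address 0.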
https://en.wikipedia.org/wiki/Harvard_architecture
In computer science, imperative programming is a programming paradigm of software that uses statements that change a program's state. In much the same way that the imperative mood in natural languages expresses commands, an imperative program consists of commands for the computer to perform. Imperative programming focuses on describing how a program operates step by step (with the order of the steps generally determined in source code by the placement of statements one below the other),[1] rather than on high-level descriptions of its expected results. The term is often used in contrast to declarative programming, which focuses on what the program should accomplish without specifying all the details of how the program should achieve the result.[2]

Procedural programming is a type of imperative programming in which the program is built from one or more procedures (also termed subroutines or functions). The terms are often used as synonyms, but the use of procedures has a dramatic effect on how imperative programs appear and how they are constructed. Heavy procedural programming, in which state changes are localized to procedures or restricted to explicit arguments and returns from procedures, is a form of structured programming. Since the 1960s, structured programming and modular programming in general have been promoted as techniques to improve the maintainability and overall quality of imperative programs. The concepts behind object-oriented programming attempt to extend this approach.

Procedural programming could be considered a step toward declarative programming. A programmer can often tell, simply by looking at the names, arguments, and return types of procedures (and related comments), what a particular procedure is supposed to do, without necessarily looking at the details of how it achieves its result. At the same time, a complete program is still imperative, since it fixes the statements to be executed and their order of execution to a large extent.
The programming paradigm used to build programs for almost all computers typically follows an imperative model.[note 1] Digital computer hardware is designed to execute machine code, which is native to the computer and is usually written in the imperative style, although low-level compilers and interpreters using other paradigms exist for some architectures, such as lisp machines. From this low-level perspective, the program state is defined by the contents of memory, and the statements are instructions in the native machine language of the computer. Higher-level imperative languages use variables and more complex statements, but still follow the same paradigm. Recipes and process checklists, while not computer programs, are also familiar concepts that are similar in style to imperative programming; each step is an instruction, and the physical world holds the state. Since the basic ideas of imperative programming are both conceptually familiar and directly embodied in the hardware, most computer languages are in the imperative style.

Assignment statements, in the imperative paradigm, perform an operation on information located in memory and store the results in memory for later use. High-level imperative languages, in addition, permit the evaluation of complex expressions, which may consist of a combination of arithmetic operations and function evaluations, and the assignment of the resulting value to memory. Looping statements (as in while loops, do while loops, and for loops) allow a sequence of statements to be executed multiple times. Loops can either execute the statements they contain a predefined number of times, or they can execute them repeatedly until some condition is met. Conditional branching statements allow a sequence of statements to be executed only if some condition is met. Otherwise, the statements are skipped and the execution sequence continues from the statement following them.
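The statement forms described above (assignment, expression evaluation, loops, and conditional branching) can be illustrated with a short Python sketch of our own:

```python
# Imperative style: each statement updates the program's state in memory.
total = 0                      # assignment statement
for n in range(1, 11):         # looping statement (fixed number of iterations)
    if n % 2 == 0:             # conditional branching statement
        total = total + n      # expression evaluated, result stored back
print(total)                   # prints 30, the sum of the even numbers 2..10
```

Read top to bottom, the program is a sequence of commands; the variable `total` is the state they change.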
Unconditional branching statements allow an execution sequence to be transferred to another part of a program. These include the jump (called goto in many languages), switch, and the subprogram, subroutine, or procedure call (which usually returns to the next statement after the call). Early in the development of high-level programming languages, the introduction of the block enabled the construction of programs in which a group of statements and declarations could be treated as if they were one statement. This, alongside the introduction of subroutines, enabled complex structures to be expressed by hierarchical decomposition into simpler procedural structures. Many imperative programming languages (such as Fortran, BASIC, and C) are abstractions of assembly language.[3]

The earliest imperative languages were the machine languages of the original computers. In these languages, instructions were very simple, which made hardware implementation easier but hindered the creation of complex programs. FORTRAN, developed by John Backus at International Business Machines (IBM) starting in 1954, was the first major programming language to remove the obstacles presented by machine code in the creation of complex programs. FORTRAN was a compiled language that allowed named variables, complex expressions, subprograms, and many other features now common in imperative languages. The next two decades saw the development of many other major high-level imperative programming languages.
In the late 1950s and 1960s, ALGOL was developed in order to allow mathematical algorithms to be more easily expressed, and it even served as the operating system's target language for some computers. MUMPS (1966) carried the imperative paradigm to a logical extreme by not having any statements at all, relying purely on commands, even to the extent of making the IF and ELSE commands independent of each other, connected only by an intrinsic variable named $TEST. COBOL (1960) and BASIC (1964) were both attempts to make programming syntax look more like English. In the 1970s, Pascal was developed by Niklaus Wirth, and C was created by Dennis Ritchie while he was working at Bell Laboratories. Wirth went on to design Modula-2 and Oberon. For the needs of the United States Department of Defense, Jean Ichbiah and a team at Honeywell began designing Ada in 1978, after a 4-year project to define the requirements for the language. The specification was first published in 1983, with revisions in 1995, 2005, and 2012.

The 1980s saw a rapid growth in interest in object-oriented programming. These languages were imperative in style, but added features to support objects. The last two decades of the 20th century saw the development of many such languages. Smalltalk-80, originally conceived by Alan Kay in 1969, was released in 1980 by the Xerox Palo Alto Research Center (PARC). Drawing from concepts in another object-oriented language, Simula (which is considered the world's first object-oriented programming language, developed in the 1960s), Bjarne Stroustrup designed C++, an object-oriented language based on C. Design of C++ began in 1979 and the first implementation was completed in 1983.
In the late 1980s and 1990s, the notable imperative languages drawing on object-oriented concepts were Perl, released by Larry Wall in 1987; Python, released by Guido van Rossum in 1990; Visual Basic and Visual C++ (which included Microsoft Foundation Class Library (MFC) 2.0), released by Microsoft in 1991 and 1993 respectively; PHP, released by Rasmus Lerdorf in 1994; Java, by James Gosling (Sun Microsystems) in 1995; and JavaScript, by Brendan Eich (Netscape), and Ruby, by Yukihiro "Matz" Matsumoto, both released in 1995. Microsoft's .NET Framework (2002) is imperative at its core, as are its main target languages, VB.NET and C#, that run on it; however, Microsoft's F#, a functional language, also runs on it.

FORTRAN (1958) was unveiled as "The IBM Mathematical FORmula TRANslating system." It was designed for scientific calculations, without string handling facilities. Along with declarations, expressions, and statements, it supported: It succeeded because: However, non-IBM vendors also wrote Fortran compilers, but with a syntax that would likely fail IBM's compiler.[4] The American National Standards Institute (ANSI) developed the first Fortran standard in 1966. In 1978, Fortran 77 became the standard until 1991. Fortran 90 supports:

COBOL (1959) stands for "COmmon Business Oriented Language." Fortran manipulated symbols. It was soon realized that symbols did not need to be numbers, so strings were introduced.[5] The US Department of Defense influenced COBOL's development, with Grace Hopper being a major contributor. The statements were English-like and verbose. The goal was to design a language so managers could read the programs. However, the lack of structured statements hindered this goal.[6] COBOL's development was tightly controlled, so dialects did not emerge to require ANSI standards. As a consequence, it was not changed for 15 years, until 1974. The 1990s version did make consequential changes, like object-oriented programming.[6]

ALGOL (1960) stands for "ALGOrithmic Language."
It had a profound influence on programming language design.[7] Emerging from a committee of European and American programming language experts, it used standard mathematical notation and had a readable, structured design. Algol was the first language to define its syntax using the Backus–Naur form.[7] This led to syntax-directed compilers. It added features like: Algol's direct descendants include Pascal, Modula-2, Ada, Delphi and Oberon on one branch. On another branch there are C, C++ and Java.[7]

BASIC (1964) stands for "Beginner's All Purpose Symbolic Instruction Code." It was developed at Dartmouth College for all of their students to learn.[8] If a student did not go on to a more powerful language, the student would still remember Basic.[8] A Basic interpreter was installed in the microcomputers manufactured in the late 1970s. As the microcomputer industry grew, so did the language.[8] Basic pioneered the interactive session.[8] It offered operating system commands within its environment: However, the Basic syntax was too simple for large programs.[8] Recent dialects added structure and object-oriented extensions. Microsoft's Visual Basic is still widely used and produces a graphical user interface.[9]

C programming language (1973) got its name because the language BCPL was replaced with B, and AT&T Bell Labs called the next version "C". Its purpose was to write the UNIX operating system.[10] C is a relatively small language, making it easy to write compilers. Its growth mirrored the hardware growth in the 1980s.[10] Its growth was also due to its having the facilities of assembly language, but using a high-level syntax. It added advanced features like: C allows the programmer to control in which region of memory data is to be stored. Global variables and static variables require the fewest clock cycles to store. The stack is automatically used for the standard variable declarations. Heap memory is returned to a pointer variable from the malloc() function.
In the 1970s, software engineers needed language support to break large projects down into modules.[18] One obvious feature was to decompose large projects physically into separate files. A less obvious feature was to decompose large projects logically into abstract datatypes.[18] At the time, languages supported concrete (scalar) datatypes like integer numbers, floating-point numbers, and strings of characters. Concrete datatypes have their representation as part of their name.[19] Abstract datatypes are structures of concrete datatypes, with a new name assigned. For example, a list of integers could be called integer_list. In object-oriented jargon, abstract datatypes are called classes. However, a class is only a definition; no memory is allocated. When memory is allocated to a class, it's called an object.[20]

Object-oriented imperative languages developed by combining the need for classes and the need for safe functional programming.[21] A function, in an object-oriented language, is assigned to a class. An assigned function is then referred to as a method, member function, or operation. Object-oriented programming is executing operations on objects.[22]

Object-oriented languages support a syntax to model subset/superset relationships. In set theory, an element of a subset inherits all the attributes contained in the superset. For example, a student is a person. Therefore, the set of students is a subset of the set of persons. As a result, students inherit all the attributes common to all persons. Additionally, students have unique attributes that other persons don't have. Object-oriented languages model subset/superset relationships using inheritance.[23] Object-oriented programming became the dominant language paradigm by the late 1990s.[18]

C++ (1985) was originally called "C with Classes."[24] It was designed to expand C's capabilities by adding the object-oriented facilities of the language Simula.[25] An object-oriented module is composed of two files. The definitions file is called the header file.
Here is a C++ header file for the GRADE class in a simple school application: A constructor operation is a function with the same name as the class name.[26] It is executed when the calling operation executes the new statement. A module's other file is the source file. Here is a C++ source file for the GRADE class in a simple school application: Here is a C++ header file for the PERSON class in a simple school application: Here is a C++ source file for the PERSON class in a simple school application: Here is a C++ header file for the STUDENT class in a simple school application: Here is a C++ source file for the STUDENT class in a simple school application: Here is a driver program for demonstration: Here is a makefile to compile everything:
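The original C++ listings for the school application are not reproduced here. As an illustrative stand-in of our own (Python rather than C++, with hypothetical attribute names), the subset/superset relationship between the PERSON and STUDENT classes can be sketched as:

```python
# Illustrative only: Python stand-ins for the PERSON and STUDENT classes
# discussed in the text; not the article's original C++ listings.
class Person:
    def __init__(self, name):           # constructor: runs when the object is created
        self.name = name

class Student(Person):                  # the set of students is a subset of persons...
    def __init__(self, name, grade_point_average):
        super().__init__(name)          # ...so Student inherits Person's attributes
        self.gpa = grade_point_average  # plus an attribute unique to students

s = Student("Ada", 3.9)
print(s.name, s.gpa)                    # the inherited attribute and the new one
```

In C++ the same relationship is declared in the header file (`class STUDENT : public PERSON`), with the constructor bodies in the corresponding source file.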
https://en.wikipedia.org/wiki/Imperative_programming
Langton's ant is a two-dimensional Turing machine with a very simple set of rules but complex emergent behavior. It was invented by Chris Langton in 1986 and runs on a square lattice of black and white cells.[1] The idea has been generalized in several different ways, such as turmites, which add more colors and more states.

Squares on a plane are colored variously either black or white. We arbitrarily identify one square as the "ant". The ant can travel in any of the four cardinal directions at each step it takes. The "ant" moves according to the rules below: Langton's ant can also be described as a cellular automaton, where the grid is colored black or white and the "ant" square has one of eight different colors assigned to encode the combination of black/white state and the current direction of motion of the ant.[2]

These simple rules lead to complex behavior. Three distinct modes of behavior are apparent when starting on a completely white grid.[3] All finite initial configurations tested eventually converge to the same repetitive pattern, suggesting that the "highway" is an attractor of Langton's ant, but no one has been able to prove that this is true for all such initial configurations. It is only known that the ant's trajectory is always unbounded regardless of the initial configuration[4] – this result was incorrectly attributed and is known as the Cohen–Kong theorem.[5] In 2000, Gajardo et al. showed a construction that calculates any boolean circuit using the trajectory of a single instance of Langton's ant.[2]

Greg Turk and Jim Propp considered a simple extension of Langton's ant where, instead of just two colors, more colors are used.[6] The colors are modified in a cyclic fashion. A simple naming scheme is used: for each of the successive colors, a letter "L" or "R" is used to indicate whether a left or right turn should be taken. Langton's ant has the name "RL" in this naming scheme. Some of these extended Langton's ants produce patterns that become symmetric over and over again.
One of the simplest examples is the ant "RLLR". One sufficient condition for this to happen is that the ant's name, seen as a cyclic list, consists of consecutive pairs of identical letters "LL" or "RR". The proof involves Truchet tiles. The hexagonal grid permits up to six different rotations, which are notated here as N (no change), R1 (60° clockwise), R2 (120° clockwise), U (180°), L2 (120° counter-clockwise), L1 (60° counter-clockwise).

A further extension of Langton's ants is to consider multiple states of the Turing machine, as if the ant itself has a color that can change. These ants are called turmites, a contraction of "Turing machine termites". Common behaviours include the production of highways, chaotic growth and spiral growth.[7]

Multiple Langton's ants can co-exist on the 2D plane, and their interactions give rise to complex, higher-order automata that collectively build a wide variety of organized structures. There are different ways of modelling their interaction, and the results of the simulation may strongly depend on the choices made.[8] Multiple turmites can co-exist on the 2D plane as long as there is a rule that defines what happens when they meet. Ed Pegg, Jr. considered ants that can turn, for example, both left and right, splitting in two and annihilating each other when they meet.[9]
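The "L"/"R" naming scheme above can be simulated in a few lines. The sketch below is our own minimal implementation: an ant named by a rule string turns left or right according to the letter for the current cell's color, advances the color cyclically, and steps forward; the classic Langton's ant is the rule "RL" (turn right on white, left on black).

```python
# Minimal simulator for generalized Langton's ants named by "L"/"R" strings.
def run_ant(rule="RL", steps=11000):
    grid = {}                  # sparse grid; missing cells have color 0
    x = y = 0
    dx, dy = 0, -1             # facing "up" in screen coordinates
    for _ in range(steps):
        color = grid.get((x, y), 0)
        if rule[color] == "R":             # turn 90 degrees clockwise
            dx, dy = -dy, dx
        else:                              # "L": 90 degrees counter-clockwise
            dx, dy = dy, -dx
        grid[(x, y)] = (color + 1) % len(rule)  # advance the cell's color cyclically
        x, y = x + dx, y + dy              # move forward one square
    return grid

cells = run_ant("RL")          # classic ant; by ~10,000 steps it is building its highway
```

Multi-color ants such as "RLLR" are run the same way, with cells cycling through len(rule) colors instead of two.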
https://en.wikipedia.org/wiki/Langton%27s_ant
In computer science, a turmite is a Turing machine which has an orientation in addition to a current state, and whose "tape" consists of an infinite two-dimensional grid of cells. The terms ant and vant are also used. Langton's ant is a well-known type of turmite defined on the cells of a square grid. Paterson's worms are a type of turmite defined on the edges of an isometric grid. It has been shown that turmites in general are exactly equivalent in power to one-dimensional Turing machines with an infinite tape, as either can simulate the other.

Langton's ants were invented in 1986 and declared "equivalent to Turing machines".[1] Independently, in 1988, Allen H. Brady considered the idea of two-dimensional Turing machines with an orientation and called them "TurNing machines".[2][3] Apparently independently of both of these,[4] Greg Turk investigated the same kind of system and wrote to A. K. Dewdney about them. A. K. Dewdney named them "tur-mites" in his "Computer Recreations" column in Scientific American in 1989.[5] Rudy Rucker relates the story as follows:

Dewdney reports that, casting about for a name for Turk's creatures, he thought, "Well, they're Turing machines studied by Turk, so they should be tur-something. And they're like little insects, or mites, so I'll call them tur-mites! And that sounds like termites!" With the kind permission of Turk and Dewdney, I'm going to leave out the hyphen, and call them turmites.

Turmites can be categorised as being either relative or absolute. Relative turmites, alternatively known as "turning machines", have an internal orientation. Langton's ant is such an example. Relative turmites are, by definition, isotropic; rotating the turmite does not affect its outcome. Relative turmites are so named because the directions are encoded relative to the current orientation, equivalent to using the words "left" or "backwards".
Absolute turmites, by comparison, encode their directions in absolute terms: a particular instruction may direct the turmite to move "north". Absolute turmites are two-dimensional analogues of conventional Turing machines, so they are occasionally referred to simply as "two-dimensional Turing machines". The remainder of this article is concerned with the relative case.

The following specification is specific to turmites on a two-dimensional square grid, the most studied type of turmite. Turmites on other grids can be specified in a similar fashion. As with Langton's ant, turmites perform the following operations each timestep: As with Turing machines, the actions are specified by a state transition table listing the current internal state of the turmite and the color of the cell it is currently standing on. For example, the turmite shown in the image at the top of this page is specified by the following table: The direction to turn is one of L (90° left), R (90° right), N (no turn) and U (180° U-turn).

Starting from an empty grid or other configurations, the most commonly observed behaviours are chaotic growth, spiral growth and 'highway' construction. Rare examples become periodic after a certain number of steps. Allen H. Brady searched for terminating turmites (the equivalent of busy beavers) and found a 2-state 2-color machine that printed 37 1's before halting, and another that took 121 steps before halting.[3] He also considered turmites that move on a triangular grid, finding several busy beavers here too.

Ed Pegg, Jr. considered another approach to the busy beaver game. He suggested turmites that can turn, for example, both left and right, splitting in two. Turmites that later meet annihilate each other. In this system, a Busy Beaver is one that from a starting pattern of a single turmite lasts the longest before all the turmites annihilate each other.[6]

Following Allen H. Brady's initial work on turmites on a triangular grid, hexagonal tilings have also been explored.
Much of this work is due to Tim Hutton, and his results are on the Rule Table Repository. He has also considered turmites in three dimensions, and collected some preliminary results. Allen H. Brady and Tim Hutton have also investigated one-dimensional relative turmites on the integer lattice, which Brady termed flippers. (One-dimensional absolute turmites are of course simply known as Turing machines.)
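The per-timestep loop described above (look up the current cell's color, write a new color, turn, step forward) is easy to sketch. The following minimal Python simulator is an illustration, not code from the article; the function and rule names are invented, and Langton's ant is encoded as a one-state relative turmite:

```python
# Minimal relative-turmite simulator on a square grid (illustrative sketch).
# The rule table maps (state, color) -> (new_color, turn, new_state), where
# turn is a multiple of 90 degrees: L = -1, N = 0, R = +1, U = +2.
def run_turmite(rules, steps, start_state=0):
    grid = {}                      # sparse grid: (x, y) -> color, default 0
    x = y = 0
    dx, dy = 0, -1                 # initially facing "north" (screen coords)
    state = start_state
    for _ in range(steps):
        color = grid.get((x, y), 0)
        new_color, turn, state = rules[(state, color)]
        grid[(x, y)] = new_color
        for _ in range(turn % 4):  # rotate 90 degrees clockwise `turn` times
            dx, dy = -dy, dx
        x, y = x + dx, y + dy      # move forward one cell
    return grid

# Langton's ant as a one-state turmite: on white (0) paint black and turn
# right; on black (1) paint white and turn left.
langton = {(0, 0): (1, 1, 0), (0, 1): (0, -1, 0)}
final = run_turmite(langton, 11000)
```

Run long enough (around 10,000 steps on an empty grid), the ant enters its well-known "highway" phase; `final` holds the resulting sparse grid.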
https://en.wikipedia.org/wiki/Turmite
Alan Turing (1912–1954), a pioneering computer scientist, mathematician, and philosopher, is the eponym of all of the things listed below.
https://en.wikipedia.org/wiki/List_of_things_named_after_Alan_Turing
A modified Harvard architecture is a variation of the Harvard computer architecture that, unlike the pure Harvard architecture, allows memory that contains instructions to be accessed as data. Most modern computers that are documented as Harvard architecture are, in fact, modified Harvard architecture.

The original Harvard architecture computer, the Harvard Mark I, employed entirely separate memory systems to store instructions and data. The CPU fetched the next instruction and loaded or stored data simultaneously[1] and independently. This is in contrast to a von Neumann architecture computer, in which both instructions and data are stored in the same memory system and (without the complexity of a CPU cache) must be accessed in turn.

The physical separation of instruction and data memory is sometimes held to be the distinguishing feature of modern Harvard architecture computers. With microcontrollers (entire computer systems integrated onto single chips), the use of different memory technologies for instructions (e.g. flash memory) and data (typically read/write memory) in von Neumann machines is becoming popular. The true distinction of a Harvard machine is that instruction and data memory occupy different address spaces. In other words, a memory address does not uniquely identify a storage location (as it does in a von Neumann machine); it is also necessary to know the memory space (instruction or data) to which the address belongs.

A computer with a von Neumann architecture has the advantage over Harvard machines as described above in that code can also be accessed and treated the same as data, and vice versa. This allows, for example, data to be read from disk storage into memory and then executed as code, or self-optimizing software systems using technologies such as just-in-time compilation to write machine code into their own memory and then later execute it. Another example is self-modifying code, which allows a program to modify itself.
A disadvantage of these methods is that they raise issues with executable space protection, which increases the risks from malware and software defects. Accordingly, some pure Harvard machines are specialty products. Most modern computers instead implement a modified Harvard architecture. Those modifications are various ways to loosen the strict separation between code and data, while still supporting the higher-performance concurrent data and instruction access of the Harvard architecture.

The most common modification builds a memory hierarchy with separate CPU caches for instructions and data at lower levels of the hierarchy. There is a single address space for instructions and data, providing the von Neumann model, but the CPU fetches instructions from the instruction cache and fetches data from the data cache.[citation needed] Most programmers never need to be aware of the fact that the processor core implements a (modified) Harvard architecture, although they benefit from its speed advantages. Only programmers who generate and store instructions into memory need to be aware of issues such as cache coherency, if the store doesn't modify or invalidate a cached copy of the instruction in an instruction cache.

Another change preserves the "separate address space" nature of a Harvard machine, but provides special machine operations to access the contents of the instruction memory as data. Because data is not directly executable as instructions, such machines are not always viewed as "modified" Harvard architecture. A few Harvard architecture processors, such as the Maxim Integrated MAXQ, can execute instructions fetched from any memory segment – unlike the original Harvard processor, which can only execute instructions fetched from the program memory segment.
Such processors, like other Harvard architecture processors – and unlike pure von Neumann architecture – can read an instruction and read a data value simultaneously, if they're in separate memory segments, since the processor has (at least) two separate memory segments with independent data buses. The most obvious programmer-visible difference between this kind of modified Harvard architecture and a pure von Neumann architecture is that – when executing an instruction from one memory segment – the same memory segment cannot be simultaneously accessed as data.[3][4]

Three characteristics may be used to distinguish modified Harvard machines from pure Harvard and von Neumann machines. For pure Harvard machines, there is an address "zero" in instruction space that refers to an instruction storage location and a separate address "zero" in data space that refers to a distinct data storage location. By contrast, von Neumann and split-cache modified Harvard machines store both instructions and data in a single address space, so address "zero" refers to only one location, and whether the binary pattern in that location is interpreted as an instruction or as data is defined by how the program is written. However, just like pure Harvard machines, instruction-memory-as-data modified Harvard machines have separate address spaces, and therefore separate addresses "zero" for instruction and data space, so this characteristic does not distinguish that type of modified Harvard machine from pure Harvard machines.

This is the point of pure or modified Harvard machines, and why they co-exist with the more flexible and general von Neumann architecture: separate memory pathways to the CPU allow instructions to be fetched and data to be accessed at the same time, improving throughput. The pure Harvard machines have separate pathways with separate address spaces.
Split-cache modified Harvard machines have such separate access paths for CPU caches or other tightly coupled memories, but a unified access path covers the rest of the memory hierarchy. A von Neumann processor has only that unified access path. From a programmer's point of view, a modified Harvard processor in which instruction and data memories share an address space is usually treated as a von Neumann machine until cache coherency becomes an issue, as with self-modifying code and program loading. This can be confusing, but such issues are usually visible only to systems programmers and integrators.[clarification needed] Other modified Harvard machines are like pure Harvard machines in this regard.

The original Harvard machine, the Mark I, stored instructions on a punched paper tape and data in electro-mechanical counters. This, however, was entirely due to the limitations of technology available at the time. Today a Harvard machine such as the PIC microcontroller might use 12-bit wide flash memory for instructions, and 8-bit wide SRAM for data. In contrast, a von Neumann microcontroller such as an ARM7TDMI, or a modified Harvard ARM9 core, necessarily provides uniform access to flash memory and SRAM (as 8-bit bytes, in those cases).

Outside of applications where a cacheless DSP or microcontroller is required, most modern processors have a CPU cache which partitions instruction and data. There are also processors which are Harvard machines by the most rigorous definition (that program and data memory occupy different address spaces), and are only modified in the weak sense that there are operations to read and/or write program memory as data. For example, LPM (Load Program Memory) and SPM (Store Program Memory) instructions in the Atmel AVR implement such a modification. Similar solutions are found in other microcontrollers such as the PIC and Z8 Encore!, many families of digital signal processors such as the TI C55x cores, and more.
Because instruction execution is still restricted to the program address space, these processors are very unlike von Neumann machines. External wiring can also convert a strictly Harvard CPU core into a modified Harvard one: for example, by simply combining the `PSEN#` (program space read) and `RD#` (external data space read) signals externally through an AND gate on an Intel 8051 family microcontroller, the microcontroller is said to be "von Neumann connected", as the external data and program address spaces become unified.

Having separate address spaces creates certain difficulties in programming with high-level languages that do not directly support the notion that tables of read-only data might be in a different address space from normal writable data (and thus need to be read using different instructions). The C programming language can support multiple address spaces either through non-standard extensions[a] or through the now standardized extensions to support embedded processors.
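The address-space distinction discussed above can be illustrated with a toy model; this is a conceptual sketch only, and every class and field name here is invented for illustration. In a pure Harvard machine an address alone is ambiguous — the memory space must be named as well — while a von Neumann machine has exactly one location per address:

```python
# Toy model of the pure-Harvard address-space distinction (invented names).
class PureHarvard:
    """Two separate memories: an address is meaningless without its space."""
    def __init__(self, program, data):
        self.program = program     # instruction memory
        self.data = data           # distinct data memory

    def read(self, space, addr):
        mem = self.program if space == "instruction" else self.data
        return mem[addr]

class VonNeumann:
    """One unified memory: address 0 names exactly one storage location."""
    def __init__(self, memory):
        self.memory = memory

    def read(self, addr):
        return self.memory[addr]

# Two distinct "address zero"s in the Harvard model:
h = PureHarvard(program=[0xE5, 0x90], data=[0x42])
assert h.read("instruction", 0) != h.read("data", 0)
```

A split-cache modified Harvard machine would look like `VonNeumann` to the programmer; an instruction-memory-as-data machine would look like `PureHarvard` plus a special read operation on the program space.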
https://en.wikipedia.org/wiki/Modified_Harvard_architecture
Claude Elwood Shannon (April 30, 1916 – February 24, 2001) was an American mathematician, electrical engineer, computer scientist, cryptographer and inventor, known as the "father of information theory" and credited with laying the foundations of the Information Age.[1][2][3] Shannon was the first to describe the use of the Boolean algebra that is essential to all digital electronic circuits, and was one of the founding fathers of artificial intelligence.[4][5][6] Roboticist Rodney Brooks declared that Shannon was the 20th century engineer who contributed the most to 21st century technologies,[7] and mathematician Solomon W. Golomb described his intellectual achievement as "one of the greatest of the twentieth century".[8]

At the University of Michigan, Shannon earned two degrees, graduating in 1936 with a Bachelor of Science in both electrical engineering and mathematics. As a 21-year-old master's degree student in electrical engineering at MIT, he wrote a thesis, "A Symbolic Analysis of Relay and Switching Circuits", demonstrating that electrical applications of Boolean algebra could construct any logical numerical relationship,[9] thereby establishing the theory behind digital computing and digital circuits.[10] The thesis has been claimed to be the most important master's thesis of all time,[9] having been called the "birth certificate of the digital revolution",[11] and it won the 1939 Alfred Noble Prize.[12] He graduated from MIT in 1940 with a PhD in mathematics;[13] his thesis, focusing on genetics, contained important results but initially went unpublished.[14]

Shannon contributed to the field of cryptanalysis for national defense of the United States during World War II, including his fundamental work on codebreaking and secure telecommunications, writing a paper which is considered one of the foundational pieces of modern cryptography,[15] his work described as "a turning point, and marked the closure of classical cryptography and the beginning of modern cryptography".[16] The work of Shannon was foundational
for symmetric-key cryptography, including the work of Horst Feistel, the Data Encryption Standard (DES), and the Advanced Encryption Standard (AES).[16] As a result, Shannon has been called the "founding father of modern cryptography".[17]

His 1948 paper "A Mathematical Theory of Communication" laid the foundations for the field of information theory,[18][13] referred to as a "blueprint for the digital era" by electrical engineer Robert G. Gallager[19] and "the Magna Carta of the Information Age" by Scientific American.[20][21] Golomb compared Shannon's influence on the digital age to that which "the inventor of the alphabet has had on literature".[18] Advancements across multiple scientific disciplines utilized Shannon's theory, including the invention of the compact disc, the development of the Internet, the commercialization of mobile telephony, and the understanding of black holes.[22][23] He also formally introduced the term "bit",[24][2] and was a co-inventor of both pulse-code modulation and the first wearable computer.

Shannon made numerous contributions to the field of artificial intelligence,[4] including co-organizing the 1956 Dartmouth workshop, considered to be the discipline's founding event,[25][26] and papers on the programming of chess computers.[27][28] His Theseus machine was the first electrical device to learn by trial and error, being one of the first examples of artificial intelligence.[7][29]

The Shannon family lived in Gaylord, Michigan, and Claude was born in a hospital in nearby Petoskey.[5] His father, Claude Sr. (1862–1934), was a businessman and, for a while, a judge of probate in Gaylord. His mother, Mabel Wolf Shannon (1880–1945), was a language teacher who also served as the principal of Gaylord High School.[30] Claude Sr.
was a descendant of New Jersey settlers, while Mabel was a child of German immigrants.[5] Shannon's family was active in their Methodist Church during his youth.[31] Most of the first 16 years of Shannon's life were spent in Gaylord, where he attended public school, graduating from Gaylord High School in 1932. Shannon showed an inclination towards mechanical and electrical things. His best subjects were science and mathematics. At home, he constructed such devices as models of planes, a radio-controlled model boat and a barbed-wire telegraph system to a friend's house a half-mile away.[32] While growing up, he also worked as a messenger for the Western Union company. Shannon's childhood hero was Thomas Edison, who he later learned was a distant cousin. Both Shannon and Edison were descendants of John Ogden (1609–1682), a colonial leader and an ancestor of many distinguished people.[33][34]

In 1932, Shannon entered the University of Michigan, where he was introduced to the work of George Boole. He graduated in 1936 with two bachelor's degrees: one in electrical engineering and the other in mathematics.

In 1936, Shannon began his graduate studies in electrical engineering at the Massachusetts Institute of Technology (MIT), where he worked on Vannevar Bush's differential analyzer, an early analog computer that was composed of electromechanical parts and could solve differential equations.[35] While studying the complicated ad hoc circuits of this analyzer, Shannon designed switching circuits based on Boole's concepts. In 1937, he wrote his master's degree thesis, A Symbolic Analysis of Relay and Switching Circuits,[36] with a paper from this thesis published in 1938.[36] A revolutionary work for switching circuit theory, the thesis diagrammed switching circuits that could implement the essential operators of Boolean algebra.
Then he proved that his switching circuits could be used to simplify the arrangement of the electromechanical relays that were used at that time in telephone call routing switches. Next, he expanded this concept, proving that these circuits could solve all problems that Boolean algebra could solve. In the last chapter, he presented diagrams of several circuits, including a digital 4-bit full adder.[36] His work differed significantly from the work of previous engineers such as Akira Nakashima, who still relied on the existent circuit theory of the time and took a grounded approach.[37] Shannon's ideas were more abstract and relied on mathematics, thereby breaking new ground, and his approach dominates modern-day electrical engineering.[37]

Using electrical switches to implement logic is the fundamental concept that underlies all electronic digital computers. Shannon's work became the foundation of digital circuit design, as it became widely known in the electrical engineering community during and after World War II. The theoretical rigor of Shannon's work superseded the ad hoc methods that had prevailed previously. Howard Gardner hailed Shannon's thesis as "possibly the most important, and also the most famous, master's thesis of the century."[38] Herman Goldstine described it as "surely ... one of the most important master's theses ever written ... It helped to change digital circuit design from an art to a science."[39] One of the reviewers of his work commented that "To the best of my knowledge, this is the first application of the methods of symbolic logic to so practical an engineering problem. From the point of view of originality I rate the paper as outstanding."[40] Shannon's master's thesis won the 1939 Alfred Noble Prize.

Shannon received his PhD in mathematics from MIT in 1940.[33] Vannevar Bush had suggested that Shannon should work on his dissertation at the Cold Spring Harbor Laboratory, in order to develop a mathematical formulation for Mendelian genetics.
This research resulted in Shannon's PhD thesis, called An Algebra for Theoretical Genetics.[41] The thesis went unpublished after Shannon lost interest, but it did contain important results.[14] Notably, he was one of the first to apply an algebraic framework to study theoretical population genetics.[42] In addition, Shannon devised a general expression for the distribution of several linked traits in a population after multiple generations under a random mating system, which was original at the time,[43] a theorem that other population geneticists of the time had not worked out.[44]

In 1940, Shannon became a National Research Fellow at the Institute for Advanced Study in Princeton, New Jersey. In Princeton, Shannon had the opportunity to discuss his ideas with influential scientists and mathematicians such as Hermann Weyl and John von Neumann, and he also had occasional encounters with Albert Einstein and Kurt Gödel. Shannon worked freely across disciplines, and this ability may have contributed to his later development of mathematical information theory.[45]

Shannon had worked at Bell Labs for a few months in the summer of 1937,[46] and returned there to work on fire-control systems and cryptography during World War II, under a contract with section D-2 (Control Systems section) of the National Defense Research Committee (NDRC). Shannon is credited with the invention of signal-flow graphs, in 1942. He discovered the topological gain formula while investigating the functional operation of an analog computer.[47]

For two months early in 1943, Shannon came into contact with the leading British mathematician Alan Turing. Turing had been posted to Washington to share with the U.S. Navy's cryptanalytic service the methods used by the Government Code and Cypher School at Bletchley Park to break the cyphers used by the Kriegsmarine U-boats in the north Atlantic Ocean.[48] He was also interested in the encipherment of speech, and to this end spent time at Bell Labs.
Shannon and Turing met at teatime in the cafeteria.[48] Turing showed Shannon his 1936 paper that defined what is now known as the "universal Turing machine".[49][50] This impressed Shannon, as many of its ideas complemented his own.

Shannon and his team developed anti-aircraft systems that tracked enemy missiles and planes, while also determining the paths for intercepting missiles.[51] In 1945, as the war was coming to an end, the NDRC was issuing a summary of technical reports as a last step prior to its eventual closing down. Inside the volume on fire control, a special essay titled Data Smoothing and Prediction in Fire-Control Systems, coauthored by Shannon, Ralph Beebe Blackman, and Hendrik Wade Bode, formally treated the problem of smoothing the data in fire-control by analogy with "the problem of separating a signal from interfering noise in communications systems."[52] In other words, it modeled the problem in terms of data and signal processing and thus heralded the coming of the Information Age.

Shannon's work on cryptography was even more closely related to his later publications on communication theory.[53] At the close of the war, he prepared a classified memorandum for Bell Telephone Labs entitled "A Mathematical Theory of Cryptography", dated September 1945. A declassified version of this paper was published in 1949 as "Communication Theory of Secrecy Systems" in the Bell System Technical Journal. This paper incorporated many of the concepts and mathematical formulations that also appeared in his A Mathematical Theory of Communication.
Shannon said that his wartime insights into communication theory and cryptography developed simultaneously, and that "they were so close together you couldn't separate them".[54] In a footnote near the beginning of the classified report, Shannon announced his intention to "develop these results … in a forthcoming memorandum on the transmission of information."[55]

While he was at Bell Labs, Shannon proved that the cryptographic one-time pad is unbreakable in his classified research that was later published in 1949. The same article also proved that any unbreakable system must have essentially the same characteristics as the one-time pad: the key must be truly random, as large as the plaintext, never reused in whole or part, and kept secret.[56]

In 1948, the promised memorandum appeared as "A Mathematical Theory of Communication", an article in two parts in the July and October issues of the Bell System Technical Journal. This work focuses on the problem of how best to encode the message a sender wants to transmit. Shannon developed information entropy as a measure of the information content in a message, which is a measure of uncertainty reduced by the message. In so doing, he essentially invented the field of information theory.

The book The Mathematical Theory of Communication[57] reprints Shannon's 1948 article and Warren Weaver's popularization of it, which is accessible to the non-specialist. Weaver pointed out that the word "information" in communication theory is not related to what you do say, but to what you could say. That is, information is a measure of one's freedom of choice when one selects a message. Shannon's concepts were also popularized, subject to his own proofreading, in John Robinson Pierce's Symbols, Signals, and Noise.
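Shannon's entropy for a source whose symbols occur with probabilities p_i is H = Σ p_i · log2(1/p_i), measured in bits per symbol. As a small illustration (the helper name is invented, not from Shannon's paper), the empirical entropy of a message can be computed from its symbol frequencies:

```python
import math
from collections import Counter

def entropy_bits(message):
    """Empirical Shannon entropy H = sum p * log2(1/p), in bits per symbol."""
    counts = Counter(message)
    n = len(message)
    return sum((c / n) * math.log2(n / c) for c in counts.values())

# A fair-coin source carries 1 bit per symbol; a constant source carries none.
print(entropy_bits("01010101"))   # 1.0
print(entropy_bits("aaaaaaaa"))   # 0.0
```

The more predictable the message, the lower its entropy — which is exactly the sense in which information measures uncertainty reduced.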
Information theory's fundamental contribution to natural language processing and computational linguistics was further established in 1951, in his article "Prediction and Entropy of Printed English", showing upper and lower bounds of entropy on the statistics of English – giving a statistical foundation to language analysis. In addition, he proved that treating the space as the 27th letter of the alphabet actually lowers the uncertainty of written language, providing a quantifiable link between writing conventions and the statistics of the language.

Another notable paper published in 1949 is "Communication Theory of Secrecy Systems", a declassified version of his wartime work on the mathematical theory of cryptography, in which he proved that all theoretically unbreakable cyphers must have the same requirements as the one-time pad.

He is credited with the introduction of the sampling theorem, which he had derived as early as 1940,[58] and which is concerned with representing a continuous-time signal from a (uniform) discrete set of samples. This theory was essential in enabling telecommunications to move from analog to digital transmission systems in the 1960s and later.
He further wrote a paper in 1956 regarding coding for a noisy channel, which also became a classic paper in the field of information theory.[59] However, also in 1956 he wrote a one-page editorial for the IRE Transactions on Information Theory entitled "The Bandwagon", which he began by observing: "Information theory has, in the last few years, become something of a scientific bandwagon" and which he concluded by warning: "Only by maintaining a thoroughly scientific attitude can we achieve real progress in communication theory and consolidate our present position."[60]

Claude Shannon's influence on the field has been immense; for example, in a 1973 collection of the key papers in the field of information theory, he was author or coauthor of 12 of the 49 papers cited, while no one else appeared more than three times.[61] Even beyond his original paper in 1948, he is still regarded as the most important post-1948 contributor to the theory.[61]

In May 1951, Mervin Kelly received a request from the director of the CIA, general Walter Bedell Smith, regarding Shannon, who was regarded, on "the best authority", as the "most eminently qualified scientist in the particular field concerned".[62] As a result of the request, Shannon became part of the CIA's Special Cryptologic Advisory Group, or SCAG.[62]

In his time at Bell Labs, he also co-developed pulse-code modulation alongside Bernard M. Oliver and John R. Pierce.[63][64]

In 1950, Shannon designed and built, with the help of his wife, a learning machine named Theseus. It consisted of a maze on a surface, through which a mechanical mouse could move. Below the surface were sensors that followed the path of the mechanical mouse through the maze. After much trial and error, this device would learn the shortest path through the maze, and direct the mechanical mouse through the maze. The pattern of the maze could be changed at will.[29] Mazin Gilbert stated that Theseus "inspired the whole field of AI.
This random trial and error is the foundation of artificial intelligence."[29]

Shannon wrote multiple influential papers on artificial intelligence, such as his 1950 paper titled "Programming a Computer for Playing Chess", and his 1953 paper titled "Computers and Automata".[65] Alongside John McCarthy, he co-edited a book titled Automata Studies, which was published in 1956.[59] The categories in the articles within the volume were influenced by Shannon's own subject headings in his 1953 paper.[59] Shannon shared McCarthy's goal of creating a science of intelligent machines, but also held a broader view of viable approaches in automata studies, such as neural nets, Turing machines, cybernetic mechanisms, and symbolic processing by computer.[59]

Shannon co-organized and participated in the Dartmouth workshop of 1956, alongside John McCarthy, Marvin Minsky and Nathaniel Rochester, which is considered the founding event of the field of artificial intelligence.[66][26]

In 1956 Shannon joined the MIT faculty, holding an endowed chair. He worked in the Research Laboratory of Electronics (RLE). He continued to serve on the MIT faculty until 1978. Shannon developed Alzheimer's disease and spent the last few years of his life in a nursing home; he died in 2001, survived by his wife, a son and daughter, and two granddaughters.[67][68]

Outside of his academic pursuits, Shannon was interested in juggling, unicycling, and chess. He also invented many devices, including a Roman numeral computer called THROBAC, and juggling machines.[69][70] He built a device that could solve the Rubik's Cube puzzle.[33] Shannon also invented flame-throwing trumpets, rocket-powered frisbees, and plastic foam shoes for navigating a lake, which to an observer would appear as if Shannon were walking on water.[71]

Shannon designed the Minivac 601, a digital computer trainer to teach business people how computers functioned.
It was sold by the Scientific Development Corp starting in 1961.[72] He is further considered the co-inventor of the first wearable computer, along with Edward O. Thorp.[73] The device was used to improve the odds when playing roulette.

Shannon married Norma Levor, a wealthy, Jewish, left-wing intellectual, in January 1940. The marriage ended in divorce a year later. Levor later married Ben Barzman.[74] Shannon met his second wife, Mary Elizabeth Moore (Betty), when she was a numerical analyst at Bell Labs. They were married in 1949.[67] Betty assisted Claude in building some of his most famous inventions.[75] They had three children.[76] Shannon presented himself as apolitical and an atheist.[77]

There are six statues of Shannon sculpted by Eugene Daub: one at the University of Michigan; one at MIT in the Laboratory for Information and Decision Systems; one in Gaylord, Michigan; one at the University of California, San Diego; one at Bell Labs; and another at AT&T Shannon Labs.[78] The statue in Gaylord is located in the Claude Shannon Memorial Park.[79] After the breakup of the Bell System, the part of Bell Labs that remained with AT&T Corporation was named Shannon Labs in his honor.

In June 1954, Shannon was listed as one of the top 20 most important scientists in America by Fortune.[80] In 2013, information theory was listed as one of the top 10 revolutionary scientific theories by Science News.[81]

According to Neil Sloane, an AT&T Fellow who co-edited Shannon's large collection of papers in 1993, the perspective introduced by Shannon's communication theory (now called "information theory") is the foundation of the digital revolution, and every device containing a microprocessor or microcontroller is a conceptual descendant of Shannon's publication in 1948:[82] "He's one of the great men of the century. Without him, none of the things we know today would exist.
The whole digital revolution started with him."[83] The cryptocurrency unit shannon (a synonym for gwei) is named after him.[84] Shannon is credited by many as single-handedly creating information theory and laying the foundations for the Digital Age.[85][86][14][87][88][2] His achievements are considered to be on par with those of Albert Einstein, Sir Isaac Newton, and Charles Darwin.[1][18][6][89] The artificial intelligence large language model family Claude was named in Shannon's honor.[90]

A Mind at Play, a biography of Shannon written by Jimmy Soni and Rob Goodman, was published in 2017.[91] They described Shannon as "the most important genius you’ve never heard of, a man whose intellect was on par with Albert Einstein and Isaac Newton".[92] Consultant and writer Tom Rutledge, writing for Boston Review, stated that "Of the computer pioneers who drove the mid-20th-century information technology revolution—an elite men’s club of scholar-engineers who also helped crack Nazi codes and pinpoint missile trajectories—Shannon may have been the most brilliant of them all."[89] Electrical engineer Robert Gallager stated about Shannon that "He had this amazing clarity of vision. Einstein had it, too – this ability to take on a complicated problem and find the right way to look at it, so that things become very simple."[19] In an obituary, Neil Sloane and Robert Calderbank stated that "Shannon must rank near the top of the list of major figures of twentieth century science".[93] Due to his work in multiple fields, Shannon is also regarded as a polymath.[94][95]

Historian James Gleick noted the importance of Shannon, stating that "Einstein looms large, and rightly so. But we’re not living in the relativity age, we’re living in the information age. It’s Shannon whose fingerprints are on every electronic device we own, every computer screen we gaze into, every means of digital communication.
He’s one of these people who so transform the world that, after the transformation, the old world is forgotten."[3] Gleick further noted that "he created a whole field from scratch, from the brow of Zeus".[3]

On April 30, 2016, Shannon was honored with a Google Doodle to celebrate his life on what would have been his 100th birthday.[96][97][98][99]

The Bit Player, a feature film about Shannon directed by Mark Levinson, premiered at the World Science Festival in 2019.[100] Drawn from interviews conducted with Shannon in his house in the 1980s, the film was released on Amazon Prime in August 2020.

Shannon's The Mathematical Theory of Communication[57] begins with an interpretation of his own work by Warren Weaver. Although Shannon's entire work is about communication itself, Warren Weaver communicated his ideas in such a way that those not acclimated to complex theory and mathematics could comprehend the fundamental laws he put forth. The coupling of their unique communicational abilities and ideas generated the Shannon–Weaver model, although the mathematical and theoretical underpinnings emanate entirely from Shannon's work, which follows Weaver's introduction. For the layman, Weaver's introduction better communicates The Mathematical Theory of Communication,[57] but Shannon's subsequent logic, mathematics, and expressive precision were responsible for defining the problem itself.

"Theseus", created in 1950, was a mechanical mouse controlled by an electromechanical relay circuit that enabled it to move around a labyrinth of 25 squares.[101] The maze configuration was flexible and could be modified arbitrarily by rearranging movable partitions.[101] The mouse was designed to search through the corridors until it found the target. Having travelled through the maze, the mouse could then be placed anywhere it had been before, and because of its prior experience it could go directly to the target.
If placed in unfamiliar territory, it was programmed to search until it reached a known location, then proceed to the target, adding the new knowledge to its memory and learning new behavior.[101] Shannon's mouse appears to have been the first artificial learning device of its kind.[101] In 1949 Shannon completed a paper (published in March 1950) which estimates the game-tree complexity of chess, approximately 10^120. This number is now often referred to as the "Shannon number", and is still regarded today as an accurate estimate of the game's complexity. The number is often cited as one of the barriers to solving the game of chess by exhaustive (brute-force) analysis.[102][103] On March 9, 1949, Shannon presented a paper called "Programming a Computer for Playing Chess" at the National Institute for Radio Engineers Convention in New York. He described how to program a computer to play chess based on position scoring and move selection, and proposed basic strategies for restricting the number of possibilities to be considered in a game of chess. The paper was published in Philosophical Magazine in March 1950, and is considered one of the first articles on programming a computer to play chess and on using a computer to solve the game.[102][104] In 1950, Shannon wrote an article titled "A Chess-Playing Machine",[105] which was published in Scientific American. Both papers have had immense influence and laid the foundations for future chess programs.[27][28] His process for having the computer decide which move to make was a minimax procedure, based on an evaluation function of a given chess position.
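The minimax-with-evaluation scheme can be sketched in a few lines. This is an illustrative sketch, not Shannon's actual 1950 program: the piece-list position encoding and the toy two-ply game tree are hypothetical, while the material values (pawn 1, knight/bishop 3, rook 5, queen 9) follow his rough example.

```python
# A minimal sketch of Shannon-style material evaluation plus minimax search.
# The position encoding (piece lists) and the toy game tree are hypothetical
# illustrations, not Shannon's actual 1950 program.

PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}  # Shannon's material values

def evaluate(white_pieces, black_pieces):
    """Material score: white's total minus black's, per Shannon's rough example."""
    return (sum(PIECE_VALUES[p] for p in white_pieces)
            - sum(PIECE_VALUES[p] for p in black_pieces))

def minimax(node, maximizing):
    """Back up leaf evaluations through the tree: White maximizes, Black minimizes."""
    if "children" not in node:                      # leaf: evaluate material
        return evaluate(node["white"], node["black"])
    values = [minimax(child, not maximizing) for child in node["children"]]
    return max(values) if maximizing else min(values)

# Toy two-ply tree: White to move; each leaf lists the remaining pieces.
tree = {"children": [
    {"children": [  # after White's first candidate move
        {"white": ["Q", "R", "P"], "black": ["Q", "R"]},       # +1
        {"white": ["Q", "R", "P"], "black": ["Q", "R", "P"]},  #  0
    ]},
    {"children": [  # after White's second candidate move
        {"white": ["Q", "R"], "black": ["R", "P"]},            # +8
        {"white": ["R", "P"], "black": ["Q", "R"]},            # -8
    ]},
]}

print(minimax(tree, True))  # backed-up value of the best line for White
```

Note how the second move's tempting +8 leaf is discounted: Black, minimizing, would steer toward the −8 reply, so minimax prefers the first move's guaranteed 0.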
Shannon gave a rough example of an evaluation function in which the value of the black position was subtracted from that of the white position. Material was counted according to the usual chess piece relative values (1 point for a pawn, 3 points for a knight or bishop, 5 points for a rook, and 9 points for a queen).[106] He considered some positional factors, subtracting ½ point for each doubled pawn, backward pawn, and isolated pawn; mobility was incorporated by adding 0.1 point for each legal move available. Shannon formulated a version of Kerckhoffs' principle as "The enemy knows the system". In this form it is known as "Shannon's maxim". Shannon also contributed to combinatorics and detection theory.[107] His 1948 paper introduced many tools used in combinatorics. He did work on detection theory in 1944, his work being one of the earliest expositions of the "matched filter" principle.[107] He was known as a successful investor who gave lectures on investing. A report from Barron's on August 11, 1986, detailed the recent performance of 1,026 mutual funds; Shannon achieved a higher return than 1,025 of them. Comparing Shannon's portfolio from the late 1950s to 1986 with Warren Buffett's from 1965 to 1995, Shannon had a return of about 28%, compared to 27% for Buffett.[108] One such method of Shannon's, labeled Shannon's demon, was to form a portfolio of equal parts cash and a stock, and rebalance regularly to take advantage of the stock's randomly jittering price movements.[109] Shannon reportedly long thought of publishing about investing but ultimately did not, despite giving multiple lectures.[109] He was one of the first investors to download stock prices, and a snapshot of his portfolio in 1981 was found to be worth $582,717.50, translating to $1.5 million in 2015, excluding another one of his stocks.[109] The Shannon centenary, 2016, marked the life and influence of Claude Elwood Shannon on the hundredth anniversary of his birth on April 30, 1916.
It was inspired in part by the Alan Turing Year. An ad hoc committee of the IEEE Information Theory Society, including Christina Fragouli, Rüdiger Urbanke, Michelle Effros, Lav Varshney and Sergio Verdú,[110] coordinated worldwide events. The initiative was announced in the History Panel at the 2015 IEEE Information Theory Workshop in Jerusalem[111][112] and in the IEEE Information Theory Society newsletter.[113] A detailed listing of confirmed events was available on the website of the IEEE Information Theory Society.[114] The Claude E. Shannon Award was established in his honor; he was also its first recipient, in 1973.[122][123]
https://en.wikipedia.org/wiki/Claude_Shannon
The following are examples to supplement the article Turing machine. The first table below is Turing's very first example (Turing 1937). With regard to what actions the machine actually does, Turing (1936)[2] states the following. He makes this very clear when he reduces the above table to a single instruction called "b",[3] but his instruction consists of 3 lines. Instruction "b" has three different symbol possibilities {None, 0, 1}. Each possibility is followed by a sequence of actions until we arrive at the rightmost column, where the final m-configuration is "b". As observed by a number of commentators, including Turing (1937) himself (e.g., Post (1936), Post (1947), Kleene (1952), Wang (1954)), the Turing instructions are not atomic — further simplifications of the model can be made without reducing its computational power; see more at Post–Turing machine. As stated in the article Turing machine, Turing proposed that his table be further atomized by allowing only a single print/erase followed by a single tape movement L/R/N. He gives us this example of the first little table converted.[4] Turing's statement still implies five atomic operations at a given instruction (m-configuration). Because a Turing machine's actions are not atomic, a simulation of the machine must atomize each 5-tuple into a sequence of simpler actions. One such possibility is used in the following examples of "behaviors" of his machine. So-called "canonical" finite-state machines do the symbol tests "in parallel"; see more at microprogramming. In the following example of what the machine does, we will note some peculiarities of Turing's models. The convention of writing the figures only on alternate squares is very useful: I shall always make use of it.[2] Thus when printing he skips every other square. The printed-on squares are called F-squares; the blank squares in between may be used for "markers" and are called "E-squares", as in "liable to erasure."
The F-squares in turn are his "Figure squares" and will only bear the symbols 1 or 0 — symbols he called "figures" (as in "binary numbers"). In this example the tape starts out "blank", and the "figures" are then printed on it. For brevity only the table states are shown here. The same "run" with all the intermediate tape-printing and movements is shown below it. A close look at the table reveals certain problems with Turing's own example—not all the symbols are accounted for. For example, suppose his tape was not initially blank. What would happen? The Turing machine would read different values than the intended ones. This is a very important subroutine used in the "multiply" routine. The example Turing machine handles a string of 0s and 1s, with 0 represented by the blank symbol. Its task is to double any series of 1s encountered on the tape, writing a 0 between the original series and the copy. For example, when the head reads "111", it will write a 0, then "111"; the output will be "1110111". In order to accomplish its task, this Turing machine needs only 5 states of operation, called {s1, s2, s3, s4, s5}. Each instruction specifies four actions: the current state, a print operation (print symbol S, erase, or do nothing), a tape motion, and the next state. A "run" of the machine sequences through 16 machine configurations (aka Turing states). The behavior of this machine can be described as a loop: it starts out in s1, replaces the first 1 with a 0, then uses s2 to move to the right, skipping over 1s and the first 0 encountered. s3 then skips over the next sequence of 1s (initially there are none) and replaces the first 0 it finds with a 1. s4 moves back to the left, skipping over 1s until it finds a 0, and switches to s5. s5 then moves to the left, skipping over 1s until it finds the 0 that was originally written by s1. It replaces that 0 with a 1, moves one position to the right, and enters s1 again for another round of the loop.
This continues until s1 finds a 0 (the 0 in the middle of the two strings of 1s), at which time the machine halts. Another description sees the problem as one of keeping track of how many "1"s there are. We can't use one state for each possible number (a state for each of 0, 1, 2, 3, 4, 5, 6, etc.), because then we'd need infinitely many states to represent all the natural numbers, and the state machine is finite; we'll have to track this using the tape in some way. The basic way it works is by copying each "1" to the other side, moving back and forth; the machine is intelligent enough to remember which part of the trip it is on. In more detail, it carries each "1" across to the other side, recognizing the separating "0" in the middle, and recognizing the "0" on the other side to know it has reached the end. It comes back using the same method, detecting the middle "0", and then the "0" on the original side. This "0" on the original side is the key to how it keeps track of the number of 1s. The trick is that before carrying a "1" across, it marks that digit as "taken" by replacing it with a "0". When it returns, it fills that "0" back in with a "1", then moves on to the next one, marking it with a "0" and repeating the cycle, carrying that "1" across, and so on. With each trip across and back, the marker "0" moves one step closer to the centre. This is how it keeps track of how many "1"s it has taken across. When it returns, the marker "0" looks to the machine like the end of the collection of "1"s: any "1"s that have already been taken across are invisible to it (on the other side of the marker "0"), so it is as if it is working on (N−1) "1"s, similar to a proof by mathematical induction. A full "run" showing the results of the intermediate "motions" follows. The following Turing table of instructions was derived from Peterson.[5] Peterson moves the head; in the following model the tape moves.
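The five-state doubling machine walked through above can be reproduced with a short simulator. The transition table below is reconstructed from the prose description (s1 marks a 1 as "taken", s2/s3 carry it across the separator, s4/s5 return and restore it); it is an inferred table consistent with that walk-through, not a quoted one.

```python
# Simulator for the five-state "doubling" Turing machine described above.
# The transition table is reconstructed from the prose walk-through;
# it is an inferred table, not one quoted from the article.

# (state, symbol) -> (write, head_move, next_state); "H" halts.
TABLE = {
    ("s1", "1"): ("0", +1, "s2"),  # mark this 1 as "taken"
    ("s1", "0"): ("0",  0, "H"),   # middle 0 reached: no 1s left, halt
    ("s2", "1"): ("1", +1, "s2"),  # skip remaining 1s on the original side
    ("s2", "0"): ("0", +1, "s3"),  # cross the separating 0
    ("s3", "1"): ("1", +1, "s3"),  # skip 1s already copied
    ("s3", "0"): ("1", -1, "s4"),  # write the copy, turn around
    ("s4", "1"): ("1", -1, "s4"),  # head back left over the copy
    ("s4", "0"): ("0", -1, "s5"),  # re-cross the separator
    ("s5", "1"): ("1", -1, "s5"),  # keep moving left
    ("s5", "0"): ("1", +1, "s1"),  # restore the marked 1, loop again
}

def run(tape_str):
    tape = dict(enumerate(tape_str))   # sparse tape; blank cells read as "0"
    head, state = 0, "s1"
    while state != "H":
        write, move, state = TABLE[(state, tape.get(head, "0"))]
        tape[head] = write
        head += move
    lo, hi = min(tape), max(tape)
    return "".join(tape.get(i, "0") for i in range(lo, hi + 1))

print(run("111"))  # -> 1110111
```

Running it on "111" reproduces the article's example output "1110111", and the marker-0 bookkeeping described above is visible if you print the tape inside the loop.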
The "state" drawing of the 3-state busy beaver shows the internal sequences of events required to actually perform "the state". As noted above Turing (1937) makes it perfectly clear that this is the proper interpretation of the 5-tuples that describe the instruction.[1]For more about the atomization of Turing 5-tuples seePost–Turing machine: The following table shows the "compressed" run — just the Turing states: The full "run" of the 3-state busy beaver. The resulting Turing-states (what Turing called the "m-configurations" — "machine-configurations") are shown highlighted in grey in column A, and also under the machine's instructions (columns AF-AU)):
https://en.wikipedia.org/wiki/Turing_machine_examples
A Turing tarpit (or Turing tar-pit) is any programming language or computer interface that allows for flexibility in function but is difficult to learn and use because it offers little or no support for common tasks.[1] The phrase was coined in 1982 by Alan Perlis in the Epigrams on Programming:[2] 54. Beware of the Turing tar-pit in which everything is possible but nothing of interest is easy. In any Turing complete language, it is possible to write any computer program, so in a very rigorous sense nearly all programming languages are equally capable. However, having that theoretical ability is not the same as usefulness in practice. Turing tarpits are characterized by having a simple abstract machine that requires the user to deal with many details in the solution of a problem.[3] At the extreme opposite are interfaces that can perform very complex tasks with little human intervention but become obsolete if requirements change slightly. Some esoteric programming languages, such as Brainfuck or Malbolge, are specifically referred to as "Turing tarpits"[4] because they deliberately implement the minimum functionality necessary to be classified as Turing complete languages. Using such languages is a form of mathematical recreation: programmers can work out how to achieve basic programming constructs in an extremely difficult but mathematically Turing-equivalent language.[5]
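Brainfuck illustrates the tarpit idea concretely: its entire abstract machine (eight commands, one data pointer, one tape) fits in a page of interpreter code, yet even printing a single letter takes a 24-instruction program with a counting loop. Below is a minimal interpreter sketch; it omits the input command `,` for brevity.

```python
def brainfuck(program, tape_len=30000):
    """Minimal Brainfuck interpreter (no ',' input command, for brevity)."""
    tape, ptr, pc, out = [0] * tape_len, 0, 0, []
    jump, stack = {}, []                 # precompute matching-bracket table
    for i, c in enumerate(program):
        if c == "[":
            stack.append(i)
        elif c == "]":
            j = stack.pop()
            jump[i], jump[j] = j, i
    while pc < len(program):
        c = program[pc]
        if c == ">":   ptr += 1                       # move pointer right
        elif c == "<": ptr -= 1                       # move pointer left
        elif c == "+": tape[ptr] = (tape[ptr] + 1) % 256  # increment cell
        elif c == "-": tape[ptr] = (tape[ptr] - 1) % 256  # decrement cell
        elif c == ".": out.append(chr(tape[ptr]))     # output cell as a char
        elif c == "[" and tape[ptr] == 0: pc = jump[pc]   # skip loop body
        elif c == "]" and tape[ptr] != 0: pc = jump[pc]   # repeat loop body
        pc += 1
    return "".join(out)

# 8 * 8 = 64, plus one, is ASCII 65: everything is possible,
# but even "print A" is not easy.
print(brainfuck("++++++++[>++++++++<-]>+."))  # -> A
```

The contrast between the interpreter's brevity and the program's opacity is exactly Perlis's point: the machine is simple, so every detail of a task falls on the programmer.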
https://en.wikipedia.org/wiki/Turing_tarpit