The Government Security Classifications Policy (GSCP) is a system for classifying sensitive government data in the United Kingdom. Historically, the Government Protective Marking Scheme (GPMS) was used by government bodies in the UK; it divided data into UNCLASSIFIED, PROTECT, RESTRICTED, CONFIDENTIAL, SECRET and TOP SECRET. This system was designed for paper-based records; it is not easily adapted to modern government work and is not widely understood.[1]

The GSCP uses three levels of classification: OFFICIAL, SECRET and TOP SECRET.[2] This is simpler than the old model, and there is no direct relationship between the old and new classifications. "Unclassified" is deliberately omitted from the new model. Government bodies are not expected to remark existing data automatically, so organisations working under the new system may still handle some data marked according to the old system. Information Asset Owners continue to be responsible for information. The new policy does not specify particular IT security requirements: IT systems should be built and used in accordance with existing guidance from CESG.[3] Everybody who works with government, including contractors and suppliers, is responsible for protecting the information they work with, regardless of whether it has a protective marking.

Aggregation does not automatically trigger an increase in protective marking. For instance, a database with thousands of records that are individually OFFICIAL should not be relabelled as a SECRET database. Instead, information owners are expected to make decisions about controls based on a risk assessment, considering what the aggregated information is, who needs to access it, and how.

OFFICIAL includes most public-sector data, including a wide range of information on day-to-day government business. It is not subject to any special risks. Personal data would usually be OFFICIAL.[4] The data should be protected by controls based on commercial best practice rather than expensive, difficult specialist technology and bureaucracy. There is no requirement to mark every document as "OFFICIAL"; it is understood that this is the default for government documents.[5] Organisations may add "descriptors" to highlight particular types of official data, for instance commercially sensitive information about contracts, or diplomatic data which should not be seen by locally hired embassy staff. These descriptors do not automatically require special controls. OFFICIAL will usually include the kinds of data that were previously UNCLASSIFIED, RESTRICTED, or CONFIDENTIAL, but this may vary. The threat model for OFFICIAL data is similar to that of a typical large private-sector organisation: it anticipates that individual hackers, pressure groups, criminals, and investigative journalists might attempt to obtain information. The threat model does not guarantee protection against very persistent and skilled attacks, for instance by organised crime groups or by foreign governments; these are possible, but normal controls would make them more difficult, and much stronger controls would be disproportionate. People with routine access to OFFICIAL information should be subject to BPSS screening. OFFICIAL may include data which is subject to separate regulatory requirements, such as the Data Protection Act (personal data) or PCI DSS (card payments). OFFICIAL-SENSITIVE is an additional caveat for OFFICIAL data where it is particularly important to enforce need-to-know rules.
OFFICIAL-SENSITIVE documents should be marked, but they are not necessarily tracked. It is not a classification.[6] 'Sensitive' is a handling caveat for a small subset of information marked OFFICIAL that requires special handling by staff.

SECRET covers very sensitive information whose compromise might (for example) seriously harm national defence or crime investigations. Data should only be marked SECRET if the Senior Information Risk Owner (a board-level position in an organisation) agrees both that it is high-impact and that the data must be protected against very capable attackers. Although some specialist technology might be used to protect the data, there is still a strong emphasis on the reuse of commercial security tools. SECRET is a big step up from OFFICIAL; government bodies are warned against being overcautious and applying much stricter rules where OFFICIAL would be sufficient. People with routine access to SECRET information should usually have SC clearance. SECRET data may often be exempt from FOIA disclosure.

TOP SECRET covers data with exceptionally high impact levels, where compromise would have very serious consequences, for instance many deaths. This requires an extremely high level of protection, and controls are expected to be similar to those used on existing "Top Secret" data, including CESG-approved products. Very little risk can be tolerated at TOP SECRET, although no activity is completely risk-free.[7] People with routine access to TOP SECRET information should usually have DV clearance. TOP SECRET information is assumed to be exempt from FOIA disclosure. Disclosure of such information is assumed to be above the threshold for Official Secrets Act prosecution.[8]

Special handling instructions are additional markings used in conjunction with a classification marking to indicate the nature or source of the content, limit access to designated groups, and/or signify the need for enhanced handling measures. In addition to a paragraph near the start of the document, special handling instructions include descriptors, codewords, prefixes and national caveats.[2] A DESCRIPTOR is used with the security classification to identify certain categories of sensitive information and indicates the need for common-sense precautions to limit access. The normal descriptors are 'COMMERCIAL', 'LOCSEN' and 'PERSONAL'.[2] A codeword is a single word expressed in CAPITAL letters that follows the security classification to provide security cover for a particular asset or event. Codewords are usually only applied to SECRET and TOP SECRET assets.[2] The UK prefix is added to the security classification of all assets sent to foreign governments or international organisations. This prefix designates the UK as the originating country and indicates that the British Government should be consulted before any possible disclosure.[2] National caveats follow the security classification. Unless explicitly named, information bearing a national caveat is not sent to foreign governments, overseas contractors or international organisations, or released to any foreign nationals.[2] For example, with the exception of British Embassies and Diplomatic Missions or Service units or establishments, assets bearing the UK EYES ONLY national caveat are not sent overseas.[2]

As in the previous GPMS model, the choice of classification relates only to the data's confidentiality.
Unlike the old model it replaces, however, the GSCP does not treat the consequence of a compromise as the primary factor; instead it is based on the capability and motivation of potential threat actors (attackers) and the acceptability of that risk to the business. Where a capable and motivated attacker, such as a foreign intelligence service or serious and organised crime, is considered to be in scope for the data being classified, the business must implicitly accept this risk in order to classify the data as OFFICIAL. If it does not or cannot accept this risk, it must at least initially consider the data to be SECRET, though the classification may later be reduced to OFFICIAL or increased to TOP SECRET when the consequences of a compromise are also considered (a code sketch of this decision flow appears below).

The implication of this approach, and the binary nature of deciding whether the risk from capable and motivated attackers is acceptable, is that data cannot easily progress through the GSCP in a linear fashion as it did through GPMS. This complexity is often lost on Information Asset Owners previously used to the strictly hierarchical, rising tiers of GPMS (UNCLASSIFIED, PROTECT, RESTRICTED, CONFIDENTIAL, SECRET, TOP SECRET). By contrast, GSCP data starts as either OFFICIAL or SECRET depending on the nature of the threat and its acceptability to the business, and thereafter moves up or down based on the consequences of compromise. OFFICIAL data may therefore rise to TOP SECRET, but cannot become SECRET unless the risk previously accepted for a capable attacker is revised. SECRET data may be reduced to OFFICIAL where no serious consequences can be identified from a potential breach, or may rise to TOP SECRET if serious consequences could arise.

Impact levels also consider integrity and availability, but CESG's system of Business Impact Levels (BIL) is also under review and in most practical contexts has now fallen into disuse. It is therefore no longer strictly the case that the greater the consequences of a compromise of confidentiality, the higher the classification: data with a high impact (including material which could result in a threat to life) may still be classified as OFFICIAL if the relevant business owner believes it is not necessary to protect it from an attacker with the capabilities of a foreign intelligence service or serious and organised crime. Conversely, some data with much lower consequences (for example, ongoing police investigations into a criminal group, or intelligence information relating to possible prosecutions) may be classified as SECRET where the business will not accept compromise by such an attacker.

Guidance issued in April 2014 at the implementation of the GSCP, and still available from Gov.UK sources,[9] suggested that UK Government information systems would continue to be accredited much as before, normally using CESG Information Assurance Standards 1 & 2. This has, however, been progressively discarded through GDS and NCSC blog statements since May 2014, and the IS1 & 2 standard itself is no longer maintained or mandated. Accreditation has also been largely replaced by alternative models of assurance aligned to various commercial practices.
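The two-step flow described above, a binary decision about whether the risk from a highly capable attacker is acceptable, followed by adjustment for the consequences of compromise, can be sketched as a small decision function. This is one illustrative reading of the policy text, not an official algorithm; the function names and the simple consequence scale are invented for the example.

```python
def initial_classification(capable_attacker_in_scope: bool,
                           risk_acceptable: bool) -> str:
    """Step 1 (per the text above): a binary risk-acceptance decision.

    If a capable, motivated attacker (e.g. a foreign intelligence service)
    is in scope and the business cannot accept that risk, the data starts
    at SECRET; otherwise it starts at OFFICIAL.
    """
    if capable_attacker_in_scope and not risk_acceptable:
        return "SECRET"
    return "OFFICIAL"

def adjust_for_consequences(level: str, consequences: str) -> str:
    """Step 2: move up or down by consequence of compromise.

    `consequences` is an invented scale ("low", "serious", "exceptional").
    """
    if level == "SECRET":
        if consequences == "low":
            return "OFFICIAL"      # no serious impact identified
        if consequences == "exceptional":
            return "TOP SECRET"    # e.g. risk to many lives
    if level == "OFFICIAL" and consequences == "exceptional":
        return "TOP SECRET"        # OFFICIAL can rise to TOP SECRET,
                                   # but never sideways into SECRET
    return level

print(adjust_for_consequences(initial_classification(True, False), "low"))
# -> OFFICIAL: SECRET data reduced where no serious consequences exist
```

Note how the sketch encodes the non-linear behaviour the text describes: OFFICIAL data can jump to TOP SECRET, but nothing moves into SECRET unless the original risk-acceptance decision is revisited.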
The NAO report "Protecting Information across Government" (September 2016) was somewhat critical of the move to this model and of the adoption of the GSCP overall.[10] Existing published guidance continues to suggest that storage media holding UK government data should still be destroyed or purged according to HMG IA Policy No. 5; however, the terminology in this guidance and other material has not been fully updated to reflect the change from GPMS protective markings to GSCP classifications, and as such its value as a published standard is now arguably somewhat reduced. Higher classifications still tend to require stricter personnel vetting.

The Government Security Classifications Policy was completed and published in December 2012; additional guidance and supporting processes were developed over time. The policy came into effect on 2 April 2014. Government procurement procedures took account of the new policy from 21 October 2013, so that the new security requirements could be taken into account in contracts let from that date.[11]
https://en.wikipedia.org/wiki/Government_Security_Classifications_Policy
A TCP sequence prediction attack is an attempt to predict the sequence number used to identify the packets in a TCP connection, which can be used to counterfeit packets.[1]

The attacker hopes to correctly guess the sequence number to be used by the sending host. If they can do this, they will be able to send counterfeit packets to the receiving host which will seem to originate from the sending host, even though the counterfeit packets may in fact originate from some third host controlled by the attacker. One possible way for this to occur is for the attacker to listen to the conversation occurring between the trusted hosts, and then to issue packets using the same source IP address. By monitoring the traffic before an attack is mounted, the malicious host can figure out the correct sequence number. After the IP address and the correct sequence number are known, it is basically a race between the attacker and the trusted host to get the correct packet sent. One common way for the attacker to send it first is to launch another attack on the trusted host, such as a denial-of-service attack. Once the attacker has control over the connection, they are able to send counterfeit packets without getting a response.[2]

If an attacker can cause delivery of counterfeit packets of this sort, they may be able to cause various sorts of mischief, including the injection into an existing TCP connection of data of the attacker's choosing, and the premature closure of an existing TCP connection by the injection of counterfeit packets with the RST bit set, a TCP reset attack.

Theoretically, other information such as timing differences or information from lower protocol layers could allow the receiving host to distinguish authentic TCP packets from the sending host from counterfeit TCP packets with the correct sequence number sent by the attacker. If such other information is available to the receiving host, if the attacker cannot also fake that other information, and if the receiving host gathers and uses the information correctly, then the receiving host may be fairly immune to TCP sequence prediction attacks. Usually this is not the case, so the TCP sequence number is the primary means of protection of TCP traffic against these types of attack. Another defence against this type of attack is to configure any router or firewall to reject packets arriving from an external source but carrying an internal IP address. Although this does not fix the underlying weakness, it prevents the potential attacks from reaching their targets.[2]
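The reason early TCP stacks were vulnerable is that initial sequence numbers (ISNs) were generated by incrementing a global counter by a fixed step, so sampling one ISN let an attacker predict the next. The toy sketch below contrasts such a fixed-step generator with a randomized one; the step size and class names are assumptions of the example (real modern stacks use hashed, per-connection ISNs along the lines of RFC 6528).

```python
import random

class PredictableISN:
    """Old-style ISN source: a global counter bumped by a fixed step per
    connection (a simplified 4.2BSD-like scheme, assumed for illustration)."""
    def __init__(self, start=0, step=64_000):
        self.counter, self.step = start, step
    def next_isn(self):
        self.counter = (self.counter + self.step) % 2**32
        return self.counter

class RandomizedISN:
    """Modern-style source: an effectively unpredictable 32-bit ISN."""
    def next_isn(self):
        return random.getrandbits(32)

def attacker_guesses(isn_source):
    """Sample one ISN via a probe connection, then predict the next one."""
    probe = isn_source.next_isn()
    guess = (probe + 64_000) % 2**32   # assumes the fixed-step scheme
    actual = isn_source.next_isn()     # the victim's real next connection
    return guess == actual

print("predictable ISN guessed:", attacker_guesses(PredictableISN()))  # True
print("randomized ISN guessed:", attacker_guesses(RandomizedISN()))    # almost surely False
```

With the fixed-step source the guess always succeeds; against the randomized source the attacker is reduced to a 1-in-2^32 chance per spoofed packet, which is why ISN randomization is the standard mitigation.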
https://en.wikipedia.org/wiki/TCP_sequence_prediction_attack
In computing, a plug and play (PnP) device or computer bus is one with a specification that facilitates the recognition of a hardware component in a system without the need for physical device configuration or user intervention in resolving resource conflicts.[1][2] The term "plug and play" has since been expanded to a wide variety of applications to which the same lack of user setup applies.[3][4]

Expansion devices are controlled and exchange data with the host system through defined memory or I/O space port addresses, direct memory access channels, interrupt request lines and other mechanisms, which must be uniquely associated with a particular device to operate. Some computers provided unique combinations of these resources to each slot of a motherboard or backplane. Other designs provided all resources to all slots, and each peripheral device had its own address decoding for the registers or memory blocks it needed to communicate with the host system. Since fixed assignments made expansion of a system difficult, devices used several manual methods for assigning addresses and other resources, such as hard-wired jumpers, pins that could be connected with wire or removable straps, or switches that could be set for particular addresses.[5] As microprocessors made mass-market computers affordable, software configuration of I/O devices became advantageous because it allowed installation by non-specialist users. Early systems for software configuration of devices included the MSX standard, NuBus, Amiga Autoconfig, and IBM Micro Channel. Initially all expansion cards for the IBM PC required physical selection of I/O configuration on the board with jumper straps or DIP switches, but increasingly ISA bus devices were arranged for software configuration.[6] By 1995, Microsoft Windows included a comprehensive method of enumerating hardware at boot time and allocating resources, which was called the "Plug and Play" standard.[7]

Plug and play devices can have resources allocated at boot time only, or may be hotplug systems such as USB and IEEE 1394 (FireWire).[8]

Some early microcomputer peripheral devices required the end user physically to cut some wires and solder together others in order to make configuration changes;[9] such changes were intended to be largely permanent for the life of the hardware. As computers became more accessible to the general public, the need developed for more frequent changes to be made by computer users unskilled with soldering irons. Rather than cutting and soldering connections, configuration was accomplished by jumpers or DIP switches. Later on, this configuration process was automated: Plug and Play.[6]

The MSX system, released in 1983,[10] was designed to be plug and play from the ground up, and achieved this through a system of slots and subslots, where each had its own virtual address space, eliminating device addressing conflicts at the source. No jumpers or manual configuration were required, and the independent address space for each slot allowed very cheap and commonplace chips to be used, alongside cheap glue logic. On the software side, the drivers and extensions were supplied in the card's own ROM, requiring no disks or user intervention to configure the software. The ROM extensions abstracted any hardware differences and offered standard APIs as specified by ASCII Corporation.

In 1984, the NuBus architecture was developed by the Massachusetts Institute of Technology (MIT)[11] as a platform-agnostic peripheral interface that fully automated device configuration.
The specification was sufficiently intelligent that it could work with both big-endian and little-endian computer platforms that had previously been mutually incompatible. However, this agnostic approach increased interfacing complexity and required support chips on every device, which was expensive in the 1980s; apart from its use in Apple Macintoshes and NeXT machines, the technology was not widely adopted.

In 1984, Commodore developed the Autoconfig protocol and the Zorro expansion bus for its Amiga line of expandable computers. The first public appearance was at the CES computer show in Las Vegas in 1985, with the so-called "Lorraine" prototype. Like NuBus, Zorro devices had no jumpers or DIP switches at all. Configuration information was stored on a read-only device on each peripheral, and at boot time the host system allocated the requested resources to the installed card. The Zorro architecture did not spread to general computing use outside of the Amiga product line, but it was eventually upgraded as Zorro II and Zorro III for later iterations of Amiga computers.

In 1987, IBM released an update to the IBM PC known as the Personal System/2 line of computers, using the Micro Channel Architecture.[12] The PS/2 was capable of totally automatic self-configuration. Every piece of expansion hardware was issued with a floppy disk containing a special file used to auto-configure the hardware to work with the computer. The user would install the device, turn on the computer, and load the configuration information from the disk, and the hardware automatically assigned interrupts, DMA, and other needed settings. However, the disks posed a problem if they were damaged or lost, as the only options at the time for obtaining replacements were via postal mail or IBM's dial-up BBS service. Without the disks, any new hardware would be completely useless, and the computer would occasionally not boot at all until the unconfigured device was removed.

Micro Channel did not gain widespread support,[13] because IBM wanted to exclude clone manufacturers from this next-generation computing platform. Anyone developing for MCA had to sign non-disclosure agreements and pay royalties to IBM for each device sold, putting a price premium on MCA devices. End users and clone manufacturers revolted against IBM and developed their own open-standards bus, known as EISA. Consequently, MCA usage languished except in IBM's mainframes.

In time, many Industry Standard Architecture (ISA) cards incorporated, through proprietary and varied techniques, hardware to self-configure or to provide for software configuration; often, the card came with a configuration program on disk that could automatically set the software-configurable (but not itself self-configuring) hardware. Some cards had both jumpers and software configuration, with some settings controlled by each; this compromise reduced the number of jumpers that had to be set while avoiding great expense for certain settings, e.g. nonvolatile registers for a base-address setting. The problems of required jumpers continued, but slowly diminished as more and more devices, both ISA and other types, included extra self-configuration hardware. However, these efforts still did not solve the problem of making sure the end user has the appropriate software driver for the hardware.

ISA PnP, or (legacy) Plug & Play ISA, was a plug-and-play system that used a combination of modifications to hardware, the system BIOS, and operating system software to automatically manage resource allocations.
It was superseded by the PCI bus during the mid-1990s. PCI plug and play (autoconfiguration) was based on the PCI BIOS Specification of the 1990s, which was in turn superseded by ACPI in the 2000s.

In 1995, Microsoft released Windows 95, which tried to automate device detection and configuration as much as possible, but could still fall back to manual settings if necessary. During the initial install process of Windows 95, it would attempt to automatically detect all devices installed in the system. Since full auto-detection of everything was a new process without full industry support, the detection process constantly wrote to a progress-tracking log file. In the event that device probing failed and the system froze, the end user could reboot the computer and restart the detection process, and the installer would use the tracking log to skip past the point that caused the previous freeze.[14]

At the time, there could be a mix of devices in a system, some capable of automatic configuration and some still using fully manual settings via jumpers and DIP switches. The old world of DOS still lurked underneath Windows 95, and systems could be configured to load devices in three different ways. Microsoft could not assert full control over all device settings, so configuration files could include a mix of driver entries inserted by the Windows 95 automatic configuration process and driver entries inserted or modified manually by the computer users themselves. The Windows 95 Device Manager also could offer users a choice of several semi-automatic configurations to try to free up resources for devices that still needed manual configuration.

Also, although some later ISA devices were capable of automatic configuration, it was common for PC ISA expansion cards to limit themselves to a very small number of choices for interrupt request lines. For example, a network interface might limit itself to only interrupts 3, 7, and 10, while a sound card might limit itself to interrupts 5, 7, and 12. This results in few configuration choices if some of those interrupts are already used by some other device (a small allocation sketch appears below). The hardware of PC computers additionally limited device expansion options because interrupts could not be shared, and some multifunction expansion cards would use multiple interrupts for different card functions, such as a dual-port serial card requiring a separate interrupt for each serial port.

Because of this complex operating environment, the autodetection process sometimes produced incorrect results, especially in systems with large numbers of expansion devices. This led to device conflicts within Windows 95, with devices that were supposed to be fully self-configuring failing to work. The unreliability of the device installation process led to Plug and Play sometimes being referred to as Plug and Pray.[15]

Until approximately 2000, PC computers could still be purchased with a mix of ISA and PCI slots, so manual ISA device configuration might still be necessary. But with successive releases of new operating systems like Windows 2000 and Windows XP, Microsoft had sufficient clout to say that drivers would no longer be provided for older devices that did not support auto-detection. In some cases, the user was forced to purchase new expansion devices or a whole new system to support the next operating system release.
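The interrupt-allocation constraint described above lends itself to a small worked example. The sketch below assigns each device one IRQ from its supported list with no sharing, using simple backtracking; the device names and candidate lists are hypothetical, echoing the figures in the text, and this is an illustration of the constraint problem rather than how any particular operating system implemented it.

```python
def allocate_irqs(devices):
    """Assign each device one IRQ from its candidate list, with no sharing.

    `devices` maps a device name to the interrupt lines its hardware can use;
    classic ISA interrupts could not be shared, so each line is assigned once.
    Returns a name -> IRQ dict, or None if no conflict-free assignment exists.
    """
    names = sorted(devices, key=lambda d: len(devices[d]))  # most constrained first

    def backtrack(i, used, out):
        if i == len(names):
            return dict(out)
        name = names[i]
        for irq in devices[name]:
            if irq not in used:
                out[name] = irq
                result = backtrack(i + 1, used | {irq}, out)
                if result is not None:
                    return result
                del out[name]
        return None

    return backtrack(0, set(), {})

# Candidate lists echo the hypothetical figures in the text above.
print(allocate_irqs({"nic": [3, 7, 10], "sound": [5, 7, 12], "modem": [3, 4]}))
print(allocate_irqs({"nic": [7], "sound": [7]}))  # -> None: unresolvable conflict
```

The second call shows the failure mode users actually hit: two cards whose candidate lists overlap completely cannot coexist, which is exactly the situation that manual jumper reconfiguration, or a trip back to the shop, used to resolve.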
Several completely automated computer interfaces are currently used, each of which requires no device configuration or other action on the part of the computer user, apart from software installation, for the self-configuring devices; examples include USB and FireWire (IEEE 1394). For most of these interfaces, very little technical information is available to the end user about the performance of the interface. Although both FireWire and USB have bandwidth that must be shared by all devices, most modern operating systems are unable to monitor and report the amount of bandwidth being used or available, or to identify which devices are currently using the interface.[citation needed]
https://en.wikipedia.org/wiki/Plug_and_play
A police state describes a state whose government institutions exercise an extreme level of control over civil society and liberties. There is typically little or no distinction between the law and the exercise of political power by the executive, and the deployment of internal security and police forces plays a heightened role in governance. A police state is a characteristic of authoritarian, totalitarian or illiberal regimes (contrary to a liberal democratic regime). Such governments are typically one-party states and dominant-party states, but police-state-level control may emerge in multi-party systems as well.

Originally, a police state was a state regulated by a civil administration, but since the beginning of the 20th century it has "taken on an emotional and derogatory meaning" by describing an undesirable state of living characterized by the overbearing presence of civil authorities.[1] The inhabitants of a police state may experience restrictions on their mobility, or on their freedom to express or communicate political or other views, which are subject to police monitoring or enforcement. Political control may be exerted by means of a secret police force that operates outside the boundaries normally imposed by a constitutional state.[2] Robert von Mohl, who first introduced the rule of law to German jurisprudence, contrasted the Rechtsstaat ("legal" or "constitutional" state) with the anti-aristocratic Polizeistaat ("police state").[3]

The Oxford English Dictionary traces the phrase "police state" back to 1851, when it was used in reference to the use of a national police force to maintain order in the Austrian Empire.[4] The German term Polizeistaat came into English usage in the 1930s with reference to the totalitarian governments that had begun to emerge in Europe.[5]

Because there are different political perspectives as to what an appropriate balance is between individual freedom and national security, there are no objective standards defining a police state.[citation needed] The concept can be viewed as a balance or scale: along this spectrum, any law that has the effect of removing liberty is seen as moving towards a police state, while any law that limits government oversight of the populace is seen as moving towards a free state.[6]

An electronic police state is one in which the government aggressively uses electronic technologies to record, organize, search and distribute forensic evidence against its citizens.[7][8]

Early forms of police states can be found in ancient China. During the rule of King Li of Zhou in the 9th century BC, there was strict censorship, extensive state surveillance, and frequent executions of those who were perceived to be speaking against the regime. During this reign of terror, ordinary people did not dare to speak to each other on the street, and only made eye contact with friends as a greeting, a situation remembered by the phrase '道路以目' ("on the road, greeting only with the eyes"). Subsequently, during the short-lived Qin Dynasty, the police state became far more wide-reaching than its predecessors. In addition to strict censorship and the burning of all political and philosophical books, the state implemented strict control over its population through collective executions and by disarming the population. Residents were grouped into units of 10 households, with weapons strictly prohibited and only one kitchen knife allowed per 10 households. Spying and snitching were commonplace, and failure to report any anti-regime activity was treated the same as participating in it.
If one person committed any crime against the regime, all 10 households would be executed.[citation needed]

Some have characterised the rule of King Henry VIII during the Tudor period as a police state.[9][10] The Oprichnina established by Tsar Ivan IV within the Russian Tsardom in 1565 functioned as a predecessor of the modern police state, featuring persecutions and autocratic rule.[11][12]

Nazi Germany emerged from an originally democratic government, yet gradually exerted more and more repressive controls over its people in the lead-up to World War II. In addition to the SS and the Gestapo, the Nazi police state used the judiciary to assert control over the population from the 1930s until the end of the war in 1945.[13]

During the period of apartheid, South Africa maintained police-state attributes such as banning people and organizations, arresting political prisoners, maintaining segregated living communities, and restricting movement and access.[14]

Augusto Pinochet's Chile operated as a police state,[15] exhibiting "repression of public liberties, the elimination of political exchange, limiting freedom of speech, abolishing the right to strike, freezing wages".[16]

The Republic of Cuba under President (and later right-wing dictator) Fulgencio Batista was an authoritarian police state until his overthrow during the Cuban Revolution in 1959, with the rise to power of Fidel Castro and the foundation of a Marxist-Leninist republic.[17][18][19][20]

Following the failed July 1958 Haitian coup d'état attempt to overthrow the president, Haiti descended into an autocratic and despotic family dictatorship under the Haitian Vodou black nationalist François Duvalier (Papa Doc) and his National Unity Party. In 1959, Papa Doc ordered the creation of the Tonton Macoutes, a paramilitary unit he authorized to commit systematic violence and human rights abuses to suppress political opposition, including an unknown number of murders, public executions, rapes, disappearances of and attacks on dissidents: unrestrained state terrorism. In the 1964 Haitian constitutional referendum, a sham election, he declared himself president for life. After Duvalier's death in 1971, his son Jean-Claude (Baby Doc) succeeded him as the next president for life, continuing the regime until the popular uprising that overthrew him in February 1986.[citation needed]

Ba'athist Syria under the dictatorship of Bashar al-Assad was described [by whom?] as the most "ruthless police state" in the Arab world, with a tight system of restrictions on the movement of civilians, independent journalists and other unauthorized individuals. Alongside North Korea and Eritrea, it operated one of the strictest censorship machines regulating the transfer of information. The Syrian security apparatus was established in the 1970s by Hafez al-Assad, who ran a military dictatorship with the Ba'ath party as its civilian cover to enforce the loyalty of the Syrian population to the Assad family. The dreaded Mukhabarat was given a free hand to terrorise, torture or murder non-compliant civilians, while the public activities of any organized opposition were crushed with the raw firepower of the army.
Bashar and his family were overthrown in December 2024 during the Syrian Revolution, with the fall of the Assad regime and the fall of Damascus forcing Bashar to leave Syria for political asylum in Moscow, Russia.[21][22]

The region of modern-day Korea is claimed to have had elements of a police state, from the Juche-style Silla kingdom,[23] to the imposition of a fascist police state by the Japanese,[23] to the totalitarian police state imposed and maintained by the Kim family.[24] As of 2006, the Paris-based Reporters Without Borders had ranked North Korea last or second to last in its measure of press freedom every year since the Press Freedom Index's introduction, stating that the ruling Kim family controls all of the media.[25][26]

In response to government proposals to enact new security measures to curb protests, the AKP-led government of Turkey has been accused of turning Turkey into a police state.[27] Since the 2013 removal of the Muslim Brotherhood-affiliated former Egyptian president Mohamed Morsi from office, the government of Egypt has carried out extensive efforts to suppress certain forms of Islamism and religious extremism (including the aforementioned Muslim Brotherhood),[28][better source needed] leading to accusations that it has effectively become a "revolutionary police state".[29][30]

The USSR was a police state.[31] Notable secret police forces in the former USSR were the Cheka, the NKVD, and the KGB. Tools of state control used by the Soviet Union included censorship, forced labour under the Gulag system of labour camps,[32] and the deportation and genocide of ethnic minorities, as in the Holodomor, NKVD Order No. 00485 against the Poles, and De-Cossackization.[33] Modern-day Russia[34][35] and Belarus are often described as police states.[36][37]

The dictatorship of Ferdinand Marcos from the 1970s to the early 1980s in the Philippines had many characteristics of a police state.[38][39]

Hong Kong is perceived by some human rights organizations and press to have implemented the tools of a police state after the passage of the National Security legislation in 2020, following repeated attempts by the People's Republic of China to erode the rule of law in the former British colony.[40][41][42][43][44]

The United States has been described as a police state since the election of President Donald Trump in 2024, particularly due to the mass deportations of activists (including green card holders such as Mahmoud Khalil) and immigrants without due process or transparency, alongside unidentified ICE detentions.[45]

Fictional police states have featured in media ranging from novels to films to video games. George Orwell's novel 1984 describes Britain under the totalitarian Oceanian regime, which continuously invokes (and helps to cause) a perpetual war. This perpetual war is used as a pretext for subjecting the people to mass surveillance and invasive police searches. The novel was described by The Encyclopedia of Police Science as "the definitive fictional treatment of a police state, which has also influenced contemporary usage of the term".[46]
https://en.wikipedia.org/wiki/Police_state
In probability theory, a Markov model is a stochastic model used to model pseudo-randomly changing systems. It is assumed that future states depend only on the current state, not on the events that occurred before it (that is, it assumes the Markov property). Generally, this assumption enables reasoning and computation with the model that would otherwise be intractable. For this reason, in the fields of predictive modelling and probabilistic forecasting, it is desirable for a given model to exhibit the Markov property.

Andrey Andreyevich Markov (14 June 1856 – 20 July 1922) was a Russian mathematician best known for his work on stochastic processes. A primary subject of his research later became known as the Markov chain.

There are four common Markov models used in different situations, depending on whether every sequential state is observable or not, and on whether the system is to be adjusted on the basis of observations made: the Markov chain, the hidden Markov model, the Markov decision process, and the partially observable Markov decision process.

The simplest Markov model is the Markov chain. It models the state of a system with a random variable that changes through time. In this context, the Markov property indicates that the distribution for this variable depends only on the distribution of a previous state. An example use of a Markov chain is Markov chain Monte Carlo, which uses the Markov property to prove that a particular method for performing a random walk will sample from the joint distribution.

A hidden Markov model is a Markov chain for which the state is only partially or noisily observable. In other words, observations are related to the state of the system, but they are typically insufficient to precisely determine the state. Several well-known algorithms for hidden Markov models exist. For example, given a sequence of observations, the Viterbi algorithm will compute the most-likely corresponding sequence of states, the forward algorithm will compute the probability of the sequence of observations, and the Baum–Welch algorithm will estimate the starting probabilities, the transition function, and the observation function of a hidden Markov model. One common use is for speech recognition, where the observed data is the speech audio waveform and the hidden state is the spoken text. In this example, the Viterbi algorithm finds the most likely sequence of spoken words given the speech audio.

A Markov decision process is a Markov chain in which state transitions depend on the current state and an action vector that is applied to the system. Typically, a Markov decision process is used to compute a policy of actions that will maximize some utility with respect to expected rewards.

A partially observable Markov decision process (POMDP) is a Markov decision process in which the state of the system is only partially observed. POMDPs are computationally very hard to solve exactly, but approximation techniques have made them useful for a variety of applications, such as controlling simple agents or robots.[1]

A Markov random field, or Markov network, may be considered to be a generalization of a Markov chain in multiple dimensions. In a Markov chain, state depends only on the previous state in time, whereas in a Markov random field, each state depends on its neighbors in any of multiple directions. A Markov random field may be visualized as a field or graph of random variables, where the distribution of each random variable depends on the neighboring variables with which it is connected.
More specifically, the joint distribution for any random variable in the graph can be computed as the product of the "clique potentials" of all the cliques in the graph that contain that random variable. Modeling a problem as a Markov random field is useful because it implies that the joint distributions at each vertex in the graph may be computed in this manner.

Hierarchical Markov models can be applied to categorize human behavior at various levels of abstraction. For example, a series of simple observations, such as a person's location in a room, can be interpreted to determine more complex information, such as what task or activity the person is performing. Two kinds of hierarchical Markov models are the hierarchical hidden Markov model[2] and the abstract hidden Markov model.[3] Both have been used for behavior recognition,[4] and certain conditional independence properties between different levels of abstraction in the model allow for faster learning and inference.[3][5]

A tolerant Markov model (TMM) is a probabilistic-algorithmic Markov chain model.[6] It assigns probabilities according to a conditioning context that considers the most probable symbol, rather than the symbol that actually occurred, as the last symbol of the sequence. A TMM can model three different natures: substitutions, additions or deletions. Successful applications have been efficiently implemented in DNA sequence compression.[6][7]

Markov chains have been used as forecasting methods for several topics, for example price trends,[8] wind power,[9] and solar irradiance.[10] Markov-chain forecasting models use a variety of settings, from discretizing the time series[9] to hidden Markov models combined with wavelets[8] and the Markov-chain mixture distribution model (MCM).[10]
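The simplest of the models above, the Markov chain, is easy to see in code. The sketch below uses an invented two-state weather chain (the states and transition probabilities are assumptions of the example, not taken from the text) to show that each step depends only on the current state, and that long-run frequencies settle to the chain's stationary distribution.

```python
import random

# Invented two-state chain for illustration: transition probabilities
# out of each state sum to 1.
TRANSITIONS = {
    "sunny": [("sunny", 0.8), ("rainy", 0.2)],
    "rainy": [("sunny", 0.4), ("rainy", 0.6)],
}

def step(state):
    """Sample the next state using only the current state (the Markov property)."""
    states, probs = zip(*TRANSITIONS[state])
    return random.choices(states, weights=probs, k=1)[0]

def simulate(start, n):
    """Generate a path of n transitions starting from `start`."""
    path = [start]
    for _ in range(n):
        path.append(step(path[-1]))
    return path

random.seed(0)
path = simulate("sunny", 10_000)
# Long-run state frequencies approximate the stationary distribution,
# which for these probabilities works out to 2/3 sunny, 1/3 rainy.
print({s: round(path.count(s) / len(path), 3) for s in TRANSITIONS})
```

Note that `step` never inspects the path history, only its last element; that restriction is exactly the Markov property the article opens with.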
https://en.wikipedia.org/wiki/Markov_model
In telecommunications, direct-sequence spread spectrum (DSSS) is a spread-spectrum modulation technique primarily used to reduce overall signal interference. The direct-sequence modulation makes the transmitted signal wider in bandwidth than the information bandwidth. After the despreading, or removal of the direct-sequence modulation, in the receiver, the information bandwidth is restored, while the unintentional and intentional interference is substantially reduced.[1]

The Swiss inventor Gustav Guanella proposed a "means for and method of secret signals".[2] With DSSS, the message symbols are modulated by a sequence of complex values known as a spreading sequence. Each element of the spreading sequence, a so-called chip, has a shorter duration than the original message symbols. The modulation by the spreading sequence scrambles and spreads the signal in the spectrum, so that the resulting signal has the bandwidth of the spreading sequence. The smaller the chip duration, the larger the bandwidth of the resulting DSSS signal; more bandwidth multiplexed to the message signal results in better resistance against narrowband interference.[1][3]

Some practical and effective uses of DSSS include the code-division multiple access (CDMA) method, the IEEE 802.11b specification used in Wi-Fi networks, and the Global Positioning System.[4][5]

Direct-sequence spread-spectrum transmissions multiply the symbol sequence being transmitted by a spreading sequence that has a higher rate than the original message rate. Usually, sequences are chosen such that the resulting spectrum is spectrally white. Knowledge of the same sequence is used to reconstruct the original data at the receiving end. This is commonly implemented by element-wise multiplication with the spreading sequence, followed by summation over a message symbol period. This process, despreading, is mathematically a correlation of the received signal with the spreading sequence. In an AWGN channel, the despread signal's signal-to-noise ratio is increased by the spreading factor, which is the ratio of the spreading-sequence rate to the data rate. While a transmitted DSSS signal occupies a wider bandwidth than the direct modulation of the original signal would require, its spectrum can be restricted by conventional pulse-shape filtering.

If an undesired transmitter transmits on the same channel but with a different spreading sequence, the despreading process reduces the power of that signal. This effect is the basis for the code-division multiple access (CDMA) method of multi-user medium access, which allows multiple transmitters to share the same channel within the limits of the cross-correlation properties of their spreading sequences.
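The spread/despread cycle described above can be demonstrated numerically. The sketch below is a baseband toy, not any standardized air interface: the spreading factor, BPSK symbols, and zero-sum chip code are all assumptions of the example, chosen so that a constant (narrowband, DC-like) interferer cancels exactly in the correlation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters: 32 chips per symbol, BPSK (+/-1) message symbols,
# and a randomly ordered zero-sum +/-1 spreading code.
chips_per_symbol = 32
code = rng.permutation(np.array([1, -1] * (chips_per_symbol // 2)))
symbols = rng.choice([-1, 1], size=64)

# Spreading: every symbol is multiplied by the whole chip sequence, which
# widens the transmitted bandwidth by the spreading factor.
tx = np.repeat(symbols, chips_per_symbol) * np.tile(code, symbols.size)

# Channel: white noise plus a strong constant narrowband-like interferer.
rx = tx + 0.5 * rng.standard_normal(tx.size) + 2.0

# Despreading: multiply by the same code and sum over each symbol period.
# The signal adds coherently (gain = spreading factor), while the zero-sum
# code cancels the constant interferer in the sum.
rx_chips = rx.reshape(symbols.size, chips_per_symbol)
decisions = np.sign((rx_chips * code).sum(axis=1))

print("symbol errors:", int((decisions != symbols).sum()))  # expected: 0
```

Running the same despreading with a different (mismatched) code instead of `code` would leave the correlation near zero, which is the property CDMA exploits to separate users sharing one channel.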
https://en.wikipedia.org/wiki/Direct-sequence_spread_spectrum
White-box testing (also known as clear-box testing, glass-box testing, transparent-box testing, and structural testing) is a method of software testing that tests the internal structures or workings of an application, as opposed to its functionality (i.e. black-box testing). In white-box testing, an internal perspective of the system is used to design test cases. The tester chooses inputs to exercise paths through the code and determines the expected outputs. This is analogous to testing nodes in a circuit, e.g. in-circuit testing (ICT). White-box testing can be applied at the unit, integration and system levels of the software testing process. Although traditional testers tended to think of white-box testing as being done at the unit level, it is used for integration and system testing more frequently today. It can test paths within a unit, paths between units during integration, and paths between subsystems during a system-level test. Though this method of test design can uncover many errors or problems, it has the potential to miss unimplemented parts of the specification or missing requirements. Where white-box testing is design-driven,[1] that is, driven exclusively by agreed specifications of how each component of the software is required to behave (as in DO-178C and ISO 26262 processes), white-box test techniques can accomplish assessment for unimplemented or missing requirements.

White-box test design techniques include code coverage criteria such as control-flow testing, data-flow testing, branch testing, path testing, statement coverage and decision coverage, as well as modified condition/decision coverage. White-box testing is thus a method of testing the application at the level of the source code, with test cases derived through these design techniques. Their essence is the careful testing of the application at the source-code level to reduce hidden errors later on: the techniques exercise every visible path of the source code to minimize errors.[2] The whole point of white-box testing is the ability to know which line of the code is being executed and to be able to identify what the correct output should be.[2]

White-box testing's basic procedures require the tester to have an in-depth knowledge of the source code being tested. The programmer must have a deep understanding of the application to know what kinds of test cases to create so that every visible path is exercised for testing. Once the source code is understood, it can be analyzed for test cases to be created; white-box testing then proceeds through three basic steps to produce them.

A more modern view is that the dichotomy between white-box testing and black-box testing has blurred and is becoming less relevant. Whereas "white-box" originally meant using the source code, and "black-box" meant using requirements, tests are now derived from many documents at various levels of abstraction. The real point is that tests are usually designed from an abstract structure such as the input space, a graph, or logical predicates, and the question is what level of abstraction that abstract structure is derived from.[4] That can be the source code, requirements, input-space descriptions, or one of dozens of types of design models.
Therefore, the "white-box / black-box" distinction is less important and the terms are less relevant.[citation needed] Inpenetration testing, white-box testing refers to a method where awhite hat hackerhas full knowledge of the system being attacked.[6]The goal of a white-box penetration test is to simulate a malicious insider who has knowledge of and possibly basic credentials for the target system. For such a penetration test, administrative credentials are typically provided in order to analyse how or which attacks can impact high-privileged accounts.[7]Source code can be made available to be used as a reference for the tester. When the code is a target of its own, this is not (only) a penetration test but asource code security audit(or security review).[8]
https://en.wikipedia.org/wiki/White_box_testing
Neural network software is used to simulate, research, develop, and apply artificial neural networks, software concepts adapted from biological neural networks, and in some cases a wider array of adaptive systems such as artificial intelligence and machine learning.

Neural network simulators are software applications that are used to simulate the behavior of artificial or biological neural networks. They focus on one or a limited number of specific types of neural networks. They are typically stand-alone and not intended to produce general neural networks that can be integrated in other software. Simulators usually have some form of built-in visualization to monitor the training process. Some simulators also visualize the physical structure of the neural network.

Historically, the most common type of neural network software was intended for researching neural network structures and algorithms. The primary purpose of this type of software is, through simulation, to gain a better understanding of the behavior and properties of neural networks. Today, in the study of artificial neural networks, simulators have largely been replaced by more general component-based development environments as research platforms. Commonly used artificial neural network simulators include the Stuttgart Neural Network Simulator (SNNS) and Emergent.

In the study of biological neural networks, however, simulation software is still the only available approach. In such simulators, the physical, biological and chemical properties of neural tissue, as well as the electromagnetic impulses between the neurons, are studied. Commonly used biological network simulators include Neuron, GENESIS, NEST and Brian.

Unlike the research simulators, data analysis simulators are intended for practical applications of artificial neural networks. Their primary focus is on data mining and forecasting. Data analysis simulators usually have some form of preprocessing capabilities. Unlike the more general development environments, data analysis simulators use a relatively simple static neural network that can be configured. A majority of the data analysis simulators on the market use backpropagating networks or self-organizing maps as their core. The advantage of this type of software is that it is relatively easy to use. Neural Designer is one example of a data analysis simulator.

When the Parallel Distributed Processing volumes[1][2][3] were released in 1986–87, they provided some relatively simple software. The original PDP software did not require any programming skills, which led to its adoption by a wide variety of researchers in diverse fields. The original PDP software was developed into a more powerful package called PDP++, which in turn became an even more powerful platform called Emergent. With each development, the software has become more powerful, but also more daunting for beginners.

In 1997, the tLearn software was released to accompany a book.[4] This was a return to the idea of providing a small, user-friendly simulator designed with the novice in mind. tLearn allowed basic feed-forward networks, along with simple recurrent networks, both of which could be trained by the simple backpropagation algorithm. tLearn has not been updated since 1999. In 2011, the Basic Prop simulator was released. Basic Prop is a self-contained application, distributed as a platform-neutral JAR file, that provides much of the same simple functionality as tLearn.
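The kind of simple feed-forward, backpropagation-trained network offered by simulators like tLearn can be sketched in a few lines. This is an illustrative toy, not code from any of the packages named above; the network size, learning rate, iteration count, and XOR task are all assumptions of the example.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)        # XOR targets

# One hidden layer of 4 sigmoid units feeding a single sigmoid output.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: squared-error gradient chained through each sigmoid.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Plain gradient-descent updates (learning rate 1.0).
    W2 -= h.T @ d_out;  b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_h;    b1 -= d_h.sum(axis=0)

print(out.round(2).ravel())   # typically approaches [0, 1, 1, 0]
```

XOR is the classic demonstration task for such simulators because it is not linearly separable, so it genuinely requires the hidden layer that backpropagation trains.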
Development environments for neural networks differ from the software described above primarily on two counts: they can be used to develop custom types of neural networks, and they support deployment of the neural network outside the environment. In some cases they have advanced preprocessing, analysis and visualization capabilities.

A more modern type of development environment, currently favored in both industrial and scientific use, is based on a component-based paradigm. The neural network is constructed by connecting adaptive filter components in a pipe-and-filter flow. This allows for greater flexibility, as custom networks can be built, as well as custom components used by the network. In many cases this allows a combination of adaptive and non-adaptive components to work together. The data flow is controlled by a control system which is exchangeable, as are the adaptation algorithms. The other important feature is deployment capability. With the advent of component-based frameworks such as .NET and Java, component-based development environments are capable of deploying the developed neural network to these frameworks as inheritable components. In addition, some software can also deploy these components to several platforms, such as embedded systems.

Component-based development environments include Peltarion Synapse, NeuroDimension NeuroSolutions, Scientific Software Neuro Laboratory, and the LIONsolver integrated software. Free open source component-based environments include Encog and Neuroph. A disadvantage of component-based development environments is that they are more complex than simulators: they require more learning to operate fully, and development with them is more complicated.

Most available implementations of neural networks, however, are custom implementations in various programming languages and on various platforms. Basic types of neural networks are simple to implement directly. There are also many programming libraries that contain neural network functionality and that can be used in custom implementations (such as TensorFlow, Theano, etc., typically providing bindings to languages such as Python, C++ and Java).

In order for neural network models to be shared by different applications, a common language is necessary. The Predictive Model Markup Language (PMML) has been proposed to address this need. PMML is an XML-based language which provides a way for applications to define and share neural network models (and other data mining models) between PMML-compliant applications. PMML provides applications a vendor-independent method of defining models so that proprietary issues and incompatibilities are no longer a barrier to the exchange of models between applications. It allows users to develop models within one vendor's application and use other vendors' applications to visualize, analyze, evaluate or otherwise use the models. Previously this was very difficult, but with PMML the exchange of models between compliant applications is straightforward. A range of products are offered to produce and consume PMML, including a growing number of neural network products.
https://en.wikipedia.org/wiki/Neural_network_software
Arrow's impossibility theorem is a key result in social choice theory showing that no ranking-based decision rule for a group can satisfy the requirements of rational choice.[1] Specifically, Arrow showed no such rule can satisfy independence of irrelevant alternatives, the principle that a choice between two alternatives A and B should not depend on the quality of some third, unrelated option C.[2][3][4]

The result is often cited in discussions of voting rules,[5] where it implies no ranked voting rule can eliminate the spoiler effect,[6][7][8] though this was known before Arrow (dating back to the Marquis de Condorcet's voting paradox, showing the impossibility of majority rule). Arrow's theorem generalizes Condorcet's findings, showing the same problems extend to every group decision procedure based on relative comparisons, including non-majoritarian rules like collective leadership or consensus decision-making.[1]

While the impossibility theorem shows all ranked voting rules must have spoilers, the frequency of spoilers differs dramatically by rule. Plurality-rule methods like choose-one and ranked-choice (instant-runoff) voting are highly sensitive to spoilers,[9][10] creating them even in some situations (like center squeezes) where they are not mathematically necessary.[11][12] By contrast, majority-rule (Condorcet) methods of ranked voting uniquely minimize the number of spoiled elections[12] by restricting them to voting cycles,[11] which are rare in ideologically driven elections.[13][14] Under some models of voter preferences (like the left-right spectrum assumed in the median voter theorem), spoilers disappear entirely for these methods.[15][16]

Rated voting rules, where voters assign a separate grade to each candidate, are not affected by Arrow's theorem.[17][18][19] Arrow initially asserted the information provided by these systems was meaningless and therefore could not be used to prevent paradoxes, leading him to overlook them.[20] However, Arrow would later describe this as a mistake,[21][22] stating rules based on cardinal utilities (such as score and approval voting) are not subject to his theorem.[23][24]

When Kenneth Arrow proved his theorem in 1950, it inaugurated the modern field of social choice theory, a branch of welfare economics studying mechanisms to aggregate preferences and beliefs across a society.[25] Such a mechanism of study can be a market, voting system, constitution, or even a moral or ethical framework.[1]

In the context of Arrow's theorem, citizens are assumed to have ordinal preferences, i.e. orderings of candidates. If A and B are different candidates or alternatives, then A ≻ B means A is preferred to B. Individual preferences (or ballots) are required to satisfy intuitive properties of orderings; e.g., they must be transitive: if A ⪰ B and B ⪰ C, then A ⪰ C. The social choice function is then a mathematical function that maps the individual orderings to a new ordering that represents the preferences of all of society.
Arrow's theorem assumes as background that any non-degenerate social choice rule will satisfy unrestricted domain: it must produce a social preference for every possible profile of individual preferences.[26] Arrow's original statement of the theorem included non-negative responsiveness as a condition, i.e., that increasing the rank of an outcome should not make it lose—in other words, that a voting rule shouldn't penalize a candidate for being more popular.[2] However, this assumption is not needed or used in his proof (except to derive the weaker condition of Pareto efficiency), and Arrow later corrected his statement of the theorem to remove the inclusion of this condition.[3][29]

A commonly-considered axiom of rational choice is independence of irrelevant alternatives (IIA), which says that when deciding between A and B, one's opinion about a third option C should not affect their decision.[2] IIA is sometimes illustrated with a short joke by philosopher Sidney Morgenbesser:[30] ordering dessert, Morgenbesser is told he can choose between apple and blueberry pie, and orders apple. The waitress soon returns to say cherry pie is also available, to which he replies, "In that case, I'll have the blueberry." Arrow's theorem shows that if a society wishes to make decisions while always avoiding such self-contradictions, it cannot use ranked information alone.[30]

Condorcet's example is already enough to see the impossibility of a fair ranked voting system, given stronger conditions for fairness than Arrow's theorem assumes.[31] Suppose we have three candidates (A, B, and C) and three voters whose preferences are as follows:

Voter 1: A ≻ B ≻ C
Voter 2: B ≻ C ≻ A
Voter 3: C ≻ A ≻ B

If C is chosen as the winner, it can be argued any fair voting system would say B should win instead, since two voters (1 and 2) prefer B to C and only one voter (3) prefers C to B. However, by the same argument A is preferred to B, and C is preferred to A, by a margin of two to one on each occasion. Thus, even though each individual voter has consistent preferences, the preferences of society are contradictory: A is preferred over B which is preferred over C which is preferred over A.

Because of this example, some authors credit Condorcet with having given an intuitive argument that presents the core of Arrow's theorem.[31] However, Arrow's theorem is substantially more general; it applies to methods of making decisions other than one-person-one-vote elections, such as markets or weighted voting, based on ranked ballots.

Let A be a set of alternatives. A voter's preferences over A are a complete and transitive binary relation on A (sometimes called a total preorder), that is, a subset R of A × A satisfying completeness (for all a, b, at least one of (a, b) or (b, a) is in R) and transitivity (if (a, b) and (b, c) are in R, then so is (a, c)). The element (a, b) being in R is interpreted to mean that alternative a is preferred to alternative b. This situation is often denoted a ≻ b or a R b. Denote the set of all preferences on A by Π(A). Let N be a positive integer. An ordinal (ranked) social welfare function is a function[2]

F : Π(A)^N → Π(A)

which aggregates voters' preferences into a single preference on A. An N-tuple (R₁, …, R_N) ∈ Π(A)^N of voters' preferences is called a preference profile.
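Condorcet's example can be verified mechanically by computing pairwise majority margins. A short sketch, using the three-voter profile from the text:

# Sketch reproducing Condorcet's three-voter example by pairwise majorities.
from itertools import permutations

profile = [
    ["A", "B", "C"],  # voter 1: A ≻ B ≻ C
    ["B", "C", "A"],  # voter 2: B ≻ C ≻ A
    ["C", "A", "B"],  # voter 3: C ≻ A ≻ B
]

def majority_prefers(profile, a, b):
    votes = sum(1 for ballot in profile if ballot.index(a) < ballot.index(b))
    return votes > len(profile) / 2

for a, b in permutations("ABC", 2):
    if majority_prefers(profile, a, b):
        print(f"{a} beats {b} by majority")
# Output: A beats B, B beats C, C beats A -- a cycle, so the societal
# preference produced by majority rule is not transitive.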
Arrow's impossibility theorem: If there are at least three alternatives, then there is no social welfare function satisfying all three of the following conditions: Pareto efficiency, independence of irrelevant alternatives, and non-dictatorship.[32]

Arrow's proof used the concept of decisive coalitions.[3] Definition: a coalition G is decisive over an ordered pair (x, y) if, whenever everyone in G ranks x above y, society ranks x above y. Our goal is to prove that the decisive coalition contains only one voter, who controls the outcome—in other words, a dictator. The following proof is a simplification taken from Amartya Sen[33] and Ariel Rubinstein.[34] The simplified proof uses an additional concept: a coalition G is weakly decisive over (x, y) if, whenever everyone in G ranks x above y and everyone outside G ranks y above x, society ranks x above y.

Henceforth assume that the social choice system satisfies unrestricted domain, Pareto efficiency, and IIA. Also assume that there are at least 3 distinct outcomes.

Field expansion lemma—if a coalition G is weakly decisive over (x, y) for some x ≠ y, then it is decisive. Let z be an outcome distinct from x and y. Claim: G is decisive over (x, z). Let everyone in G vote x over z. By IIA, changing the votes on y does not matter for x, z. So change the votes such that x ≻ᵢ y ≻ᵢ z in G, and y ≻ᵢ x and y ≻ᵢ z outside of G. By Pareto, y ≻ z. By the coalition's weak decisiveness over (x, y), x ≻ y. Thus x ≻ z. □ Similarly, G is decisive over (z, y). By iterating the above two claims (note that decisiveness implies weak decisiveness), we find that G is decisive over all ordered pairs in {x, y, z}. Then iterating that, we find that G is decisive over all ordered pairs of alternatives.

Group contraction lemma—if a coalition is decisive and has size ≥ 2, then it has a proper subset that is also decisive. Let G be a coalition with size ≥ 2. Partition the coalition into nonempty subsets G₁, G₂. Fix distinct x, y, z. Design the following voting pattern (notice that it is the cyclic voting pattern which causes the Condorcet paradox):

voters in G₁: x ≻ᵢ y ≻ᵢ z
voters in G₂: z ≻ᵢ x ≻ᵢ y
voters outside G: y ≻ᵢ z ≻ᵢ x

(Items other than x, y, z are not relevant.) Since G is decisive, we have x ≻ y. So at least one is true: x ≻ z or z ≻ y. If x ≻ z, then G₁ is weakly decisive over (x, z). If z ≻ y, then G₂ is weakly decisive over (z, y). Now apply the field expansion lemma.

By Pareto, the entire set of voters is decisive. Thus, by the group contraction lemma, there is a size-one decisive coalition—a dictator.

Proofs using the concept of the pivotal voter originated with Salvador Barberá in 1980.[35] The proof given here is a simplified version based on two proofs published in Economic Theory.[32][36] Assume there are n voters.
We assign all of these voters an arbitrary ID number, ranging from 1 through n, which we can use to keep track of each voter's identity as we consider what happens when they change their votes. Without loss of generality, we can say there are three candidates whom we call A, B, and C. (Because of IIA, including more than 3 candidates does not affect the proof.) We will prove that any social choice rule respecting unanimity and independence of irrelevant alternatives (IIA) is a dictatorship. The proof is in three parts.

Consider the situation where everyone prefers A to B, and everyone also prefers C to B. By unanimity, society must also prefer both A and C to B. Call this situation profile 0. On the other hand, if everyone preferred B to everything else, then society would have to prefer B to everything else by unanimity. Now arrange all the voters in some arbitrary but fixed order, and for each i let profile i be the same as profile 0, but move B to the top of the ballots for voters 1 through i. So profile 1 has B at the top of the ballot for voter 1, but not for any of the others. Profile 2 has B at the top for voters 1 and 2, but no others, and so on. Since B eventually moves to the top of the societal preference as the profile number increases, there must be some profile, number k, for which B first moves above A in the societal rank. We call voter k, whose ballot change causes this to happen, the pivotal voter for B over A. Note that the pivotal voter for B over A is not, a priori, the same as the pivotal voter for A over B. In part three of the proof we will show that these do turn out to be the same. Also note that by IIA the same argument applies if profile 0 is any profile in which A is ranked above B by every voter, and the pivotal voter for B over A will still be voter k. We will use this observation below.

In this part of the argument we refer to voter k, the pivotal voter for B over A, as the pivotal voter for simplicity. We will show that the pivotal voter dictates society's decision for B over C. That is, we show that no matter how the rest of society votes, if the pivotal voter ranks B over C, then that is the societal outcome. Note again that the dictator for B over C is not a priori the same as that for C over B. In part three of the proof we will see that these turn out to be the same too. In the following, we call voters 1 through k − 1 segment one, and voters k + 1 through n segment two. To begin, suppose that the ballots are as follows:

Voters in segment one: B ≻ A ≻ C
Voter k (the pivotal voter): A ≻ B ≻ C
Voters in segment two: A ≻ B ≻ C

Then by the argument in part one (and the last observation in that part), the societal outcome must rank A above B. This is because, except for a repositioning of C, this profile is the same as profile k − 1 from part one. Furthermore, by unanimity the societal outcome must rank B above C. Therefore, we know the outcome in this case completely. Now suppose that the pivotal voter moves B above A, but keeps C in the same position, and imagine that any number (even all!) of the other voters change their ballots to move B below C, without changing the position of A. Then aside from a repositioning of C this is the same as profile k from part one, and hence the societal outcome ranks B above A. Furthermore, by IIA the societal outcome must rank A above C, as in the previous case. In particular, the societal outcome ranks B above C, even though the pivotal voter may have been the only voter to rank B above C. By IIA, this conclusion holds independently of how A is positioned on the ballots, so the pivotal voter is a dictator for B over C.
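The profile sequence from part one is straightforward to simulate for a concrete ranked rule. The sketch below uses the Borda count purely as an illustration (it satisfies unanimity but not IIA, so nothing here contradicts the theorem); the seven-voter setup is an arbitrary choice:

# Sketch of the "profile 0 ... profile n" construction from part one,
# using the Borda count purely as an illustration.
def borda_scores(profile, candidates):
    m = len(candidates)
    scores = {c: 0 for c in candidates}
    for ballot in profile:
        for rank, c in enumerate(ballot):
            scores[c] += m - 1 - rank
    return scores

candidates = ["A", "B", "C"]
n = 7
profile = [["A", "C", "B"] for _ in range(n)]   # profile 0: B at the bottom

for k in range(1, n + 1):
    profile[k - 1] = ["B", "A", "C"]            # move B to the top for voter k
    s = borda_scores(profile, candidates)
    if s["B"] > s["A"]:                          # B first overtakes A socially
        print(f"pivotal voter for B over A is voter {k}")
        break
# With 7 voters, Borda makes voter 5 pivotal: B's score 2k first exceeds
# A's score 14 - k at k = 5.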
In the final part of the argument we refer back to the original ordering of voters, and compare the positions of the different pivotal voters (identified by applying parts one and two to the other pairs of candidates). First, the pivotal voter for B over C must appear earlier (or at the same position) in the line than the dictator for B over C: as we consider the argument of part one applied to B and C, successively moving B to the top of voters' ballots, the pivot point where society ranks B above C must come at or before we reach the dictator for B over C. Likewise, reversing the roles of B and C, the pivotal voter for C over B must be at or later in line than the dictator for B over C. In short, if kX/Y denotes the position of the pivotal voter for X over Y (for any two candidates X and Y), then we have shown

kB/C ≤ kB/A ≤ kC/B.

Now repeating the entire argument above with B and C switched, we also have

kC/B ≤ kC/A ≤ kB/C.

Therefore, we have

kB/C = kB/A = kC/B = kC/A,

and the same argument for other pairs shows that all the pivotal voters (and hence all the dictators) occur at the same position in the list of voters. This voter is the dictator for the whole election.

Arrow's impossibility theorem still holds if Pareto efficiency is weakened to the condition of non-imposition (citizen sovereignty), which requires only that every possible societal preference ordering be achievable by some profile of individual ballots.[4]

Arrow's theorem establishes that no ranked voting rule can always satisfy independence of irrelevant alternatives, but it says nothing about the frequency of spoilers. This led Arrow to remark that "Most systems are not going to work badly all of the time. All I proved is that all can work badly at times."[37][38]

Attempts at dealing with the effects of Arrow's theorem take one of two approaches: either accepting his rule and searching for the least spoiler-prone methods, or dropping one or more of his assumptions, such as by focusing on rated voting rules.[30]

The first set of methods studied by economists are the majority-rule, or Condorcet, methods. These rules limit spoilers to situations where majority rule is self-contradictory, called Condorcet cycles, and as a result uniquely minimize the possibility of a spoiler effect among ranked rules. (Indeed, many different social welfare functions can meet Arrow's conditions under such restrictions of the domain. It has been proven, however, that under any such restriction, if there exists any social welfare function that adheres to Arrow's criteria, then a Condorcet method will adhere to Arrow's criteria.[12]) Condorcet believed voting rules should satisfy both independence of irrelevant alternatives and the majority rule principle, i.e. if most voters rank Alice ahead of Bob, Alice should defeat Bob in the election.[31]

Unfortunately, as Condorcet proved, this rule can be intransitive on some preference profiles.[39] Thus, Condorcet proved a weaker form of Arrow's impossibility theorem long before Arrow, under the stronger assumption that a voting system in the two-candidate case will agree with a simple majority vote.[31]

Unlike pluralitarian rules such as ranked-choice runoff (RCV) or first-preference plurality,[9] Condorcet methods avoid the spoiler effect in non-cyclic elections, where candidates can be chosen by majority rule. Political scientists have found such cycles to be fairly rare, suggesting they may be of limited practical concern.[14] Spatial voting models also suggest such paradoxes are likely to be infrequent[40][13] or even non-existent.[15]

Soon after Arrow published his theorem, Duncan Black showed his own remarkable result, the median voter theorem.
The theorem proves that if voters and candidates are arranged on a left-right spectrum, Arrow's conditions are all fully compatible, and all will be met by any rule satisfying Condorcet's majority-rule principle.[15][16] More formally, Black's theorem assumes preferences are single-peaked: a voter's happiness with a candidate goes up and then down as the candidate moves along some spectrum. For example, in a group of friends choosing a volume setting for music, each friend would likely have their own ideal volume; as the volume gets progressively too loud or too quiet, they would be increasingly dissatisfied. If the domain is restricted to profiles where every individual has a single-peaked preference with respect to the linear ordering, then social preferences are acyclic. In this situation, Condorcet methods satisfy a wide variety of highly desirable properties, including being fully spoilerproof.[15][16][12]

The rule does not fully generalize from the political spectrum to the political compass, a result related to the McKelvey-Schofield chaos theorem.[15][41] However, a well-defined Condorcet winner does exist if the distribution of voters is rotationally symmetric or otherwise has a uniquely-defined median.[42][43] In most realistic situations, where voters' opinions follow a roughly normal distribution or can be accurately summarized by one or two dimensions, Condorcet cycles are rare (though not unheard of).[40][11]

The Campbell-Kelly theorem shows that Condorcet methods are the most spoiler-resistant class of ranked voting systems: whenever it is possible for some ranked voting system to avoid a spoiler effect, a Condorcet method will do so.[12] In other words, replacing a ranked method with its Condorcet variant (i.e. elect a Condorcet winner if one exists, and otherwise run the method) will sometimes prevent a spoiler effect, but can never create a new one.[12]

In 1977, Ehud Kalai and Eitan Muller gave a full characterization of domain restrictions admitting a nondictatorial and strategyproof social welfare function. These correspond to preferences for which there is a Condorcet winner.[44]

Holliday and Pacuit devised a voting system that provably minimizes the number of candidates who are capable of spoiling an election, albeit at the cost of occasionally failing vote positivity (though at a much lower rate than seen in instant-runoff voting).[11][clarification needed]

As shown above, the proof of Arrow's theorem relies crucially on the assumption of ranked voting, and is not applicable to rated voting systems. This opens up the possibility of passing all of the criteria given by Arrow. These systems ask voters to rate candidates on a numerical scale (e.g. from 0–10), and then elect the candidate with the highest average (for score voting) or median (graduated majority judgment).[45]: 4–5 

Because Arrow's theorem no longer applies, other results are required to determine whether rated methods are immune to the spoiler effect, and under what circumstances. Intuitively, cardinal information can only lead to such immunity if it is meaningful; simply providing cardinal data is not enough.[46] Some rated systems, such as range voting and majority judgment, pass independence of irrelevant alternatives when the voters rate the candidates on an absolute scale.
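To see why the absolute scale matters, consider this sketch of score voting (the 0-10 scores are invented for the illustration): each candidate's total depends only on that candidate's own ratings, so deleting a candidate never reorders the others.

# Sketch: score voting on an absolute scale satisfies IIA, because each
# candidate's total depends only on that candidate's own ratings.
ballots = [
    {"A": 9, "B": 4, "C": 7},
    {"A": 2, "B": 8, "C": 6},
    {"A": 7, "B": 5, "C": 1},
]

def totals(ballots, candidates):
    return {c: sum(b[c] for b in ballots) for c in candidates}

print(totals(ballots, ["A", "B", "C"]))  # A: 18, B: 17, C: 14 -> A wins
print(totals(ballots, ["A", "B"]))       # without C: A: 18, B: 17 -> A still wins

If voters instead renormalized their scores once C dropped out (a relative scale), the totals for A and B could change, which is the failure mode described next.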
However, when they use relative scales, more general impossibility theorems show that the methods (within that context) still fail IIA.[47] As Arrow later suggested, relative ratings may provide more information than pure rankings,[48][49][50][37][51] but this information does not suffice to render the methods immune to spoilers. While Arrow's theorem does not apply to graded systems, Gibbard's theorem still does: no voting game can be straightforward (i.e. have a single, clear, always-best strategy).[52]

Arrow's framework assumed individual and social preferences are orderings or rankings, i.e. statements about which outcomes are better or worse than others.[53] Taking inspiration from the strict behaviorism popular in psychology, some philosophers and economists rejected the idea of comparing internal human experiences of well-being.[54][30] Such philosophers claimed it was impossible to compare the strength of preferences across people who disagreed; Sen gives as an example that it would be impossible to know whether the Great Fire of Rome was good or bad, because despite killing thousands of Romans, it had the positive effect of letting Nero expand his palace.[50]

Arrow originally agreed with these positions and rejected cardinal utility, leading him to focus his theorem on preference rankings.[54][3] However, he later stated that cardinal methods can provide additional useful information, and that his theorem is not applicable to them. John Harsanyi noted Arrow's theorem could be considered a weaker version of his own theorem[55][failed verification] and other utility representation theorems like the VNM theorem, which generally show that rational behavior requires consistent cardinal utilities.[56]

Behavioral economists have shown individual irrationality involves violations of IIA (e.g. with decoy effects),[57] suggesting human behavior can cause IIA failures even if the voting method itself does not.[58] However, past research has typically found such effects to be fairly small,[59] and such psychological spoilers can appear regardless of electoral system. Balinski and Laraki discuss techniques of ballot design derived from psychometrics that minimize these psychological effects, such as asking voters to give each candidate a verbal grade (e.g. "bad", "neutral", "good", "excellent") and issuing instructions to voters that refer to their ballots as judgments of individual candidates.[45][page needed] Similar techniques are often discussed in the context of contingent valuation.[51]

In addition to the above practical resolutions, there exist unusual (less-than-practical) situations where Arrow's requirement of IIA can be satisfied. Supermajority rules can avoid Arrow's theorem at the cost of being poorly decisive (i.e. frequently failing to return a result). In this case, a threshold that requires a 2/3 majority for ordering 3 outcomes, 3/4 for 4, and so on, does not produce voting paradoxes.[60]
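One way to see why such thresholds suffice is a counting argument, sketched here as an illustration (this derivation is not taken from the cited source):

Suppose, for contradiction, a cycle $x_1 \succ x_2 \succ \cdots \succ x_m \succ x_1$
in which each of the $m$ steps is backed by more than $\tfrac{m-1}{m}$ of the $n$ voters.
Summing over the steps, the total support exceeds
\[
  m \cdot \frac{m-1}{m}\, n \;=\; (m-1)\, n .
\]
But each voter's own ranking is transitive, so no voter can agree with all $m$
steps of the cycle; each agrees with at most $m-1$ of them, and the total support
is therefore at most $(m-1)\,n$, a contradiction. For $m = 3$ this yields the
$2/3$ threshold quoted above.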
In spatial (n-dimensional ideology) models of voting, this can be relaxed to require only 1 − e⁻¹ (roughly 64%) of the vote to prevent cycles, so long as the distribution of voters is well-behaved (quasiconcave).[61] These results provide some justification for the common requirement of a two-thirds majority for constitutional amendments, which is sufficient to prevent cyclic preferences in most situations.[61]

Fishburn shows all of Arrow's conditions can be satisfied for uncountably infinite sets of voters given the axiom of choice;[62] however, Kirman and Sondermann demonstrated this requires disenfranchising almost all members of a society (eligible voters form a set of measure 0), leading them to refer to such societies as "invisible dictatorships".[63]

Arrow's theorem is not related to strategic voting, which does not appear in his framework,[3][1] though the theorem does have important implications for strategic voting (being used as a lemma to prove Gibbard's theorem[26]). The Arrovian framework of social welfare assumes all voter preferences are known and the only issue is in aggregating them.[1]

Monotonicity (called positive association by Arrow) is not a condition of Arrow's theorem.[3] This misconception is caused by a mistake by Arrow himself, who included the axiom in his original statement of the theorem but did not use it.[2] Dropping the assumption does not allow for constructing a social welfare function that meets his other conditions.[3]

Contrary to a common misconception, Arrow's theorem deals with the limited class of ranked-choice voting systems, rather than voting systems as a whole.[1][64] In a later interview, Arrow remarked:

Dr. Arrow: Well, I'm a little inclined to think that score systems where you categorize in maybe three or four classes (in spite of what I said about manipulation) is probably the best. [...] And some of these studies have been made. In France, [Michel] Balinski has done some studies of this kind which seem to give some support to these scoring methods.
https://en.wikipedia.org/wiki/Arrow%27s_impossibility_theorem
An identity document (abbreviated as ID) is a document proving a person's identity. If the identity document is a plastic card it is called an identity card (abbreviated as IC or ID card). When the identity document incorporates a photographic portrait, it is called a photo ID.[1] In some countries, identity documents may be compulsory to have. The identity document is used to connect a person to information about the person, often in a database. The connection between the identity document and database is based on personal information present on the document, such as the bearer's full name, birth date, address, an identification number, card number, gender, citizenship and more. A unique national identification number is the most secure way, but some countries lack such numbers or do not show them on identity documents.

In the absence of an explicit identity document, other documents such as a driver's license may be accepted in many countries for identity verification. Some countries do not accept driver's licenses for identification, often because in those countries they do not expire as documents and can be old or easily forged. Most countries accept passports as a form of identification. Some countries require all people to have an identity document available at all times. Many countries require all foreigners to have a passport or occasionally a national identity card from their home country available at any time if they do not have a residence permit in the country.

A version of the passport considered to be the earliest identity document inscribed into law was introduced by King Henry V of England with the Safe Conducts Act 1414.[2] For the next 500 years, up to the onset of the First World War, most people did not have or need an identity document. Photographic identification appeared in 1876[3] but did not become widely used until the early 20th century, when photographs became part of passports and other ID documents, all of which came to be referred to as "photo IDs" in the late 20th century. Both Australia and Great Britain, for example, introduced the requirement for a photographic passport in 1915 after the so-called Lody spy scandal.[4]

The shape and size of identity cards were standardized in 1985 by ISO/IEC 7810. Some modern identity documents are smart cards that include a difficult-to-forge embedded integrated circuit, standardized in 1988 by ISO/IEC 7816. New technologies allow identity cards to contain biometric information, such as a photograph; face, hand, or iris measurements; or fingerprints. Many countries issue electronic identity cards.

Law enforcement officials claim that identity cards make surveillance and the search for criminals easier, and therefore support the universal adoption of identity cards. In countries that do not have a national identity card, there is concern about the projected costs and potential abuse of high-tech smartcards.

In many countries – especially English-speaking countries such as Australia, Canada, Ireland, New Zealand, the United Kingdom, and the United States – there are no government-issued compulsory identity cards for all citizens. Ireland's Public Services Card is not considered a national identity card by the Department of Employment Affairs and Social Protection (DEASP),[5] but many say it is in fact becoming that, without public debate or even a legislative foundation.[6] There is debate in these countries about whether such cards and their centralised databases constitute an infringement of privacy and civil liberties.
Most criticism is directed towards the possibility of abuse of centralised databases storing sensitive data. A 2006 survey of UK Open University students concluded that the planned compulsory identity card under the Identity Cards Act 2006, coupled with a central government database, generated the most negative response among several options. None of the countries listed above mandate identity documents, but they have de facto equivalents, since these countries still require proof of identity in many situations. For example, all vehicle drivers must have a driving licence, and young people may need to use specially issued "proof of age cards" when purchasing alcohol.

Arguments for identity documents as such:
Arguments for national identity documents:
Arguments against identity documents as such:
Arguments against national identity documents:
Arguments against overuse or abuse of identity documents:

According to Privacy International, as of 1996, possession of identity cards was compulsory in about 100 countries, though what constitutes "compulsory" varies. In some countries, it is compulsory to have an identity card when a person reaches a prescribed age. The penalty for non-possession is usually a fine, but in some cases it may result in detention until identity is established. For people suspected of crimes such as shoplifting or fare evasion, non-possession might result in such detention, even in countries not formally requiring identity cards. In practice, random checks are rare, except in certain situations.

A handful of countries do not issue identity cards. These include Andorra,[14] Australia, the Bahamas,[15] Canada, Nauru, New Zealand, Samoa, Tuvalu and the United Kingdom.[16] Other identity documents such as passports or driver's licenses are then used as identity documents when needed. However, the governments of the Bahamas and Samoa are planning to introduce new national identity cards in the near future.[17][18] Some countries, like Denmark, have simpler official identity cards, which do not match the security and level of acceptance of a national identity card, and which are used by people without driver's licenses.

A number of countries have voluntary identity card schemes. These include Austria, Belize, Finland, France (see France section), Hungary (however, all citizens of Hungary must have at least one of: a valid passport, a photo-based driving licence, or the national ID card), Iceland, Ireland, Norway, Saint Lucia, Sweden, Switzerland and the United States. The United Kingdom's scheme was scrapped in January 2011 and the database was destroyed.

In the United States, the federal government issues optional, non-obligatory identity cards known as "passport cards" (which include important information such as the nationality). The states, on the other hand, issue optional identity cards for people who do not hold a driver's license, as an alternate means of identification. These cards are issued by the same organisation responsible for driver's licenses, usually called the Department of Motor Vehicles. Passport cards hold limited travel status or provision, usually for domestic travel.

For the Sahrawi people of Western Sahara, pre-1975 Spanish identity cards are the main proof that they were Saharawi citizens as opposed to recent Moroccan settlers. They would thus be allowed to vote in an eventual self-determination referendum.

Companies and government departments may issue ID cards for security purposes, proof of identity, or as proof of a qualification (without proving identity).
For example, all taxicab drivers in the UK carry ID cards. Managers, supervisors, and operatives in construction in the UK can get a photographic ID card,[19] the CSCS (Construction Skills Certification Scheme) card, indicating training and skills including safety training. The card is not an identity card or a legal requirement, but enables holders to prove competence without having to provide all the pertinent documents. Those working on UK railway lands near working lines must carry a photographic ID card to indicate training in track safety (PTS and other cards), possession of which is dependent on periodic and random alcohol and drug screening. In Queensland and Western Australia, anyone working with children has to take a background check and be issued a Blue Card or Working with Children Card, respectively.

Cartão Nacional de Identificação (CNI) is the national identity card of Cape Verde.

It is compulsory for all Egyptian citizens aged 16 or older to possess an ID card[20] (Arabic: بطاقة تحقيق شخصية, Biṭāqat taḥqīq shakhṣiyya, literally "Personal Verification Card").[citation needed] In daily colloquial speech, it is generally simply called "el-biṭāqa" ("the card"). It is used for:[citation needed] Egyptian ID cards consist of 14 digits, the national identity number, and expire 7 years after the date of issue. Some feel that Egyptian ID cards are problematic, due to the generally poor quality of card holders' photographs and the compulsory requirements for ID card holders to identify their religion and for married women to include their husband's name on their cards.[citation needed]

All Gambian citizens over 18 years of age are required to hold a Gambian National Identity Card.[citation needed] In July 2009, a new biometric identity card was introduced.[citation needed] The biometric card is one of the acceptable documents required to apply for a Gambian driving licence.[citation needed]

Ghana began issuing a national identity card for Ghanaian citizens in 1973.[21] However, the project was discontinued three years later due to problems with logistics and lack of financial support. This was the first time the idea of a national identification system, in the form of the Ghana Card, arose in the country.[21] Full implementation of the Ghana Card began in 2006.[22] According to the National Identification Authority, over 15 million Ghanaians had been registered for the Ghana Card by September 2020.[23]

Liberia has begun the issuance process of its national biometric identification card, which citizens and foreign residents will use to open bank accounts and participate in other government services on a daily basis. More than 4.5 million people are expected to register and obtain ID cards of citizenship or residence in Liberia. The project has already started, with the NIR (National Identification Registry) issuing citizen national ID cards. The centralized National Biometric Identification System (NBIS) will be integrated with other government ministries. Resident ID cards and ECOWAS ID cards will also be issued.[24]

Mauritius requires all citizens who have reached the age of 18 to apply for a National Identity Card. The National Identity Card is one of the few accepted forms of identification, along with passports. A National Identity Card is needed to apply for a passport for all adults, and all minors must take the National Identity Card of a parent with them when applying for a passport.[25]

Bilhete de identidade (BI) is the national ID card of Mozambique.
Nigeria first introduced a national identity card in 2005, but its adoption at the time was limited. The country is now in the process of introducing a new biometric ID card complete with a SmartCard and other security features. The National Identity Management Commission (NIMC)[26] is the federal government agency responsible for the issuance of these new cards, as well as the management of the new National Identity Database. The Federal Government of Nigeria announced in April 2013[27] that, for all elections after the 2015 general election, citizens will be eligible to stand for office or vote only if they possess a NIMC-issued identity card. The Central Bank of Nigeria is also looking into instructing banks to request a National Identity Number (NIN) from any citizen maintaining an account with any of the banks operating in Nigeria. The proposed kick-off date is yet to be determined.

South African citizens aged 15 years and 6 months or older are eligible for an ID card. The South African identity document is not valid as a travel document or for use outside South Africa. Although carrying the document is not required in daily life, it is necessary to show the document or a certified copy as proof of identity when:

The South African identity document used to also contain driving and firearms licences; however, these documents are now issued separately in card format. In mid-2013 a smart card ID was launched to replace the ID book. The cards were launched on July 18, 2013, when a number of dignitaries received the first cards at a ceremony in Pretoria.[28] The government plans to have the ID books phased out over a six- to eight-year period.[29] The South African government is looking into possibly using this smart card not just as an identification card but also for licences, National Health Insurance, and social grants.[30]

Every citizen of Tunisia is expected to apply for an ID card by the age of 18; however, with the approval of a parent, a Tunisian citizen may apply for, and receive, an ID card before their eighteenth birthday.[citation needed] In 2016, the government introduced a new bill to the parliament to issue new biometric ID documents. The bill has created controversy amid civil society organizations.[31]

Zimbabweans are required to apply for national registration at the age of 16.[citation needed] Zimbabwean citizens are issued with a plastic card which contains a photograph and their particulars. Before the introduction of the plastic card, the Zimbabwean ID card used to be printed on anodised aluminium. Along with driving licences, the National Registration Card (including the old metal type) is universally accepted as proof of identity in Zimbabwe. Zimbabweans are required by law to carry identification on them at all times, and visitors to Zimbabwe are expected to carry their passport with them at all times.[citation needed]

Afghan citizens over the age of 18 are required to carry a national ID document called the Tazkira.

Bahraini citizens must have both an ID card, called a "smart card", which is recognized as an official document and can be used within the Gulf Cooperation Council, and a passport, which is recognized worldwide.[citation needed]

Biometric identification has existed in Bangladesh since 2008.
All Bangladeshis who are 18 years of age and older are included in a central biometric database, which is used by the Bangladesh Election Commission to oversee the electoral procedure in Bangladesh. All Bangladeshis are issued with an NID card which can be used to obtain a passport, driving licence, credit card, and to register land ownership.

The Bhutanese national identity card (called the Bhutanese Citizenship Card) is an electronic ID card, compulsory for all Bhutanese nationals, and costs 100 Bhutanese ngultrum.

The People's Republic of China requires each of its citizens aged 16 and over to carry an identity card. The card is the only acceptable legal document to obtain employment, a residence permit, driving licence or passport, and to open bank accounts or apply for entry to tertiary education and technical colleges.

The Hong Kong Identity Card (or HKID) is an official identity document issued by the Immigration Department of Hong Kong to all people who hold the right of abode, right to land or other forms of limited stay longer than 180 days in Hong Kong. According to the Basic Law of Hong Kong, all permanent residents are eligible to obtain the Hong Kong Permanent Identity Card, which states that the holder has the right of abode in Hong Kong. All persons aged 16 and above must carry a valid legal government identification document in public, and must be able to produce it when requested by legal authorities; otherwise, they may be held in detention while their identity and legal right to be in Hong Kong are investigated.

While there is no mandatory identity card in India, the Aadhaar card, a multi-purpose national identity card carrying 16 personal details and a unique identification number, has been available to all citizens since 2009.[32] The card contains a photograph, full name, date of birth, and a unique, randomly generated 12-digit National Identification Number. However, the card itself is rarely required as proof; the number or a copy of the card is sufficient. The card has a SCOSTA QR code embedded on it, through which all the details on the card are accessible.[33] In addition to Aadhaar, PAN cards, ration cards, voter cards and driving licences are also used. These may be issued by either the government of India or the government of any state, and are valid throughout the nation. The Indian passport may also be used.

Indonesian residents over 17 are required to hold a KTP (Kartu Tanda Penduduk) identity card. The card identifies whether the holder is an Indonesian citizen or a foreign national. In 2011, the Indonesian government started a two-year ID issuance campaign that utilizes smartcard technology and biometric duplication of fingerprints and iris recognition. This card, called the electronic KTP (e-KTP), was to replace the conventional ID card beginning in 2013. It was estimated that approximately 172 million Indonesian nationals would have an e-KTP issued to them by 2013.

Every citizen of Iran has an identification document called a Shenasnameh (Iranian identity booklet) in Persian (شناسنامه). This is a booklet based on the citizen's birth certificate which features their Shenasnameh national ID number, given name, surname, birth date, birthplace, and the names, birth dates and national ID numbers of their legal ascendants.
In other pages of the Shenasnameh, the holder's marriage status, names of spouses, names of children, the date of every vote cast, and eventually their death are recorded.[34]

Every Iranian permanent resident above the age of 15 must hold a valid National Identity Card (Persian: کارت ملی) or at least obtain their unique national number from any of the local Vital Records branches of the Iranian Ministry of Interior.[35] In order to apply for an NID card, the applicant must be at least 15 years old and have a photograph attached to their birth certificate, which is undertaken by the Vital Records branch. Since June 21, 2008, NID cards have been compulsory for many things in Iran and at Iranian missions abroad (e.g., obtaining a passport, driver's license, any banking procedure, etc.).[36]

Every Iraqi citizen must have a National Card (البطاقة الوطنية).

Israeli law requires every permanent resident above the age of 16, whether a citizen or not, to carry an identification card called te'udat zehut (Hebrew: תעודת זהות) in Hebrew or biţāqat huwīya (بطاقة هوية) in Arabic. The card is designed in a bilingual form, printed in Hebrew and Arabic; however, the personal data is presented in Hebrew by default and may be presented in Arabic as well if the owner so decides. The card must be presented to an official on duty (e.g., a policeman) upon request, but if the resident is unable to do this, one may contact the relevant authority within five days to avoid a penalty. Until the mid-1990s, the identification card was considered the only legally reliable document for many actions such as voting or opening a bank account. Since then, the new Israeli driver's licenses, which include photos and extra personal information, are considered equally reliable for most of these transactions. In other situations, any government-issued photo ID, such as a passport or a military ID, may suffice.

Japanese citizens are not required to have identification documents with them within the territory of Japan. When necessary, official documents such as one's Japanese driver's license, individual number card, basic resident registration card,[37] radio operator license,[38] social insurance card, health insurance card or passport are generally used and accepted. On the other hand, mid- to long-term foreign residents are required to carry their Zairyū cards,[39] while short-term visitors and tourists (those with a Temporary Visitor status sticker in their passport) are required to carry their passports.

Since 1994, Kazakhstan has issued a compulsory identity card (Kazakh: Jeke kuälık), with a validity of 10 years, to all its citizens over the age of 16.[40] In order to receive an ID card, a Kazakh citizen must apply to the NJSC State Corporation "Government for Citizens" at their permanent or temporary place of residence.[41] Currently, there is no legislation requiring persons in Kazakhstan to carry their ID cards in public.[42] In addition, ID card documents can be stored digitally on mobile phones thanks to an eGov app launched in November 2019.[43]

The Kuwaiti identity card is issued to Kuwaiti citizens. It can be used as a travel document when visiting countries in the Gulf Cooperation Council.

The first post-Soviet Kyrgyz identity document was regulated in government resolution No. 775 of October 17, 1994, "on the approval of the regulations on the passport system of the Kyrgyz Republic", which included a sample passport and its description.[44]
According to Resolution No. 598 of the Government of the Kyrgyz Republic, dated November 18, 2016, 1994-model passports with a mark extending their validity "indefinitely" completely lost their legal force from April 1, 2017 and were recognized as invalid.[45]

The Macau Resident Identity Card is an official identity document issued by the Identification Department to permanent residents and non-permanent residents.

In Malaysia, the MyKad is the compulsory identity document for Malaysian citizens aged 12 and above. Introduced by the National Registration Department of Malaysia on September 5, 2001, as one of four MSC Malaysia flagship applications[46] and a replacement for the High Quality Identity Card (Kad Pengenalan Bermutu Tinggi), Malaysia became the first country in the world to use an identification card that incorporates both photo identification and fingerprint biometric data on an in-built computer chip embedded in a piece of plastic.[47]

Myanmar citizens are required to obtain a National Registration Card (NRC), while non-citizens are given a Foreign Registration Card (FRC). In Nepal, new biometric cards, displaying information in both English and Nepali, rolled out in 2018.[48][49]

In Pakistan, all adult citizens must register for the Computerized National Identity Card (CNIC), with a unique number, at age 18. The CNIC serves as an identification document to authenticate an individual's identity as a citizen of Pakistan. Earlier on, National Identity Cards (NICs) were issued to citizens of Pakistan. The government has since shifted all its existing records of National Identity Cards (NIC) to the central computerized database managed by NADRA. New CNICs are machine-readable and have security features such as facial and fingerprint information. At the end of 2013, smart national identity cards (SNICs) were also made available.

The Palestinian Authority issues identification cards following agreements with Israel. Since 1995, in accordance with the Oslo Accords, the data is forwarded to Israeli databases and verified.[citation needed] In February 2014, a presidential decision issued by Palestinian president Mahmoud Abbas to abolish the religion field was announced.[50] Israel has objected to abolishing religion on Palestinian IDs because it controls their official records, IDs and passports, and the PA does not have the right to make amendments to this effect without the prior approval of Israel. The Palestinian Authority in Ramallah said that abolishing religion on the ID has been at the center of negotiations with Israel since 1995. The decision was criticized by Hamas officials in the Gaza Strip, saying it is unconstitutional and will not be implemented in Gaza because it undermines the Palestinian cause.[51]

A new Philippine identity card known as the Philippine Identification System (PhilSys) ID card began to be issued in August 2018 to Filipino citizens and foreign residents aged 18 and above. This national ID card is non-compulsory, but should harmonize the existing government-initiated identification cards that have been issued – including the Unified Multi-Purpose ID issued to members of the Social Security System, Government Service Insurance System, Philippine Health Insurance Corporation and the Home Development Mutual Fund (Pag-IBIG Fund).

In Singapore, every citizen and permanent resident (PR) must register at the age of 15 for an Identity Card (IC). The card is necessary not only for procedures of state but also in day-to-day transactions such as registering for a mobile phone line, obtaining certain discounts at stores, and logging on to certain websites on the internet.
Schools frequently use it to identify students, both online and in exams.[52]

Every citizen of South Korea over the age of 17 is issued an ID card called a Jumindeungrokjeung (주민등록증). It has had several changes in its history, the most recent form being a plastic card meeting the ISO 7810 standard. The card has the holder's photo and a 15-digit ID number calculated from the holder's birthday and birthplace. A hologram is applied for the purpose of hampering forgery. This card has no additional features used to identify the holder, save the photo. Other than this card, the South Korean government accepts a Korean driver's license card, an Alien Registration Card, a passport and a public officer ID card as official ID.

The e-National Identity Card (abbreviated E-NIC) is the identity document in use in Sri Lanka. It is compulsory for all Sri Lankan citizens who are sixteen years of age and older to have a NIC. NICs are issued by the Department for Registration of Persons. The Registration of Persons Act No. 32 of 1968, as amended by Act Nos. 28 and 37 of 1971 and Act No. 11 of 1981, legislates the issuance and usage of NICs. Sri Lanka is in the process of developing a smart-card-based RFID NIC card which will replace the obsolete 'laminated type' cards, storing the holder's information on a chip that can be read by banks, offices, etc., thereby reducing the need to keep physical documentation of these data by storing them in the cloud. The NIC number is used for unique personal identification, similar to the social security number in the US.

In Sri Lanka, all citizens over the age of 16 need to apply for a National Identity Card (NIC). Each NIC has a unique 10-digit number, in the format 000000000A (where 0 is a digit and A is a letter). The first two digits of the number are the holder's year of birth (e.g., 93xxxxxxxx for someone born in 1993). The final letter is generally a 'V' or 'X' (see the parsing sketch below). An NIC number is required to apply for a passport (over 16), driving license (over 18) and to vote (over 18). In addition, all citizens are required to carry their NIC on them at all times as proof of identity, given the security situation in the country.[citation needed] NICs are not issued to non-citizens, who are still required to carry a form of photo identification (such as a photocopy of their passport or foreign driving license) at all times. At times the postal ID card may also be used.

The "National Identification Card" (Chinese: 國民身分證) is issued to all nationals of the Republic of China (the official name of Taiwan) aged 14 and older who have household registration in the Taiwan area. The Identification Card is used for virtually all activities that require identity verification within Taiwan, such as opening bank accounts, renting apartments, employment applications and voting. The Identification Card contains the holder's photo, ID number, Chinese name, and (Minguo calendar) date of birth. The back of the card also contains the person's registered address, where official correspondence is sent, place of birth, and the name of legal ascendants and spouse (if any). If residents move, they must re-register at a municipal office (Chinese: 戶政事務所). ROC nationals with household registration in Taiwan are known as "registered nationals".
ROC nationals who do not have household registration in Taiwan (known as "unregistered nationals") do not qualify for the Identification Card and its associated privileges (e.g., the right to vote and the right of abode in Taiwan), but qualify for the Republic of China passport, which, unlike the Identification Card, is not indicative of residency rights in Taiwan. If such "unregistered nationals" are residents of Taiwan, they will hold a Taiwan Area Resident Certificate as an identity document, which is nearly identical to the Alien Resident Certificate issued to foreign nationals/citizens residing in Taiwan.

In 1994, the first post-Soviet Tajik internal passport appeared, which was filled in manually. Neither the number nor the series of the document was printed on the inner pages, and the owner's photo was easy to re-stick. This document turned out to be the most convenient for falsification, hence such passports were in great demand among citizens of neighbouring countries hiding from justice.[45]

In Thailand, the Thai National ID Card (Thai: บัตรประจำตัวประชาชน; RTGS: bat pracham tua pracha chon) is an official identity document issued only to Thai nationals. The card proves the holder's identity for receiving government services and other entitlements.

Following the dissolution of the Soviet Union and the establishment of independent Turkmenistan, blank passports of citizens of the USSR of the 1974 model and foreign passports of citizens of the USSR were used in Turkmenistan both as internal identity documents and passports, in which the stamp "Citizen of Turkmenistan" was placed. The unified national passport system was introduced in Turkmenistan on October 25, 1996 by the Decree of the President "On Approval of the Regulations on the Passport System in Turkmenistan".[53] According to the approved regulations, the exchange and issuance of national passports of a citizen of Turkmenistan was to be carried out in the period from October 25, 1996 to December 31, 2001. The new document kept its dual-purpose role as internal identity document and passport. Following the introduction of a Turkmen biometric passport in July 2008, to be used as a travel document, a separate internal passport was issued.

The Federal Authority for Identity and Citizenship is a government agency that is responsible for issuing national identity cards for citizens (UAE nationals), Gulf Cooperation Council (GCC) nationals, and residents in the country. All individuals are mandated to apply for the ID card at all ages. For individuals 15 years old and above, fingerprint biometrics (10 fingerprints, palm, and writer's palm) are captured in the registration process. Each person has a unique 15-digit identification number (IDN) that they hold throughout their life. The identity card is a smart card with state-of-the-art technology in the smart-card field, with very high security features which make it difficult to duplicate. It is a 144KB combi smart card, where the electronic chip includes personal information, 2 fingerprints, a 4-digit PIN code, a digital signature, and certificates (digital and encryption). Personal photo, IDN, name, date of birth, signature, nationality, and the ID card expiry date are fields visible on the physical card. In the UAE it is used as an official identification document for all individuals, to benefit from services in government, some non-government, and private entities in the UAE.
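As promised above, here is a minimal parsing sketch for the old-format Sri Lankan NIC number (ten characters: nine digits and a letter, with the first two digits giving the year of birth); the field names and the lack of century handling are illustrative assumptions, covering only what the text describes:

# Sketch: parse the old-format Sri Lankan NIC number described above.
import re

def parse_nic(nic: str) -> dict:
    m = re.fullmatch(r"(\d{2})(\d{7})([VX])", nic.upper())
    if not m:
        raise ValueError("not an old-format NIC number")
    yy, serial, letter = m.groups()
    return {
        "birth_year_digits": yy,  # e.g. '93' for someone born in 1993
        "serial": serial,         # remaining seven digits (structure unspecified here)
        "letter": letter,         # generally 'V' or 'X'
    }

print(parse_nic("931234567V"))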
This supports the UAE's vision of smart government, as the ID card is used to securely access e-services in the country. The ID card can also be used by citizens as an official travel document between GCC countries instead of a passport. The implementation of the national ID program in the UAE enhanced the security of individuals by protecting their identities and preventing identity theft.[54]

Following the dissolution of the Soviet Union, the Uzbek passport was also used as an internal identity document. In September 2020, President of Uzbekistan Shavkat Mirziyoyev signed a decree "On measures to introduce ID cards in the Republic of Uzbekistan". According to the document, from January 1, 2021, a unified personal identification system will be introduced in the country, which provides for a gradual replacement of biometric passports with ID cards containing an electronic data carrier (chip) by 2030. This will also allow citizens to use government services. It is expected that the document processing period will be 1 day.[55] Until December 31, 2022, ID cards were issued voluntarily to persons who had reached the age of 16, as well as in case of loss of a passport, a desire to change one's full name or nationality, and for other reasons specified in the legislation. From January 1, 2023 to December 31, 2030, the exchange of biometric passports for ID cards is mandatory as they expire.

In Vietnam, all citizens above 14 years old must possess an identification card provided by the local authority, which must be reissued when the holder reaches 25, 40 and 60 years of age. Children from 6 to under 14 years old can request one if needed. Formerly a people's ID document was used.[56]

National identity cards issued to citizens of the European Union and European Free Trade Association (Iceland, Liechtenstein, Norway, and Switzerland) that state the bearer's citizenship as belonging to an EU/EFTA member can be used as identity documents within the home country, and as travel documents to exercise the right of free movement in the EU or EFTA.[57][58][59]

During the UK Presidency of the EU in 2005, a decision was made to: "Agree common standards for security features and secure issuing procedures for ID cards (December 2005), with detailed standards agreed as soon as possible thereafter. In this respect, the UK Presidency put forward a proposal for the EU-wide use of biometrics in national identity cards".[60] From August 2, 2021, the European identity card[61][62] is intended to replace and standardize the various identity card styles currently in use.[a][64][65]

The Austrian identity card is issued to Austrian citizens. It can be used as a travel document when visiting countries in the EU/EFTA, Albania, Andorra, Bosnia and Herzegovina, Georgia, Kosovo, Moldova, Monaco, Montenegro, North Macedonia, San Marino, Serbia, Vatican City, the French overseas territories and the British Crown Possessions, as well as on organized tours to Jordan (through Aqaba airport) and Tunisia. Only around 10% of the citizens of Austria had this card in 2012, as they can use Austrian driver's licenses or other identity cards domestically and the more widely accepted Austrian passport abroad.

In Belgium, everyone above the age of 12 is issued an identity card (carte d'identité in French, identiteitskaart in Dutch and Personalausweis in German), and from the age of 15 carrying this card at all times is mandatory.
For foreigners residing in Belgium, similar cards (foreigner's cards, vreemdelingenkaart in Dutch, carte pour étrangers in French) are issued, although they may also carry a passport, a work permit, or a (temporary) residence permit. Since 2000, all newly issued Belgian identity cards have a chip (the eID card), and the roll-out of these cards was expected to be complete in the course of 2009. Since 2008, the aforementioned foreigner's card has also been replaced by an eID card, containing a similar chip. The eID cards can be used both in the public and private sectors for identification and for the creation of legally binding electronic signatures. Until the end of 2010, Belgian consulates issued old-style ID cards (105 × 75 mm) to Belgian citizens who were permanently residing in their jurisdiction and who chose to be registered at the consulate (which is strongly advised). Since 2011, Belgian consulates issue electronic ID cards, on which the electronic chip is, however, not activated.

In Bulgaria, it is obligatory to possess an identity card (Bulgarian: лична карта, lichna karta) at the age of 14 and above. Any person above 14 being checked by the police without carrying at least some form of identification is liable to a fine of 50 Bulgarian levs (about €25).

All Croatian citizens may request an identity card, called an Osobna iskaznica (literally "personal card"). All persons over the age of 18 must have an identity card and carry it at all times. Refusal to carry or produce an identity card to a police officer can lead to a fine of 100 kuna or more[needs update] and detention until the individual's identity can be verified by fingerprints. The Croatian ID card is valid in the entire European Union, and can also be used to travel throughout the non-EU countries of the Balkans. The 2013 design of the Croatian ID card is prepared for future installation of an electronic identity card chip, which was set for implementation in 2014.[66]

In Cyprus, the acquisition and possession of a civil identity card is compulsory for any eligible person who has reached twelve years of age. On January 29, 2015, it was announced that all future IDs to be issued will be biometric.[67] They can be applied for at Citizen Service Centres (KEP) or at consulates with biometric data capturing facilities. An ID card costs €30 for adults and €20 for children, with 10 and 5 years' validity respectively. It is a valid travel document for the entire European Union.

In the Czech Republic, an ID card with a photo, called an Občanský průkaz, is issued to all citizens at the age of 15. It is officially recognised by all member states of the European Union for intra-EU travel. Travelling outside the EU mostly requires the Czech passport.

Denmark is the only EU/EEA country that does not issue EU-standard national identity cards or travel documents in a card format. The most common identity documents in Denmark are driving licences and passports, containing both the personal identification number and a photo. Identity documents are not mandatory in Denmark. For those who do not have a passport or driving licence, Danish identification cards (Danish: legitimationskort) are issued by municipalities. Each municipality has its own design, and they are not accepted as valid travel documents outside Denmark.
They were launched in 2017, replacing the previous "youth cards".[68] Since 2018, information about the nationality of the cardholder has been included, which briefly allowed the card to be used for travel to Sweden.[69] However, in September 2019, Swedish authorities explicitly banned Danish municipal identity cards from being used for entry, for security reasons.[70] In 2021, the Danish Ministry of Interior concluded that more secure ID cards were not on the agenda due to prohibitive costs.[71]

Previously, personal identification number certificates (Danish: Personnummerbevis) were optionally issued in Denmark, but they have been largely replaced by the National Health Insurance Card (Danish: Sundhedskortet), which contains the same information together with health insurance information. The National Health Insurance Card is issued to all health-insured residents in Denmark. It was commonly used as a de facto identity document despite the fact that it has no photo of the holder. Until 2004, the national debit card Dankort contained a photo of the holder and was widely accepted as identification, until Danish banks lobbied successfully to have pictures removed from debit cards. Between 2004 and 2016, municipalities issued a "photo identity card" or "youth card" (Danish: billedlegitimationskort), but it was limited to proof-of-age verification.

The Estonian identity card (Estonian: ID-kaart) is a chipped picture ID in the Republic of Estonia. An Estonian identity card is officially recognised by all member states of the European Union for intra-EU travel. For travelling outside the EU, Estonian citizens may also require a passport. The card's chip stores a key pair, allowing users to cryptographically sign digital documents based on principles of public-key cryptography using DigiDoc. Under Estonian law, since December 15, 2000, such a cryptographic signature has been legally equivalent to a manual signature. The Estonian identity card is also used for authentication in Estonia's ambitious Internet-based voting programme. In February 2007, Estonia was the first country in the world to institute electronic voting for parliamentary elections; over 30,000 voters participated in the country's first e-election. By the 2014 European Parliament elections, the number of e-voters had increased to more than 100,000, comprising 31% of the total votes cast.[72]

In Finland, any citizen can obtain an identification card (henkilökortti/identitetskort). This, along with the passport, is one of two official identity documents. It is available as an electronic ID card (sähköinen henkilökortti/elektroniskt identitetskort), which enables logging into certain government services on the Internet. Driving licences and KELA (social security) cards with a photo are also widely used for general identification purposes, even though they are not officially recognized as such. However, KELA has ended the practice of issuing social security cards with a photograph of the bearer, while it has become possible to embed the social security information in the national ID card. For most purposes when identification is required, the only valid documents are an ID card, a passport or a driving licence. However, a citizen is not required to carry any of these.

France has had a national ID card for all citizens since the beginning of World War II in 1940. Compulsory identity documents had been created earlier: for workers from 1803 to 1890, for nomads (gens du voyage) in 1912, and for foreigners in 1917 during World War I.
National identity cards were first issued as the carte d'identité française under the law of October 27, 1940, and were compulsory for everyone over the age of 16. Identity cards were valid for 10 years, had to be updated within a year in case of a change of residence, and their renewal required paying a fee. Under the Vichy regime, in addition to the face photograph, the family name, first names, and date and place of birth, the card included the national identity number managed by the national statistics institute INSEE, which is also used as the national service registration number, as the Social Security account number for health and retirement benefits, for access to court files and for tax purposes.

Under decree 55-1397 of October 22, 1955,[73][74] a revised, non-compulsory card, the carte nationale d'identité (CNI), was introduced.

The law (Art. 78-1 to 78-6 of the French code of criminal procedure, Code de procédure pénale)[75] mentions only that during an ID check performed by a police, gendarmerie or customs officer, one can prove one's identity "by any means", the validity of which is left to the judgment of the law enforcement official. Though not stated explicitly in the law, an ID card, a driving licence, a passport, a visa, a carte de séjour or a voting card is sufficient according to jurisprudence. The decision to accept other documents, with or without the bearer's photograph, such as a Social Security card, a travel card or a bank card, is left to the discretion of the law enforcement officer. According to Art. 78-2 of the French Code of Criminal Procedure, ID checks are only possible in a limited number of enumerated cases.[76]

The last of these cases allows the police to check the ID of passers-by, especially in neighbourhoods with a higher crime rate, which are often the poorest, on the condition, according to the Cour de cassation, that the police officer does not refer only to "general and abstract conditions" but to "particular circumstances able to characterise a risk of breach of public order and in particular an offence against the safety of persons or property" (Cass. crim. December 5, 1999, n°99-81153, Bull., n°95). If it is necessary to establish one's identity and one is unable to prove it "by any means" (for example, the legality of a road traffic procès-verbal depends on it), this may lead to a temporary arrest (vérification d'identité) of up to 4 hours, for the time strictly required to ascertain one's identity, according to Art. 78-3 of the French Code of Criminal Procedure (Code de procédure pénale).[75]

For financial transactions, ID cards and passports are almost always accepted as proof of identity. Due to possible forgery, driving licences are sometimes refused. For transactions by cheque involving a larger sum, two different ID documents are frequently requested by merchants.

The current identification cards are issued free of charge and are optional; they are valid for ten years for minors and fifteen for adults.[80] The current government has proposed a compulsory biometric card system, which has been opposed by human rights groups and by the national authority and regulator on computing systems and databases, the Commission nationale de l'informatique et des libertés (CNIL). Another non-compulsory project is being discussed.

It is compulsory for all German citizens aged 16 or older to possess either a Personalausweis (identity card) or a passport, but not to carry one.
Police officers and other officials have the right to demand to see one of those documents (an obligation of identification); however, the law does not state that one is obliged to submit the document at that very moment. But as driving licences, although sometimes accepted, are not legally accepted forms of identification in Germany, people usually choose to carry their Personalausweis with them.

Beginning in November 2010, German ID cards have been issued in the ID-1 format and contain an integrated digital signature. The cards have a photograph and a chip with biometric data, including two fingerprints, which are now mandatory. Until October 2010, German ID cards were issued in the ISO/IEC 7810 ID-2 format. On November 1, 2019, German ID cards underwent minor textual adjustments concerning the information field on the surname and surname at birth. On August 2, 2021, German ID cards were adapted to Regulation (EU) 2019/1157. The changes include the country code "DE" being shown in white in the blue European flag on the front and the two fingerprints (stored as an encrypted image file) becoming mandatory. In addition, the version number was added to the machine-readable zone. On May 2, 2024, the doctoral title was moved to the back of the identity card.

A compulsory, universal ID system based on personal ID cards has been in place in Greece since World War II. ID cards are issued by the police on behalf of the ministry responsible for the Headquarters of the Hellenic Police (the Ministry of Public Order, the Ministry of Citizen Protection or the Ministry of the Interior at various times) and display the holder's signature, standardized face photograph, name and surname, names and surnames of legal ascendants, date and place of birth, height, municipality, and the issuing police precinct. There are also two optional fields designed to facilitate emergency medical care: ABO and Rhesus factor blood typing. Fields included in previous ID card formats, such as vocation or profession, religious denomination, domiciliary address, name and surname of spouse, fingerprint, eye and hair color, citizenship and ethnicity, were removed permanently as being intrusive on personal data and/or superfluous for the sole purpose of personal identification. Since 2000, name fields have been filled in both Greek and Latin characters.

According to the Signpost Service of the European Commission [reply to Enquiry 36581], old-type Greek ID cards "are as valid as the new type according to Greek law and thus they constitute valid travel documents that all other EU Member States are obliged to accept". In addition to being equivalent to passports within the EU and EFTA, Greek ID cards are the principal means of identification of voters during elections.

Since 2005, the procedure to issue an ID card has been automated, and all citizens over 12 years of age must now have an ID card, which is issued within one working day.[citation needed] Prior to that date, the age of compulsory issue was 14 and the whole procedure could last several months. In Greece, an ID card is a citizen's most important state document[original research?]: for instance, it is required to perform banking transactions if the teller personnel are unfamiliar with the apparent account holder, to interact with the Citizen Service Bureaus (KEP),[81] and to receive parcels or registered mail. Citizens are also required to produce their ID card at the request of law enforcement personnel.
All the above functions can also be fulfilled with a valid Greek passport (e.g., for people who have lost their ID card and have not yet applied for a new one, people who happen to carry their passport instead of their ID card, or Greeks who reside abroad and do not have an identity card, which can be issued only in Greece, in contrast to passports, which are also issued by consular authorities abroad).

Currently, there are three types of valid ID document (Személyazonosító igazolvány, née Személyi igazolvány, abbr. Sz.ig.) in Hungary. The oldest valid ones are hard-covered, multi-page booklets issued before 1989 by the People's Republic of Hungary; the second type is a soft-cover, multi-page booklet issued after the change of regime. These two have one original photo of the owner embedded, with the original signatures of the owner and of the local police's representative. The third type is a plastic card with the photo and the signature of the holder digitally reproduced. These are generally called the Personal Identity Card. The plastic card shows the owner's full name, maiden name if applicable, birth date and place, mother's maiden name, the cardholder's gender, the ID's validity period and the local state authority which issued the card. The card has a unique ID of six digits plus two letters and a separate machine-readable zone on the back for identity document scanning devices. It carries no information about the owner's residential address or personal identity number; this sensitive information is held on a separate card, called a Residency Card (Lakcímkártya). Personal identity numbers have been issued since 1975; they have the following numeric format: gender (1 digit), birth date (6 digits), unique ID (4 digits). They are no longer used as a personal identification number, but as a statistical signature. Other valid documents are the passport (blue colored, or red colored with an RFID chip) and the driver's license; an individual is required to have at least one of them on hand at all times. The Personal Identity Card is mandatory for voting in state elections and for opening a bank account in the country. ID cards are also issued to permanent residents of Hungary; the card has a different color for foreign citizens.[citation needed]

Icelandic state-issued identity cards are called "Nafnskírteini" (lit. 'name certificate').[82] The ID cards are voluntary, conform to biometric ICAO and EU standards, and can be used as a travel document in the EU/EFTA and the Nordic countries.[83] Identity documents are not mandatory to carry or own by law (unless driving a car), but can be needed for bank services, age verification and other situations. Most people (91%) have driving licences for day-to-day use.[84]

Ireland does not issue mandatory national identity cards as such. Except for a brief period during the Second World War, when the Irish Department of External Affairs issued identity cards to those wishing to travel to the United Kingdom,[85] Ireland has never issued national identity cards. Identity documentation is optional for Irish and British citizens. Nevertheless, identification is mandatory to obtain certain services, such as air travel, banking, interactions regarding welfare and public services, and age verification.
"Non-nationals" aged 16 years and over must produce identification on demand to any immigration officer or a member of theGarda Síochána(police).[86] Passport booklets, passport cards, driving licences, GNIB Registration Certificates[87]and other forms of identity cards can be used for identification. Ireland has issued optionalpassport cardssince October 2015.[88]The cards are the size of a credit card and have all the information from the biographical page of an Irish passport booklet and can be used explicitly for travel in the EU and EFTA. Ireland issues a "Public Services Card" which is useful when identification is needed for contacts regarding welfare and public services. They have photographs but not birth dates and are therefore not accepted by banks. The card is also not considered as being an identity card by the Department of Employment Affairs and Social Protection (DEASP). In anOireachtas(parliament) committee hearing held on February 22, 2018, Tim Duggan of that department stated "A national ID card is an entirely different idea. People are generally compelled to carry (such a card)."[5] Anyone who is legally resident in Italy, whether a citizen or not, is entitled to request an identity card at the local municipality.[89]However, only Italian citizens can use it as a travel document in lieu of a passport, and get it on a consulate/embassy.[90] It is valid in all Europe (except in Belarus, Russia, Ukraine and the UK) and to travel to Turkey, Georgia, Egypt and Tunisia.[91] The Italian citizen is not legally required to carry an identification document, as they have the right to identify themselves verbally. However, if they are asked to present it by law enforcement and have it with them at that moment, they must show it to avoid committing an offense.[92][93]If public-security officers are not convinced of the claimed identity, such as may be the case for a verbally provided identity claim, they may keep the claimant in custody until his/her identity is ascertained;[94]such an arrest is limited to the time necessary for identification and has no legal consequence. Instead, all foreigners in Italy are required by law to have an ID with them at all times.[95]Citizens of EU member countries must be always ready to display an identity document that is legally government-issued in their country. Non-EU residents must have their passport with customs entrance stamp or a residence permit issued by Italian authorities; while all resident/immigrant aliens must have a residence permit (they are otherwise illegal and face deportation), foreigners from certain non-EU countries staying in Italy for a limited amount of time (typically for tourism) may be only required to have their passport with a proper customs stamp. The current Italian identity document is acontactlesselectronic card made ofpolycarbonatein theID-1format with many security features and containing the following items printed bylaser engraving:[96] Moreover, the embedded electronicmicroprocessorchip stores the holder's picture, name, surname, place and date of birth, residency and (only if aged 12 and more) two fingerprints.[97] The card is integrated into the ItalianSSOinfrastructure, theSPIDand permits the holder to use theNFCchip of the card as a login for that service. 
The card is issued by the Ministry of the Interior in collaboration with the IPZS in Rome and sent to the applicant within 6 business days.[98] The validity is 10 years for adults, 5 years for minors aged 3-18 and 3 years for children aged 0-3,[89] and it is extended or shortened so that it always expires on a birthday.[99] However, the old classic Italian ID card is still valid and has been in the process of being replaced by the new eID card since July 4, 2016,[100] because the lack of a machine-readable zone, the odd size, and the fact that it is made of paper and is therefore easy to forge often cause delays at border controls; furthermore, foreign countries outside the EU sometimes refuse to accept it as a valid document. These common criticisms were considered in the development of the new Italian electronic identity card, which is in the more common credit-card format and carries many of the latest security features available.

The Latvian "Personal certificate" is issued to Latvian citizens and is valid for travel within Europe (except Belarus, Russia, Ukraine and the UK), as well as to Georgia, the French overseas territories and Montserrat (max. 14 days).

The Principality of Liechtenstein has a voluntary ID card system for citizens, the Identitätskarte. Liechtenstein citizens are entitled to use a valid national identity card to exercise their right of free movement in the EU and EFTA.[58][57][101]

The Lithuanian Personal Identity Card can be used as primary evidence of Lithuanian citizenship, just like a passport, and can also be used as proof of identity both inside and outside Lithuania. It is valid for travel within most European nations.

The Luxembourgish identity card is issued to Luxembourgish citizens. It serves as proof of identity and nationality and can also be used for travel within the European Union and a number of other European countries.

Maltese identity cards are issued to Maltese citizens and other lawful residents of Malta. They can be used as a travel document when visiting countries in the European Union and the European Free Trade Association.

Dutch citizens from the age of 14 are required to be able to show a valid identity document upon request by a police officer or similar official. Furthermore, identity documents are required when opening bank accounts and upon starting work for a new employer. Several categories of document serve as official identity documents for residents in the Netherlands. For the purpose of identification in public (but not for other purposes), a Dutch driving licence may also often serve as an identity document. In the Caribbean Netherlands, Dutch and other EU/EFTA identity cards are not valid, and the Identity card BES is an obligatory document for all residents.

In Norway there is no law penalising non-possession of an identity document, but there are rules requiring one for services like banking, air travel and voting (where personal recognition or other identification methods have not been possible). The following documents are generally considered valid (with some variation, since no law lists them):[102] a Nordic driving licence, a passport (often only from the EU and EFTA), a national ID card from the EU, a Norwegian ID card issued by banks, and some others. Bank ID cards are printed on the reverse of Norwegian debit cards. To get a bank ID card, either a Nordic passport or another passport together with a Norwegian residence and work permit is needed.
The Norwegian identity card was introduced on November 30, 2020.[103][104] Two versions of the card exist: one that states Norwegian citizenship and is usable for exercising freedom of movement within the EU and EFTA,[58][57][101] and another for general identification.[105] The plan started in 2007 and was delayed several times.[106] Banks had campaigned to be freed from the task of issuing ID cards, stating that it was supposed to be the responsibility of state authorities.[107] Some banks ceased issuing ID cards, so people had to carry their passport for credit card purchases or for buying prescribed medication if not in possession of a driving licence.[108][109] Foreign citizens resident in Norway are not allowed to get the Norwegian identity card. When banks stopped issuing the cards and it was suggested that citizens get a national identity card, foreign citizens who did not have a driving licence or a homeland passport were left outside the system. Therefore, as of 2022 there are plans to issue a version of the Norwegian identity card for foreign citizens.[110]

In 2020, a digital ID document was introduced in Norway.[111] It requires a phone app and is useful for age checks, picking up postal packages and other tasks. To activate it, a passport or national ID card is needed. Many young people targeted for alcohol age checks or without a driving licence tend to use it, but so do some older citizens.[112]

Every Polish citizen 18 years of age or older residing permanently in Poland must have an identity card (dowód osobisty) issued by the local Office of Civic Affairs. Polish citizens living permanently abroad are entitled, but not required, to have one.

All Portuguese citizens are required by law to obtain an identity card when they turn 6 years of age. They are not required to carry it at all times, but are obliged to present it to the authorities if requested. The old format of the card (a yellow laminated paper document) featured a portrait of the bearer, their fingerprint and the names of their parents, among other information. These are currently being replaced by grey plastic cards with a chip, called the Cartão de Cidadão (Citizen's Card), which incorporates the NIF (tax number), the Cartão de Utente (health card) and Social Security, all of which are protected by a PIN obtained when the card is issued. The new Citizen's Card is technologically more advanced than the former identity card.

Every citizen of Romania must register for an ID card (carte de identitate, abbreviated CI) at the age of 14. The CI offers proof of the identity, address, sex and other data of its possessor, and it has to be renewed every 10 years. It can be used instead of a passport for travel inside the European Union and to several other countries outside the EU. Another ID document is the Provisional ID Card (cartea de identitate provizorie), issued temporarily when an individual cannot get a normal ID card; its validity extends for up to one year, and unlike the normal ID card it cannot be used for travel within the EU. Other forms of officially accepted identification include the driver's license and the birth certificate. However, these are accepted only in limited circumstances and cannot take the place of the ID card in most cases. The ID card is mandatory for dealing with government institutions, banks or currency exchange shops. A valid passport may also be accepted, but usually only for foreigners.

In addition, citizens can be expected to provide the personal identification number (CNP) in many circumstances; purposes range from simple unique identification and internal book-keeping (for example, when drawing up the papers for the warranty of purchased goods) to being asked for identification by the police. The CNP is 13 digits long, with the format S-YY-MM-DD-RR-XXX-Y, where S is the sex, YY the year of birth, MM the month of birth, DD the day of birth, RR a regional id, XXX a unique random number and Y a control digit.
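Because every CNP field sits at a fixed offset, the number can be split mechanically. The following Python sketch illustrates this using only the layout described above; the function name and the sample value are invented for the example, and the control digit is not verified, since its computation rule is not given here.

    # Field layout of the Romanian CNP as described above:
    # S YY MM DD RR XXX Y (13 digits in total).
    def parse_cnp(cnp: str) -> dict:
        """Split a 13-digit CNP into its named fields."""
        if len(cnp) != 13 or not cnp.isdigit():
            raise ValueError("a CNP is exactly 13 digits")
        return {
            "sex": cnp[0],            # S: sex code
            "birth_year": cnp[1:3],   # YY: year of birth
            "birth_month": cnp[3:5],  # MM: month of birth
            "birth_day": cnp[5:7],    # DD: day of birth
            "region": cnp[7:9],       # RR: regional id
            "serial": cnp[9:12],      # XXX: unique random number
            "control": cnp[12],       # Y: control digit (not verified here)
        }

    print(parse_cnp("1960101223344"))  # synthetic example value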
Presenting the ID card is preferred but not mandatory when asked by police officers; however, in such cases people are expected to provide the CNP or an alternative means of identification which can be checked on the spot (via radio if needed). The information on the ID card is required to be kept updated by the owner, the current address of domicile in particular. Doing otherwise can expose the citizen to fines or to being denied service by those institutions that require a valid, up-to-date card. In spite of this, it is common for people to let the information lapse or to go around with expired ID cards.

The Slovak ID card (Slovak: Občiansky preukaz) is a picture ID in Slovakia, issued to citizens of the Slovak Republic who are 15 or older. A Slovak ID card is officially recognised by all member states of the EU/EFTA for travel. For travel outside the EU, Slovak citizens may also require the Slovak passport, which is a legally accepted form of picture ID as well. Police officers and some[who?] other officials have the right to demand to see one of those documents, and the law states that one is obliged to submit such a document at that very moment. If one fails to comply, law enforcement officers are allowed to insist on personal identification at the police station.

Every Slovenian citizen, regardless of age, has the right to acquire an identity card (Slovene: osebna izkaznica), and every citizen of the Republic of Slovenia of 18 years of age or older is obliged by law to acquire one and carry it at all times (or any other identity document with a picture, e.g., the Slovene passport or a driving licence). The card is a valid identity document in all member states of the European Union for travel within the EU. With the exception of the Faroe Islands and Greenland, it may also be used to travel outside the EU, to Norway, Liechtenstein, Bosnia and Herzegovina, North Macedonia, Montenegro, Serbia and Switzerland. The front side displays the name and surname, sex, nationality, date of birth and expiration date of the card, as well as the number of the ID card, a black-and-white photograph and a signature. The back contains the permanent address, administrative unit, date of issue, the EMŠO, and a code with key information in a machine-readable zone. Depending on the holder's age, the card is valid for 5 years, 10 years or permanently; it is valid for 1 year for foreigners living in Slovenia, in cases of repeated loss, and in some other circumstances.[113] Since 28 March 2022, it has been possible, but not mandatory, to acquire a biometric ID card.[114] The identity document in healthcare institutions is the healthcare insurance card.
Since April 2023, a biometric ID card may be used in its place.[115]

In Spain, citizens, resident foreigners and companies have similar but distinct identity numbers, some with prefix letters, all with a check code.[116] Despite the NIF/CIF/NIE distinctions, the identity number is unique: it always has eight digits (the NIE has 7 digits) followed by a letter calculated from a modulo-23 arithmetic check used to verify the correctness of the number. The letters I, Ñ, O and U are not used, and the sequence of check letters is TRWAGMYFPDXBNJZSQVHLCKE.
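Since the scheme is a plain modulo-23 lookup, the control letter can be reproduced in a couple of lines. The following Python sketch is a minimal illustration of the check described above; the function name and the sample number are invented for the example.

    # Control letter for a Spanish identity number: the number is taken
    # modulo 23 and the remainder indexes the 23-letter sequence
    # (I, Ñ, O and U are never used).
    LETTERS = "TRWAGMYFPDXBNJZSQVHLCKE"

    def check_letter(number: int) -> str:
        """Return the control letter for an identity number."""
        return LETTERS[number % 23]

    print(check_letter(12345678))  # -> "Z", since 12345678 % 23 == 14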
This number is the same for tax, social security and all legal purposes. Without this number (or a foreign equivalent such as a passport number), a contract may not be enforceable. In Spain, the formal identity number on an ID card is the most important piece of identification. It is used in all public and private transactions: it is required to open a bank account, to sign a contract, to have state insurance and to register at a university, and it should be shown when being fined by a police officer.[120] It is one of the official documents required to vote at any election, although any other form of official ID, such as a driving licence or passport, may be used. The card also constitutes a valid travel document within the European Union.[121] Non-resident citizens of countries such as the United Kingdom, where passport numbers are not fixed for the holder's life but change with renewal, may experience difficulty with legal transactions after the document is renewed, since the old number is no longer verifiable on a valid (foreign) passport. However, a NIE is issued for life, does not change, and can be used for the same purposes.

Sweden does not have a legal statute mandating compulsory identity documents. However, ID cards are regularly used to ascertain a person's identity when completing certain transactions, including but not limited to banking and age verification. Interactions with public authorities also often require one: although no law explicitly demands it, there are laws requiring the authorities to somehow verify people's identity. Without Swedish identity documents, difficulties can occur in accessing health care services, receiving prescription medications and getting salaries or grants. Since 2008, EU passports have been accepted for these services due to EU legislation (with exceptions including banking), but non-EU passports are not. Identity cards have therefore become an important part of everyday life.

There are currently three public authorities that issue ID cards: the Swedish Tax Agency, the Swedish Police Authority and the Swedish Transport Agency. The Tax Agency's cards can only be used within Sweden to validate a person's identity, but they can be obtained both by Swedish citizens and by those who currently reside in Sweden; a Swedish personal identity number is required. It is possible to get one without already holding any Swedish ID card: in this case, a person holding such a card must vouch for the applicant's identity, and that person must be a verifiable relative, the boss at the company where the applicant has been working, or one of a few other verifiable people. The Police can only issue identity documents to Swedish citizens; they issue an internationally recognised identity card to the EU standard, usable for intra-European travel, as well as Swedish passports, which are acceptable as identity documents worldwide.[122] The Transport Agency issues driving licences, which are valid as identity documents in Sweden; to obtain one, the applicant must be approved as a driver and must strictly hold another Swedish identity document as proof of identity.

In the past, certain groups experienced problems obtaining valid identification documents, owing to the initial process required to validate one's identity and to the unregulated security requirements of the commercial companies which then issued them. Since July 2009, the Tax Agency has issued ID cards, which has simplified the identity validation process for foreign passport holders. There are still identity-validation requirements that can cause trouble, especially for foreign citizens, but the list of people who can vouch for one's identity has been extended.

Swiss citizens have no obligation of identification in Switzerland and thus are not required by law to be able to show a valid identity document upon request by a police officer or similar official. However, identity documents are required when opening a bank account or when dealing with public administration. Relevant in the daily life of Swiss citizens are the Swiss ID card[123] and the Swiss driver's license,[124] the latter of which needs to be presented upon request by a police officer when driving a motor vehicle such as a car, a motorcycle, a bus or a truck. Swiss citizens are entitled to use a valid national identity card to exercise their right of free movement in EFTA[58] and the EU.[59] A Swiss passport[125] is needed only for, e.g., travel abroad to countries that do not accept the Swiss ID card as a travel document.

There is no national identity card in the Principality of Andorra; passports and driving licenses are most commonly used for identification.[14] When visiting France or Spain, a passport is needed in the absence of a national identity card, although driving licenses are often used and accepted unofficially.

From January 12, 2009, the Government of Albania has been issuing a compulsory electronic and biometric ID card (Letërnjoftim) to its citizens.[126] Every citizen must apply for a biometric ID card at age 16.

Azerbaijan issues a compulsory ID card (Şəxsiyyət vəsiqəsi) to its citizens. Every citizen must apply for an ID card at age 16.

Belarus has combined the international passport and the internal passport into one document, which is compulsory from age 14. It follows the international passport convention but has extra pages for domestic use.

Bosnia and Herzegovina allows every person over the age of 15 to apply for an ID card, and all citizens over the age of 18 must have the national ID card with them at all times. A penalty is issued if a citizen does not have the acquired ID card on them or refuses to show proof of identification.

The Kosovo Identity Card is an ID card issued to citizens of Kosovo for the purpose of establishing their identity, as well as serving as proof of residency and of the right to work and to receive public benefits. It can be used instead of a passport for travel to some neighboring countries.

In Moldova, identity cards (Romanian: carte de identitate) have been issued since 1996; the first person to get one was the former president of Moldova, Mircea Snegur. Since then, all Moldovan citizens have been required to have and use one inside the country. It cannot be used to travel outside the country; however, it is possible to pass the so-called Transnistrian border with it. The Moldovan identity card may be obtained by a child from his or her date of birth. The state Public Services Agency is responsible for issuing identity cards and for storing the data of all Moldovan citizens.
Monégasque identity cards are issued to Monégasque citizens and can be used for travel within the Schengen Area.

In Montenegro, every resident citizen over the age of 14 can have a Lična karta issued, and all persons over the age of 18 must have ID cards and carry them at all times when they are in public places. It can be used for international travel to Bosnia and Herzegovina, Serbia, North Macedonia, Kosovo and Albania instead of the passport.

The identity card of North Macedonia (Macedonian: Лична карта, Lična karta) is a compulsory identity document issued in North Macedonia. The document is issued by the police on behalf of the Ministry of Interior. Every citizen over 18 must be issued this identity card.

In Russia, the role of identity documentation is primarily played by the so-called Russian internal passport, a passport-size booklet which contains a person's photograph, birth information and other data, such as registration at the place of residence (informally known as propiska), marital data, information about military service, and underage children. Internal passports are issued by the Main Directorate for Migration Affairs to all citizens who reach their 14th birthday and do not reside outside Russia. They are re-issued at the ages of 20 and 45. The internal passport is commonly considered the only acceptable ID document in governmental offices and banks, when traveling by train or plane, when getting a subscription service, etc. If a person does not have an internal passport (i.e., foreign nationals or Russian citizens who live abroad), an international passport can theoretically be accepted instead in all cases. Another exception is army conscripts, who produce the Identity Card of the Russian Armed Forces. Internal passports can also be used to travel to Belarus, Kazakhstan, Tajikistan, Kyrgyzstan, Abkhazia and South Ossetia.[citation needed] Other documents, such as driver's licenses or student cards, can sometimes be accepted as ID, subject to regulations.

The national identity card is compulsory for all Sanmarinese citizens.[127] It has been biometric and valid for international travel since 2016.

In Serbia, every resident citizen over the age of 10 can have a Lična karta issued, and all persons over the age of 16 must have ID cards and carry them at all times when they are in public places.[128] It can be used for international travel to Bosnia and Herzegovina, Montenegro and North Macedonia instead of the passport.[129] A contact microchip on the ID is optional.

Kosovo issues its own identity cards. These documents are accepted by Serbia when used as identification while crossing the Serbia-Kosovo border.[130] They can also be used for international travel to Montenegro[131] and Albania.[132]

The Turkish national ID card (Turkish: Nüfus Cüzdanı) is compulsory for all Turkish citizens from birth. Cards for males and females have different colors. The front shows the first and last name of the holder, the first names of the holder's legal ascendants, the birth date and place, and an 11-digit ID number. The back shows marital status, religious affiliation, the region of the county of origin, and the date of issue of the card. On February 2, 2010, the European Court of Human Rights ruled in a 6-to-1 vote that the religious affiliation section of the Turkish identity card violated articles 6, 9 and 12 of the European Convention on Human Rights, to which Turkey is a signatory. The ruling should compel the Turkish government to completely omit religious affiliation on future identity cards.
The Turkish police are allowed to ask any person to show ID, and refusing to comply may lead to arrest. The card can be used for international travel to Northern Cyprus, Georgia and Ukraine instead of a passport. The Ministry of Interior of Turkey released EU-like identity cards for all Turkish citizens in 2017. The new identity cards are fully biometric and can be used as a bank card or bus ticket, or on international trips.

The Ukrainian identity card or Passport of the Citizen of Ukraine (also known as the Internal passport or Passport Card) is an identity document issued to citizens of Ukraine. Every Ukrainian citizen aged 14[133] or above who permanently resides in Ukraine must possess an identity card issued by the local authorities of the State Migration Service of Ukraine. Ukrainian identity cards are valid for 10 years (or 4 years, if issued to citizens aged 14 but less than 18) and must afterwards be exchanged for a new document.

As of July 2021, the UK has no national identity card and no general obligation of identification, although drivers may be required to produce their licence and insurance documents at a police station within 7 days of a traffic stop if they are not able to provide them at the time. The UK had an identity card during World War II as part of a package of emergency powers; this was abolished in 1952 by the repeal of the National Registration Act 1939.

Identity cards were next proposed in the mid-1980s for people attending football matches, following a series of high-profile hooliganism incidents involving English football fans. However, this proposed identity card scheme never went ahead, as Lord Taylor of Gosforth ruled it out as "unworkable" in the Taylor Report of 1990.

The Identity Cards Act 2006 implemented a national ID scheme backed by a National Identity Register, an ambitious database linking a variety of data including police, health, immigration, electoral-roll and other records. Several groups such as No2ID formed to campaign against ID cards in Britain and, more importantly, against the NIR database, which was seen as a "panopticon" and a significant threat to civil liberties. The scheme suffered setbacks after the loss of United Kingdom child benefit data in 2007 and other high-profile data losses turned public opinion against the government storing large, linked personal datasets. Various partial rollouts were attempted, such as compulsory identity cards for non-EU residents in Britain (starting in late 2008), with voluntary registration for British nationals introduced in 2009 and mandatory registration proposed for certain high-security professions such as airport workers. However, the mandatory registrations met with resistance from unions such as the British Airline Pilots' Association.[134]

After the 2010 general election a new coalition government was formed; both parties had pledged to scrap ID cards in their election manifestos. The 2006 act was repealed by the Identity Documents Act 2010, which also required that the nascent NIR database be destroyed. The Home Office announced that the National Identity Register had been destroyed on February 10, 2011.[135] Prior to the 2006 Act, work had started on updating British passports with RFID chips to support the use of ePassport gates. This continued, with traditional passports being replaced by RFID versions on renewal. Driving licences, particularly the photocard driving licence introduced in 1998, and passports are now the most widely used ID documents in the United Kingdom, but the former cannot be used as travel documents, except within the Common Travel Area.
However, driving licences from the UK and EU countries are usually accepted within EU and EFTA countries for identity verification. Most people do not carry their passports in public without knowing in advance that they are going to need them, as passports do not fit in a typical wallet and are relatively expensive to replace. Consequently, driving licences are the most common and convenient form of ID in use, along with PASS-accredited cards, used mainly for proof-of-age purposes. Unlike a travel document, they do not show the holder's nationality or immigration status. For proof-of-age purchases, a provisional driving licence is often used by those who do not hold a full driving licence, as they are easy to obtain. Generally, in day-to-day life, most authorities, such as police or security guards, do not ask for identification from individuals in a sudden, spot-check manner, although this may become a concern in instances of stop and search.[clarification needed]

Gibraltar has operated an identity card system since 1943. The cards issued were originally folded cardboard, similar to the wartime UK identity cards abolished in 1950, with different colours for British and non-British residents. Gibraltar requires all residents to hold identity cards, which are issued free. In 1993, the cardboard ID card was replaced with a laminated version. However, although valid as a travel document to the UK, they were not accepted by Spain. A new version in an EU-compliant format was issued and is valid for use throughout the EU, although, as very few are seen, there are sometimes problems when it is used, even in the UK. ID cards are needed for some financial transactions, but apart from that and for crossing the frontier with Spain, they are not in common use.[citation needed]

The Belizean document is called the "Identification Card R.R.". It is optional, although compulsory for voting and other government transactions. It is also available to any Commonwealth citizen who has lived in Belize for a year without leaving and has spent at least 2 months in the area where the person is registered.[136][137]

In Canada, different forms of identification documentation are used, but there is no de jure national identity card. The Canadian passport is issued by the federal (national) government, and the provinces and territories issue various documents which can be used for identification purposes. The most commonly used forms of identification within Canada are the health card and driver's licence issued by provincial and territorial governments. The widespread use of these two documents for identification purposes has made them de facto identity cards. In Canada, a driver's licence usually lists the name, home address, height and date of birth of the bearer. A photograph of the bearer is usually present, as well as additional information, such as restrictions on the bearer's driving licence. The bearer is required by law to keep the address up to date.[citation needed] A few provinces, such as Québec and Ontario, issue provincial health care cards which contain identification information, such as a photo of the bearer, their home address and their date of birth. British Columbia, Saskatchewan and Ontario are among the provinces that produce photo identification cards for individuals who do not possess a driving licence, with the cards containing the bearer's photo, home address and date of birth.[138][139][140]

For travel abroad, a passport is almost always required.
There are a few minor exceptions to this rule: required documentation for travel among North American countries is subject to the Western Hemisphere Travel Initiative, with alternatives such as the NEXUS programme and the Enhanced Drivers License programme implemented by a few provincial governments as a pilot project. These programmes have not yet gained widespread acceptance, and the Canadian passport remains the most useful and widely accepted international travel document.

Optional and not fully launched; enabling legislation was enacted in 2022.[141][142][143]

Every Costa Rican citizen must carry an identity card immediately after turning 18. The card is named Cédula de Identidad and is issued by the local registrar's office (Registro Civil), an office belonging to the local elections committee (Tribunal Supremo de Elecciones), which in Costa Rica has the same rank as the Supreme Court. Each card has a unique number composed of nine numerical digits, the first of them being the province where the citizen was born (with other significance in special cases, such as citizenship granted to foreigners, adopted persons or, in rare cases, older people for whom no birth certificate was processed at birth). After this digit, two blocks of four digits follow; the combination corresponds to the unique identifier of the citizen. The card is widely requested for almost every legal and financial purpose: it is often requested at payment with credit or debit cards as an identification guarantee, and is requested when buying alcoholic beverages or cigarettes or upon entrance to adults-only places like bars. The card must be renewed every ten years and is freely issued again if lost.

The information included comprises, on the front, two identification pictures and the digitized signature of the owner, the identification number (known colloquially just as the cédula), the first name, the first and second last names, and an optional "known as" field. On the back, there are again the identification number, the birth date, the place where the citizen casts their vote in national elections or referendums, the birthplace, the gender, the date when the card must be renewed, and a matrix code that includes all this information and even a digitized fingerprint of the thumb and index finger. The matrix code is not currently being used or inspected by any kind of scanner.

Besides this identification card, every vehicle driver must carry a driving licence, an additional card that uses the same identification number as the ID card (Cédula de Identidad) for the driving licence number. A passport is also issued with the same identification number used on the ID card. The same is true of the Social Security number, which is the same number used for the ID card. All non-Costa Rican citizens with resident status must carry an ID card (Cédula de Residencia); otherwise, they must carry a passport and a valid visa. Each resident's ID card has a unique number composed of 12 digits, the first three of them indicating the holder's nationality and the rest a sequence used by the immigration authority (called Dirección General de Migración y Extranjería). As with Costa Rican citizens, their Social Security number and their driver's licence (if they have one) use the same number as their resident's ID card.

The Dominican "Cédula de Identidad y Electoral" (Identity and Voting Document) is a national ID that is also used for voting in both presidential and congressional ballots.
Each "Cédula de Identidad y Electoral" has its unique serial number composed by the serial of the municipality of current residence, a sequential number plus a verification digit. This National ID card is issued to all legal residents ofadultage. It is usually required to validate job applications, legally binding contracts, official documents, buying/sellingreal estate, opening a personal bank account, obtaining aDriver's Licenseand the like. It is issued free of charge[144]by the "Junta Central Electoral" (Central Voting Committee) to allDominicansnot living abroad at the time of reachingadulthood(16 years of age) or younger is they arelegally emancipated. Foreigners who have taken permanent residence and have not yet applied for Dominicannaturalization(i.e., have not opted for Dominican citizenship but have taken permanent residence) are required to pay an issuing tariff and must bring along their non-expired Country of Origin passport and deposit photocopies of their Residential Card and Dominican Red Cross Blood Type card. Foreigners residing on a permanent basis must renew their "Foreign ID" on a 2-, 4-, or 10-year renewal basis (about US$63–US$240, depending on desired renewal period).[145] In El Salvador, ID Card is called Documento Único de Identidad (DUI) (Unique Identity Document). Every citizen above 18 years must carry this ID for identification purposes at any time. It is not based on a smartcard but on a standard plastic card with two-dimensional bar-coded information with picture and signature. In January 2009, the National Registry of Persons (RENAP) inGuatemalabegan offering a new identity document in place of theCédula de Vecindad(neighborhood identity document) to all Guatemala citizens and foreigners. The new document is called "Documento Personal de Identification" (DPI) (Personal Identity Document). It is based on a smartcard with a chip and includes an electronic signature and several measures against fraud.[146] Optional, although compulsory for voting and other government transactions.[147]Since 2022 a brand new biometric National ID Card has been unveiled, free of charge for Jamaican citizens.[148][149] Not mandatory, but needed in almost all official documents, theCURPis the standardized version of an identity document. It actually could be a printed green wallet-sized card (without a photo) or simply an 18-character identification key printed on a birth or death certificate.[150] While Mexico has a national identity card (cédula de identitad personal), it is only issued to children aged 4–17.[151] Unlike most other countries, Mexico has assigned a CURP to nearly all minors, since both the government and most private schools ask parent(s) to supply their children's CURP to keep a data base of all the children. Also, minors must produce their CURP when applying for a passport or being registered at Public Health services by their parent(s). Most adults need the CURP code too, since it is required for almost all governmental paperwork like tax filings and passport applications. Most companies ask for a prospective employee's CURP, voting card, or passport rather than birth certificates.[citation needed] To have a CURP issued for a person, a birth certificate or similar proof must be presented to the issuing authorities to prove that the information supplied on the application is true. Foreigners applying for a CURP must produce a certificate of legal residence in Mexico. Foreign-born naturalized Mexican citizens must present their naturalization certificate. 
On August 21, 2008, the Mexican cabinet passed the National Security Act, which compels all Mexican citizens to have a biometric identity card, called the Citizen Identity Card (Cédula de Identidad Ciudadana), before 2011.[citation needed] On February 13, 2009, the Mexican government designated the state of Tamaulipas to start procedures for issuing a pilot program of the national Mexican ID card.[citation needed] Although the CURP is the de jure official identification document in Mexico, the Instituto Nacional Electoral's voting card is the de facto official identification and proof of legal age for citizens aged 18 and older. On July 28, 2009, Mexican President Felipe Calderón, facing the Mexican House of Representatives, announced the launch of the Mexican national identity card project, which would see the first card issued before the end of 2009.

In Panama, the cédula de identidad personal is required at age 12 (cédula juvenil) and at age 18. Panamanian citizens must carry their cédula at all times. New biometric national identity cards were rolled out in 2019. The card must be renewed every 10 years (every 5 years for those under 18), and it can only be replaced 3 times (with each replacement costing more than the previous one) without requiring a background check, to confirm and verify that the card holder is not selling his or her identity to third parties for human trafficking or other criminal activities.

All cards have QR, PDF417 and Code 128 barcodes. The QR code holds all the text information about the card holder printed on the front of the card, while the PDF417 barcode holds, in JPEG format encoded with Base64, an image of the fingerprint of the left index finger of the card holder. Panamanian biometric/electronic/machine-readable ID cards are similar to biometric passports and to current European/Czech national ID cards, having only a small PDF417 barcode, a machine-readable zone, a contactless smart-card RFID chip, and golden contact pads similar to those found in smart-card credit cards and SIM cards. The machine-readable zone contains all the printed text information about the card holder (it replaces the QR code), while both chips (the smart-card chip is hidden under the golden contact pads) contain all personal information about the card holder, along with a JPEG photo of the card holder, a JPEG image of the card holder's signature, and a further JPEG image with all 10 fingerprints of both hands of the card holder. Earlier cards used Code 16K and Code 49 barcodes with magnetic stripes.[152][153]
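The fingerprint payload described above is ordinary Base64-wrapped JPEG data, so unpacking it needs nothing card-specific once a scanner has read the PDF417 barcode. The following Python sketch shows that last step; the function name is invented, and the scanner call in the trailing comment is a placeholder for whatever PDF417 reader is actually used.

    # Unpack the Base64-encoded JPEG fingerprint image that the PDF417
    # barcode is described as carrying. Format check only: a JPEG file
    # always starts with the two-byte marker FF D8.
    import base64

    def fingerprint_jpeg(pdf417_payload: str) -> bytes:
        """Decode a Base64 payload and confirm it is JPEG data."""
        image = base64.b64decode(pdf417_payload, validate=True)
        if not image.startswith(b"\xff\xd8"):
            raise ValueError("payload did not decode to a JPEG image")
        return image

    # Hypothetical usage, assuming some PDF417 scanner output `raw`:
    # with open("left_index_finger.jpg", "wb") as f:
    #     f.write(fingerprint_jpeg(raw))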
There is no compulsory federal-level ID card issued to all U.S. citizens. U.S. citizens and nationals may obtain passports or U.S. passport cards if they choose to, but this is optional, and other alternatives are more popular. For most people, driver's licenses issued by the respective state and territorial governments have become the de facto identity cards, and are used for many identification purposes, such as purchasing alcohol and tobacco, opening bank accounts, and boarding planes, along with confirming a voter's identity in states with voter photo identification laws. Individuals who do not drive can obtain an identification card with the same functions from the state agency that issues driver's licenses. In addition, many schools issue student and teacher ID cards.[154]

The United States passed the REAL ID Act on May 11, 2005. The bill required states to redesign their driver's licenses to comply with federal security standards by December 2009. Federal agencies would then reject licenses or identity cards that do not comply, which would force Americans accessing everything from airplanes to courthouses to have federally mandated cards. At airports, those without compliant licenses or cards would be redirected to a secondary screening location.[155] As of 2024, every state has implemented some ID that satisfies the standard.[156]

In 2006, the U.S. State Department studied the idea of issuing passports with radio-frequency identification (RFID) chips embedded in them. The United States passport verifies both personal identity and citizenship, but it is not mandatory for citizens to possess one within the country, and it is issued by the U.S. State Department on a discretionary basis.

Since February 1, 2008, U.S. citizens may apply for passport cards in addition to the usual passport books. Although their main purpose is for land and sea travel within North America, the passport card may also be accepted by federal authorities (such as for domestic air travel or entering federal buildings). The Transportation Security Administration (TSA) accepts the passport card as an identity document at airport security checkpoints.[157]

U.S. Citizenship and Immigration Services allows the U.S. passport card to be used in the Employment Eligibility Verification Form I-9 process.[158] The passport card is considered a "List A" document that may be presented by newly hired employees during the employment eligibility verification process to show work-authorized status. "List A" documents are those used by employees to prove both identity and work authorization when completing the Form I-9.

The basic document needed to establish a person's identity and citizenship in order to obtain a passport is a birth certificate, issued either by the U.S. state of birth or by the U.S. Department of State for overseas births to U.S. citizens. A child born in the U.S. is in nearly all cases (except for children of foreign diplomats) automatically a U.S. citizen. The parents of a child born overseas to U.S. citizens can report the birth to the U.S. embassy or consulate to obtain a Consular Report of Birth Abroad.[159]

Social Security numbers (SSNs) and cards are issued by the U.S. Social Security Administration for tracking Social Security taxes and benefits. They have become the de facto national identification number for federal and state taxation, private financial services, and identification with various companies. SSNs do not establish citizenship, because they can also be issued to permanent residents. They typically can only be part of establishing a person's identity; a photo ID that verifies date of birth is also usually requested.

A mix of documents can be presented to, for instance, verify one's legal eligibility to take a job within the United States. Identity and citizenship are both established by presenting a passport alone, but this must be accompanied by a Social Security card for taxation ID purposes. A driver's license or state ID establishes identity alone, but does not establish citizenship, as these can be provided to non-citizens as well. In this case, an applicant without a passport may sign an affidavit of citizenship or be required to present a birth certificate, and must also submit their Social Security number. "Residency" within a certain U.S. jurisdiction, such as a voting precinct, can be proven if the driver's license or state ID has a home address printed on it corresponding to that jurisdiction.
Utility bills or other pieces of official printed mail can also suffice for this purpose. In the case of voter registration, citizenship must also be proven with a passport, birth certificate, or signed citizenship affidavit. The Selective Service System has in the past, in times of a military draft, issued an identification card for men who were eligible for the draft. Australia does not have a national identity card. Instead, various identity documents are used or required to prove a person's identity, whether for government or commercial purposes. Currently, driver licences and photo cards, both issued by the states and territories, are the most widely used personal identification documents in Australia. Additionally, the Australia Post Keypass identity card, issued by Australia Post, can be used by people who do not have an Australian driver licence or an Australian state- or territory-issued photo identity card. Photo cards are also called "Proof of Age Cards" or similar and can be issued as another form of identity. Identification indicating age is commonly required to purchase alcohol and tobacco and to enter nightclubs and gambling venues. Other important identity documents include a passport, an official birth certificate, an official marriage certificate, cards issued by government agencies (typically social security cards), some cards issued by commercial organisations (e.g., a debit or credit card), and utility accounts. Often, some combination of identity documents is required, such as an identity document linking a name, photograph and signature (typically photo ID in the form of a driver licence or passport), evidence of operating in the community, and evidence of a current residential address. New alcohol laws in the state of Queensland require some Brisbane-based pubs and bars to scan ID documents against a database of people who should be denied alcohol, for which foreign passports and driver's licences are not valid.[160] An "Identification Card" appears to exist among citizens of the Marshall Islands, but little information about these documents is available.[citation needed] National identity cards, called "FSM Voters National Identity cards", are issued on an optional basis, free of charge. The identity cards were introduced in 2005.[161] New Zealand does not have an official ID card. The most commonly carried form of identification is a driver licence issued by the Transport Agency. Other special-purpose identification documents are issued by different government departments, for example a Firearms Licence issued to gun owners by the Police and the SuperGold card issued to elderly people by the Ministry of Social Development. For purchasing alcohol or tobacco, the only legal forms of identification are a New Zealand or foreign passport, a New Zealand driver licence, and a Kiwi Access Card (formerly known as the 18+ card)[162] from the Hospitality Association of New Zealand.[163] Overseas driver licences are not legal for this purpose. For opening a bank account, each bank has its own list of documents that it will accept. Generally speaking, banks accept a foreign or NZ passport, a NZ Firearms Licence, or a foreign ID card by itself.
If the customer does not have these documents, they will need to produce two different documents on the approved list (for example, a driver licence and a marriage certificate).[164] Republic of Palau Identification Cards are primarily issued to foreign nationals who are not eligible to acquire a Palau passport or driver's license, under the Digital Residency Act. Foreign nationals are required to undergo a sanctions check. E-national ID cards were rolled out in 2015.[165] "National Voter's Identity cards" are optional and issued upon request.[166][167] Tonga's national ID card was first issued in 2010, and it is optional, alongside driver's licenses and passports; one of these documents is, however, mandatory for voting. Applicants must be 14 years of age or older to apply for a national ID card.[168] National identity cards have been issued since October 2017, with biometric cards planned for rollout in late 2018.[169][170] The Documento Nacional de Identidad (DNI; "National Identity Document") is the main identity document for Argentine residents. It is issued as a card (tarjeta DNI) at birth to all people born in the country (and hence citizens), and to foreigners who register as residents with the National Directorate of Migrations. It must be updated at 8 and 14 years of age, and thereafter every 15 years. The documents are produced at a special plant in Buenos Aires by the Argentine national registry of people (ReNaPer).[171] The DNI is the sole instrument for personal identification and is mandatory. Its format and use are regulated by Law No. 17671 on Identification, Registration, and Classification of the National Human Potential, enacted in 1968, which replaced the enrolment document issued to men undergoing mandatory military service and the libreta cívica given to women upon turning 18. According to this law, the DNI cannot be substituted by any other document for legal purposes. It is required for voting, which is mandatory, and for identification before judicial authorities. The Argentine DNI is also required for conducting procedures with state authorities, and entitles adult bearers to work within the country. From November 4, 2009, as part of a modernization and digitization process of national documents, a new type of DNI with both a booklet and a card was issued; either could be used for most purposes, but the booklet had to be used for voting. The DNI booklet had a light blue cover with laser printing for the citizen's unique number and silver print for the rest of the presentation. Internally, it had a design identical to the card format but included spaces for marital status, address changes, organ donations, and the stamping of the DNI after voting in national elections. The card was entirely laminated and contained all the individual's data, including a photograph and fingerprint impression. From 2011, the DNI underwent further changes: the booklet was dropped in favour of a redesigned, higher-quality plastic card to be used for all purposes. The booklet was no longer required for voting by holders of the still-valid 2009 version. Since April 1, 2017, the DNI card has been the only valid identification document. Taxpayers also have a Unique Tax Identification Code (CUIT). In December 2023, the National Registry of Persons of Argentina (ReNaPer), a subsidiary of the Ministry of the Interior, introduced the biometric DNI.
It adheres to international security standards, with an embedded electronic chip and a QR code for electronic validation, identity verification, digital functions, and advanced security measures. Manufactured using laser technology on polycarbonate, a durable material, the new document has up-to-date physical security features to enhance visual verification and prevent counterfeiting.[172] In Brazil, at the age of 18, all Brazilian citizens are supposed to be issued a cédula de identidade (ID card), usually known by its number, the Registro Geral (RG), Portuguese for "General Registry". The cards are needed to obtain a job, to vote, and to use credit cards. Foreigners living in Brazil have a different kind of ID card. Since the RG is not unique, being issued on a state basis, in many places the CPF (the Brazilian revenue agency's identification number) is used as a replacement. The current Brazilian driver's license contains both the RG and the CPF, and as such can be used as an identification card as well. There are plans under way to replace the current RG system with a new Documento Nacional de Identificação (National Identification Document), which will be electronic (accessible by a mobile application) and national in scope, and to change the current ID card to a new smartcard.[173][174] Upon turning 18, every resident in Colombia must obtain an identity document (Spanish: Cédula de Ciudadanía or Documento de Identidad), which is the only document that proves the identity of a person for legal purposes. ID cards must be carried at all times and must be presented to the police upon request. An individual who fails to present the ID card upon request by the police or the military will most likely be detained at a police station, even if he or she is not suspected of any wrongdoing. ID cards are needed to obtain employment, open bank accounts, obtain a passport, driver's license, or military card, to enroll in educational institutions, to vote, and to enter public buildings including airports and courthouses; failure to produce ID is a misdemeanor punishable with a fine. The cost of a duplicate ID must be borne by the citizen. Every resident over the age of 14 is issued an identity card called a Tarjeta de Identidad. Every resident of Chile over the age of 18 must have and carry at all times their ID card, called the Cédula de Identidad, issued by the Civil Registry and Identification Service of Chile. The identity card is the official document that proves the identity of a Chilean person. Among the data it contains are the full name, Unique National Role number (RUN), and sex, in addition to the photo, signature, and fingerprint. Anyone who wants their profession to appear on their identity card must be registered in the professional registry. This is the only official form of identification for residents in Chile and is widely used and accepted as such. It is necessary for every contract, most bank transactions, voting, driving (along with the driver's licence), and other public and private situations. Biometrics collection is mandatory.[175] In Peru, it is mandatory for all citizens over the age of 18, whether born inside or outside the territory of the Republic, to obtain a National Identity Document (Documento Nacional de Identidad, DNI). The DNI is a public, personal, and untransferable document, and the only means of identification permitted for participating in any civil, legal, commercial, administrative, or judicial acts. It is also required for voting and must be presented to authorities upon request.
The DNI can be used as a passport to travel to all South American countries that are members of UNASUR. It is issued by the National Registry of Identification and Civil Status (RENIEC). For Peruvians abroad, service is provided through the consulates of Peru, in accordance with Articles 26, 31 and 8 of Law No. 26,497. The document is card-sized as defined by ISO format ID-1 (prior to 2005 the DNI was ISO size ID-2; renewal of the card due to the size change was not mandatory, nor did previously issued cards lose validity). The front of the card presents a photograph of the holder's face, their name, date and place of birth (the latter in coded form), gender, and marital status; the bottom quarter consists of machine-readable text. Three dates are listed as well: the date the citizen was first registered at RENIEC, the date the document was issued, and the expiration date of the document. The back of the DNI features the holder's address (including district, department and/or province) and voting group. Eight voting record blocks are successively covered with metallic labels when the citizen presents themselves at their voting group on voting days. The back also denotes whether the holder is an organ donor, and presents the holder's right index fingerprint, a PDF417 bar code, and a 1D bar code. In Uruguay, the identity card (documento de identidad) is issued by the Ministry of the Interior and the National Civil Identification Bureau (Dirección Nacional de Identificación Civil, DNIC).[176] It is mandatory and essential for several activities at either governmental or private levels, and is required of all inhabitants of the Oriental Republic of Uruguay, whether they are native citizens, legal citizens, or resident aliens, even for children as young as 45 days old. It is a laminated card 9 cm (3.5 in) wide and approximately 5 cm (2.0 in) high, dominated by the color blue, showing the flag in the background with the photo of the owner, the number assigned by the DNIC (including a self-generated check digit), the full name, and the corresponding signature, along with biometrics. The card is bilingual in Spanish and Portuguese.[177] Identity cards are required for most formal transactions, from credit card purchases to any identity validation, proof of age, and so on. The identity card is not to be confused with the Credencial Cívica, which is used exclusively for voting.[178] Identity cards in Venezuela consist of a plastic-laminated paper which contains the national ID number (Cédula de Identidad) as well as a color photo and the last names, given names, date of birth, right thumb print, signature, and marital status (single, married, divorced, widowed) of the bearer. It also contains the document's issue and expiration dates. Two different prefixes can be found before the ID number: "V" for Venezuelans and "E" for foreigners (extranjeros in Spanish). This distinction is also shown at the very bottom of the document by a bold all-caps typeface displaying either the word VENEZOLANO or EXTRANJERO, respectively. Despite Venezuela being the second country in the Americas (after the United States) to adopt a biometric passport, the current Venezuelan ID document is remarkably low-security, even by regional standards. It can hardly be called a card. The paper inside the laminated cover contains only two security measures: first, it is a special type of government-issued paper, and second, it has microfilaments in the paper that glow under UV light.
The laminated cover itself is very simplistic and quite large for the paper it covers, and the photo, although standard-sized (3 × 3.5 cm), is rather blurred. Government officials in charge of issuing the document openly recommend that individuals cut off the excess plastic and re-laminate the document to protect it from bending. The requirements for obtaining a Venezuelan identity document are quite relaxed, and Venezuela lacks high-security birth certificates and other documents that give claim to citizenship.
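The Base64-over-PDF417 encoding described above for the older Panamanian cards can be illustrated with a short sketch. This is a hypothetical illustration only: it assumes a barcode reader has already extracted the PDF417 payload as a string, and the function and file names are invented for the example.

```python
# Sketch: recovering the fingerprint JPEG from a decoded PDF417 payload,
# assuming the payload is a Base64 string wrapping a JPEG image as described
# above. Reading the barcode itself is out of scope here.
import base64

def fingerprint_jpeg_from_pdf417(payload: str) -> bytes:
    """Decode a Base64 PDF417 payload into raw JPEG bytes."""
    image = base64.b64decode(payload)
    if not image.startswith(b"\xff\xd8\xff"):  # every JPEG begins with this marker
        raise ValueError("payload did not decode to a JPEG image")
    return image

# Hypothetical usage, once a scanner has produced the payload string:
# with open("fingerprint.jpg", "wb") as f:
#     f.write(fingerprint_jpeg_from_pdf417(scanned_payload))
```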
https://en.wikipedia.org/wiki/Identity_document
A SPARQL Query Results XML document (also sometimes called a SPARQL Results Document) is a file that stores the results of a SPARQL query (values, URIs, and text) in XML. A document in this format is generally the default response of an RDF database to a SPARQL query.
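As a concrete illustration, the following sketch embeds a minimal results document in the W3C SPARQL Query Results XML format and parses it with Python's standard library; the variable name and URI are invented for the example.

```python
# A minimal SPARQL Query Results XML document, parsed with the standard library.
import xml.etree.ElementTree as ET

doc = """<?xml version="1.0"?>
<sparql xmlns="http://www.w3.org/2005/sparql-results#">
  <head>
    <variable name="person"/>
  </head>
  <results>
    <result>
      <binding name="person"><uri>http://example.org/alice</uri></binding>
    </result>
  </results>
</sparql>"""

ns = {"sr": "http://www.w3.org/2005/sparql-results#"}
root = ET.fromstring(doc)
for binding in root.iterfind(".//sr:binding", ns):
    value = binding[0]  # child element is <uri>, <literal>, or <bnode>
    print(binding.get("name"), "=", value.text)
# prints: person = http://example.org/alice
```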
https://en.wikipedia.org/wiki/SPARQL_Query_Results_XML_Format
Thenatural element method (NEM)[1][2][3]is ameshless methodto solvepartial differential equation, where theelementsdo not have a predefined shape as in thefinite element method, but depend on the geometry.[4][5][6] AVoronoi diagrampartitioning the space is used to create each of these elements. Natural neighbor interpolation functionsare then used to model the unknown function within each element. When the simulation is dynamic, this method prevents the elements to be ill-formed, having the possibility to easily redefine them at each time step depending on the geometry.
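A minimal sketch of the geometric ingredients, using SciPy: the Voronoi diagram of the scattered nodes is the real NEM construction, while SciPy's linear scattered-data interpolation stands in for the Sibson (natural neighbor) shape functions, which SciPy does not provide out of the box.

```python
# Voronoi partition of scattered nodes plus a stand-in interpolant.
import numpy as np
from scipy.spatial import Voronoi
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
nodes = rng.random((30, 2))                      # scattered nodes, no predefined mesh
values = np.sin(3 * nodes[:, 0]) * nodes[:, 1]   # nodal values of the unknown field

vor = Voronoi(nodes)                             # first-order Voronoi diagram
print(len(vor.point_region), "nodes,", len(vor.regions), "Voronoi regions")

# Evaluate the field at a query point from neighboring nodal values
# (linear interpolation here; NEM proper would use natural neighbor weights).
query = np.array([[0.5, 0.5]])
print("value at (0.5, 0.5):", griddata(nodes, values, query, method="linear")[0])
```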
https://en.wikipedia.org/wiki/Natural_element_method
The dead Internet theory is a conspiracy theory asserting that, due to a coordinated and intentional effort, the Internet now consists mainly of bot activity and automatically generated content manipulated by algorithmic curation in order to control the population and minimize organic human activity.[1][2][3][4][5] Proponents of the theory believe these social bots were created intentionally to help manipulate algorithms and boost search results in order to manipulate consumers.[6][7] Some proponents accuse government agencies of using bots to manipulate public perception.[2][6] The date given for this "death" is generally around 2016 or 2017.[2][8][9] The theory has gained traction because many of the observed phenomena are quantifiable, such as increased bot traffic, but the literature on the subject does not support the full theory.[2][4][10] The theory's exact origin is difficult to pinpoint. In 2021, a post titled "Dead Internet Theory: Most Of The Internet Is Fake" was published on the esoteric board of the forum Agora Road's Macintosh Cafe by a user named "IlluminatiPirate",[11] claiming to build on previous posts from the same board and from Wizardchan,[2] and marking the term's spread beyond these initial imageboards.[2][12] The conspiracy theory has entered public culture through widespread coverage and has been discussed on various high-profile YouTube channels.[2] It gained more mainstream attention with an article in The Atlantic titled "Maybe You Missed It, but the Internet 'Died' Five Years Ago".[2] This article has been widely cited by other articles on the topic.[13][12] The dead Internet theory has two main components: that organic human activity on the web has been displaced by bots and algorithmically curated search results, and that state actors are doing this in a coordinated effort to manipulate the human population.[3][14][15] The first part of the theory, that bots create much of the content on the Internet and perhaps contribute more than organic human content, has been a concern for a while, with the original post by "IlluminatiPirate" citing the article "How Much of the Internet Is Fake? Turns Out, a Lot of It, Actually" in New York magazine.[2][16][14] The theory goes on to claim that Google and other search engines are censoring the web by filtering out undesirable content and limiting what is indexed and presented in search results.[3] While Google may suggest that there are millions of search results for a query, the results available to a user do not reflect that.[3] This problem is exacerbated by the phenomenon known as link rot, which occurs when content at a website becomes unavailable and all links to it on other sites break.[3] This has led to the suggestion that Google is a Potemkin village, and that the searchable web is much smaller than we are led to believe.[3] The dead Internet theory holds that this is part of a conspiracy to limit users to curated, and potentially artificial, content online. The second half of the theory builds on this observable phenomenon by proposing that the U.S. government, corporations, or other actors are intentionally limiting users to curated, and potentially artificial, AI-generated content in order to manipulate the human population for a variety of reasons.[2][14][15][3] In the original post, the idea that bots have displaced human content is described as the "setup", with the "thesis" of the theory itself focusing on the United States government being responsible for this, stating: "The U.S.
government is engaging in an artificial intelligence-powered gaslighting of the entire world population."[2][6] Caroline Busta, founder of the media platform New Models, was quoted in a 2021 article in The Atlantic calling much of the dead Internet theory a "paranoid fantasy", even if there are legitimate criticisms involving bot traffic and the integrity of the Internet, but she said she does agree with the "overarching idea".[2] In an article in The New Atlantis, Robert Mariani called the theory a mix between a genuine conspiracy theory and a creepypasta.[6] In 2024, the dead Internet theory was sometimes used to refer to the observable increase in content generated via large language models (LLMs) such as ChatGPT appearing in popular Internet spaces, without mention of the full theory.[1][17][18][19] A 2025 article by Thomas Sommerer explores this portion of the theory, calling the displacement of human-generated content by artificial content "an inevitable event".[18] Sommerer states that the dead Internet theory is not scientific in nature, but reflects the public perception of the Internet.[18] Another article, in the Journal of Cancer Education, discussed the impact of the perception of the dead Internet theory in online cancer support forums, specifically focusing on the psychological impact on patients who find that support is coming from an LLM and not a genuine human.[19] The article also discussed the possible problems in training data for LLMs that could emerge from using AI-generated content to train them.[19] Generative pre-trained transformers (GPTs) are a class of large language models (LLMs) that employ artificial neural networks to produce human-like content.[20][21] The first of these to be well known was developed by OpenAI.[22] These models have created significant controversy. For example, Timothy Shoup of the Copenhagen Institute for Futures Studies said in 2022, "in the scenario where GPT-3 'gets loose', the internet would be completely unrecognizable".[23] He predicted that in such a scenario, 99% to 99.9% of content online might be AI-generated by 2025 to 2030.[23] These predictions have been used as evidence for the dead Internet theory.[13] In 2024, Google reported that its search results were being inundated with websites that "feel like they were created for search engines instead of people".[24] In correspondence with Gizmodo, a Google spokesperson acknowledged the role of generative AI in the rapid proliferation of such content and that it could displace more valuable human-made alternatives.[25] Bots using LLMs are anticipated to increase the amount of spam, and run the risk of creating a situation where bots interacting with each other create "self-replicating prompts" that result in loops only human users could disrupt.[5] ChatGPT is an AI chatbot whose late 2022 release to the general public led journalists to call the dead Internet theory potentially more realistic than before.[8][26] Before ChatGPT's release, the dead Internet theory mostly emphasized government organizations, corporations, and tech-literate individuals.
ChatGPT gives the average Internet user access to large language models.[8][26] This technology caused concern that the Internet would become filled with content created through the use of AI that would drown out organic human content.[8][26][27][5][28] In 2016, the security firm Imperva released a report on bot traffic and found that automated programs were responsible for 52% of web traffic.[29][30] This report has been used as evidence in reports on the dead Internet theory.[2] Imperva's report for 2023 found that 49.6% of Internet traffic was automated, a 2% rise on 2022, which was partly attributed to artificial intelligence models scraping the web for training content.[31] In 2024, AI-generated images on Facebook, referred to as "AI slop", began going viral.[35][36] Subjects of these AI-generated images included various iterations of Jesus "meshed in various forms" with shrimp, flight attendants, and black children next to artwork they supposedly created. Many of these iterations have hundreds or even thousands of AI comments that say "Amen".[37][38] These images have been cited as an example of why the Internet feels "dead".[39] Sommerer discussed Shrimp Jesus in detail in his article as a symbol representing the shift in the Internet, stating: "Just as Jesus was supposedly the messenger for God, Shrimp Jesus is the messenger for the fatal system maneuvered ourselves into. Decoupled, proliferated, and in a state of exponential metastasis."[18] Facebook includes an option to provide AI-generated responses to group posts. Such responses appear if a user explicitly tags @MetaAI in a post, or if the post includes a question and no other users have responded to it within an hour.[40] In January 2025, interest in the theory renewed following statements from Meta on its plans to introduce new AI-powered autonomous accounts.[41] Connor Hayes, vice-president of product for generative AI at Meta, stated: "We expect these AIs to actually, over time, exist on our platforms, kind of in the same way that accounts do...They'll have bios and profile pictures and be able to generate and share content powered by AI on the platform."[42] In the past, the Reddit website allowed free access to its API and data, which allowed users to employ third-party moderation apps and train AI in human interaction.[27] In 2023, the company moved to charge for access to its user dataset. Companies training AI are expected to continue to use this data for training future AI.[citation needed] As LLMs such as ChatGPT become available to the general public, they are increasingly being employed on Reddit by users and bot accounts.[27] Professor Toby Walsh, a computer scientist at the University of New South Wales, said in an interview with Business Insider that training the next generation of AI on content created by previous generations could cause the content to suffer.[27] University of South Florida professor John Licato compared this situation of AI-generated web content flooding Reddit to the dead Internet theory.[27] Since 2020, several Twitter accounts have posted tweets starting with the phrase "I hate texting" followed by an alternative activity, such as "i hate texting i just want to hold ur hand", or "i hate texting just come live with me".[2] These posts received tens of thousands of likes, many of which are suspected to be from bot accounts.
Proponents of the dead Internet theory have used these accounts as an example.[2][12] The proportion of Twitter accounts run by bots became a major issue during Elon Musk's acquisition of the company.[44][45][46][47] Musk disputed Twitter's claim that fewer than 5% of its monetizable daily active users (mDAU) were bots.[44][48] Musk commissioned the company Cyabra to estimate what percentage of Twitter accounts were bots, with one study estimating 13.7% and another estimating 11%.[44] CounterAction, another firm commissioned by Musk, estimated that 5.3% of accounts were bots.[49] Some bot accounts provide services, such as one noted bot that can provide stock prices when asked, while others troll, spread misinformation, or try to scam users.[48] Believers in the dead Internet theory have pointed to this incident as evidence.[50] In 2024, TikTok began discussing offering the use of virtual influencers to advertising agencies.[15] In a 2024 article in Fast Company, journalist Michael Grothaus linked this and other AI-generated content on social media to the dead Internet theory, referring to the content as "AI-slime".[15] On YouTube, there is an online market for fake views to boost a video's credibility and reach broader audiences.[51] At one point, fake views were so prevalent that some engineers were concerned YouTube's algorithm for detecting them would begin to treat the fake views as the default and start misclassifying real ones.[51][2] YouTube engineers coined the term "the Inversion" to describe this phenomenon.[51][16][28] YouTube bots and the fear of "the Inversion" were cited as support for the dead Internet theory in a thread on the Internet forum Melonland.[2] SocialAI, an app created on September 18, 2024 by Michael Sayman, was built for the sole purpose of chatting with AI bots, without human interaction.[52] An article on the Ars Technica website linked SocialAI to the dead Internet theory.[52][53] The dead Internet theory has been discussed among users of the social media platform Twitter, where users have noted that bot activity has affected their experience.[2] Numerous YouTube channels and online communities, including the Linus Tech Tips forums and the Joe Rogan subreddit, have covered the dead Internet theory, which has helped to advance the idea into mainstream discourse.[2] There has also been discussion of, and memes about, this topic on the app TikTok, as AI-generated content has become more mainstream.[attribution needed]
https://en.wikipedia.org/wiki/Dead_Internet_theory
In algebra, the zero-product property states that the product of two nonzero elements is nonzero. In other words, if $ab = 0$, then $a = 0$ or $b = 0$. This property is also known as the rule of zero product, the null factor law, the multiplication property of zero, the nonexistence of nontrivial zero divisors, or one of the two zero-factor properties.[1] All of the number systems studied in elementary mathematics (the integers $\mathbb{Z}$, the rational numbers $\mathbb{Q}$, the real numbers $\mathbb{R}$, and the complex numbers $\mathbb{C}$) satisfy the zero-product property. In general, a ring which satisfies the zero-product property is called a domain. Suppose $A$ is an algebraic structure. We might ask: does $A$ have the zero-product property? In order for this question to have meaning, $A$ must have both additive structure and multiplicative structure.[2] Usually one assumes that $A$ is a ring, though it could be something else, e.g. the set of nonnegative integers $\{0, 1, 2, \ldots\}$ with ordinary addition and multiplication, which is only a (commutative) semiring. Note that if $A$ satisfies the zero-product property, and if $B$ is a subset of $A$, then $B$ also satisfies the zero-product property: if $a$ and $b$ are elements of $B$ such that $ab = 0$, then either $a = 0$ or $b = 0$, because $a$ and $b$ can also be considered as elements of $A$. Suppose $P$ and $Q$ are univariate polynomials with real coefficients, and $x$ is a real number such that $P(x)Q(x) = 0$. (Actually, we may allow the coefficients and $x$ to come from any integral domain.) By the zero-product property, it follows that either $P(x) = 0$ or $Q(x) = 0$. In other words, the roots of $PQ$ are precisely the roots of $P$ together with the roots of $Q$. Thus, one can use factorization to find the roots of a polynomial. For example, the polynomial $x^3 - 2x^2 - 5x + 6$ factorizes as $(x-3)(x-1)(x+2)$; hence, its roots are precisely 3, 1, and −2. In general, suppose $R$ is an integral domain and $f$ is a monic univariate polynomial of degree $d \geq 1$ with coefficients in $R$. Suppose also that $f$ has $d$ distinct roots $r_1, \ldots, r_d \in R$. It follows (but we do not prove here) that $f$ factorizes as $f(x) = (x - r_1) \cdots (x - r_d)$. By the zero-product property, it follows that $r_1, \ldots, r_d$ are the only roots of $f$: any root of $f$ must be a root of $(x - r_i)$ for some $i$. In particular, $f$ has at most $d$ distinct roots. If however $R$ is not an integral domain, then the conclusion need not hold. For example, the cubic polynomial $x^3 + 3x^2 + 2x$ has six roots in $\mathbb{Z}_6$ (though it has only three roots in $\mathbb{Z}$).
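The worked example above, and the failure in $\mathbb{Z}_6$, can be checked directly; a small sketch using SymPy:

```python
# Roots via factorization (zero-product property), and its failure in Z_6.
from sympy import symbols, factor, solve

x = symbols('x')
p = x**3 - 2*x**2 - 5*x + 6
print(factor(p))    # (x - 3)*(x - 1)*(x + 2)
print(solve(p, x))  # [-2, 1, 3]

# Z_6 is not an integral domain, so a cubic may have more than 3 roots there:
print([n for n in range(6) if (n**3 + 3*n**2 + 2*n) % 6 == 0])  # all of 0..5
```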
https://en.wikipedia.org/wiki/Zero-product_property
In mathematics, a singular perturbation problem is a problem containing a small parameter that cannot be approximated by setting the parameter value to zero. More precisely, the solution cannot be uniformly approximated by an asymptotic expansion of the form $\sum_{n} \delta_n(\varepsilon)\,\psi_n(x)$ as $\varepsilon \to 0$. Here $\varepsilon$ is the small parameter of the problem and $\delta_n(\varepsilon)$ is a sequence of functions of $\varepsilon$ of increasing order, such as $\delta_n(\varepsilon) = \varepsilon^n$. This is in contrast to regular perturbation problems, for which a uniform approximation of this form can be obtained. Singularly perturbed problems are generally characterized by dynamics operating on multiple scales. Several classes of singular perturbations are outlined below. The term "singular perturbation" was coined in the 1940s by Kurt Otto Friedrichs and Wolfgang R. Wasow.[1] A perturbed problem whose solution can be approximated on the whole problem domain, whether space or time, by a single asymptotic expansion has a regular perturbation. Most often in applications, an acceptable approximation to a regularly perturbed problem is found by simply replacing the small parameter $\varepsilon$ by zero everywhere in the problem statement. This corresponds to taking only the first term of the expansion, yielding an approximation that converges, perhaps slowly, to the true solution as $\varepsilon$ decreases. The solution to a singularly perturbed problem cannot be approximated in this way: as seen in the examples below, a singular perturbation generally occurs when a problem's small parameter multiplies its highest operator. Thus naively taking the parameter to be zero changes the very nature of the problem. In the case of differential equations, boundary conditions cannot be satisfied; in algebraic equations, the possible number of solutions is decreased. Singular perturbation theory is a rich and ongoing area of exploration for mathematicians, physicists, and other researchers. The methods used to tackle problems in this field are many. The more basic of these include the method of matched asymptotic expansions and the WKB approximation for spatial problems, and in time, the Poincaré–Lindstedt method, the method of multiple scales, and periodic averaging. Numerical methods for solving singular perturbation problems are also popular.[2] For books on singular perturbation in ODEs and PDEs, see for example Holmes, Introduction to Perturbation Methods,[3] Hinch, Perturbation Methods,[4] or Bender and Orszag, Advanced Mathematical Methods for Scientists and Engineers.[5] Each of the examples described below shows how a naive perturbation analysis, which assumes that the problem is regular instead of singular, will fail. Some show how the problem may be solved by more sophisticated singular methods. Differential equations that contain a small parameter premultiplying the highest-order term typically exhibit boundary layers, so that the solution evolves on two different scales. For example, consider a boundary value problem in which the small parameter multiplies the highest derivative. Its solution when $\varepsilon = 0.1$ changes rapidly near the origin. If we naively set $\varepsilon = 0$, we obtain an "outer" solution that does not model the boundary layer, where $x$ is close to zero.
For more details on how to obtain the uniformly valid approximation, see the method of matched asymptotic expansions. An electrically driven robot manipulator can have slower mechanical dynamics and faster electrical dynamics, thus exhibiting two time scales. In such cases, we can divide the system into two subsystems, one corresponding to the faster dynamics and the other to the slower dynamics, and then design controllers for each of them separately. Through a singular perturbation technique, we can make these two subsystems independent of each other, thereby simplifying the control problem. Consider a class of systems described by the equations $\dot{x}_1 = f_1(x_1, x_2)$, $\varepsilon\,\dot{x}_2 = f_2(x_1, x_2)$, with $0 < \varepsilon \ll 1$. The second equation indicates that the dynamics of $x_2$ is much faster than that of $x_1$. A theorem due to Tikhonov[6] states that, with the correct conditions on the system, it will initially and very quickly approximate the solution to the equations on some interval of time and that, as $\varepsilon$ decreases toward zero, the system will approach the solution more closely in that same interval.[7] In fluid mechanics, the properties of a slightly viscous fluid are dramatically different outside and inside a narrow boundary layer; thus the fluid exhibits multiple spatial scales. Reaction–diffusion systems in which one reagent diffuses much more slowly than another can form spatial patterns marked by areas where a reagent exists and areas where it does not, with sharp transitions between them. In ecology, predator–prey models, where $u$ is the prey and $v$ is the predator, have been shown to exhibit such patterns.[8] Consider the problem of finding all roots of the polynomial $p(x) = \varepsilon x^3 - x^2 + 1$. In the limit $\varepsilon \to 0$, this cubic degenerates into the quadratic $1 - x^2$, with roots at $x = \pm 1$. Substituting a regular perturbation series $x = x_0 + \varepsilon x_1 + \varepsilon^2 x_2 + \cdots$ into the equation and equating equal powers of $\varepsilon$ only yields corrections to these two roots: $x = 1 + \tfrac{\varepsilon}{2} + O(\varepsilon^2)$ and $x = -1 + \tfrac{\varepsilon}{2} + O(\varepsilon^2)$. To find the other root, singular perturbation analysis must be used. We must then deal with the fact that the equation degenerates into a quadratic when we let $\varepsilon$ tend to zero; in that limit one of the roots escapes to infinity. To prevent this root from becoming invisible to the perturbative analysis, we must rescale $x$ to keep track of this escaping root, so that in terms of the rescaled variable it does not escape. We define a rescaled variable $y = x\,\varepsilon^{\nu}$, where the exponent $\nu$ will be chosen such that we rescale just fast enough so that the root is at a finite value of $y$ in the limit of $\varepsilon$ to zero, but such that it does not collapse to zero where the other two roots will end up. In terms of $y$ we have $\varepsilon^{1-3\nu} y^3 - \varepsilon^{-2\nu} y^2 + 1 = 0.$ We can see that for $\nu < 1$ the $y^3$ term is dominated by the lower-degree terms, while at $\nu = 1$ it becomes as dominant as the $y^2$ term, with both dominating the remaining term. This point where the highest-order term no longer vanishes in the limit $\varepsilon \to 0$, by becoming equally dominant to another term, is called significant degeneration; it yields the correct rescaling to make the remaining root visible.
This choice ($\nu = 1$) yields, after multiplying through by $\varepsilon^{2}$, $y^3 - y^2 + \varepsilon^2 = 0.$ Substituting the perturbation series $y = y_0 + \varepsilon^2 y_1 + \cdots$ yields, at leading order, $y_0^3 - y_0^2 = 0.$ We are then interested in the root at $y_0 = 1$; the double root at $y_0 = 0$ corresponds to the two roots found above, which collapse to zero under the infinite rescaling. Calculating the first few terms of the series then yields $y = 1 - \varepsilon^2 + O(\varepsilon^4)$, i.e. $x = \varepsilon^{-1} - \varepsilon + O(\varepsilon^3)$ for the singular root.
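These asymptotic predictions are easy to check numerically; the following sketch compares them with the exact roots for $\varepsilon = 0.01$:

```python
# Numerical check of the roots of eps*x^3 - x^2 + 1 = 0 for small eps.
import numpy as np

eps = 0.01
exact = np.sort(np.roots([eps, -1.0, 0.0, 1.0]).real)  # coefficients, highest degree first
asymptotic = np.sort([-1 + eps / 2, 1 + eps / 2, 1 / eps - eps])

print(exact)       # approx [-0.995,  1.005, 99.99]
print(asymptotic)  # the two regular roots plus the singular root near 1/eps
```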
https://en.wikipedia.org/wiki/Singular_perturbation
In complex analysis, a Schwarz–Christoffel mapping is a conformal map of the upper half-plane or the complex unit disk onto the interior of a simple polygon. Such a map is guaranteed to exist by the Riemann mapping theorem (stated by Bernhard Riemann in 1851); the Schwarz–Christoffel formula provides an explicit construction. These mappings were introduced independently by Elwin Christoffel in 1867 and Hermann Schwarz in 1869. Schwarz–Christoffel mappings are used in potential theory and some of its applications, including minimal surfaces, hyperbolic art, and fluid dynamics. Consider a polygon in the complex plane. The Riemann mapping theorem implies that there is a biholomorphic mapping $f$ from the upper half-plane to the interior of the polygon. The function $f$ maps the real axis to the edges of the polygon. If the polygon has interior angles $\alpha, \beta, \gamma, \ldots$, then this mapping is given by $f(\zeta) = \int^{\zeta} \frac{K}{(w-a)^{1-\alpha/\pi}(w-b)^{1-\beta/\pi}(w-c)^{1-\gamma/\pi} \cdots}\, \mathrm{d}w,$ where $K$ is a constant, and $a < b < c < \cdots$ are the values, along the real axis of the $\zeta$ plane, of points corresponding to the vertices of the polygon in the $z$ plane. A transformation of this form is called a Schwarz–Christoffel mapping. The integral can be simplified by mapping the point at infinity of the $\zeta$ plane to one of the vertices of the $z$-plane polygon. By doing this, the first factor in the formula becomes constant and so can be absorbed into the constant $K$. Conventionally, the point at infinity is mapped to the vertex with angle $\alpha$. In practice, to find a mapping to a specific polygon one needs to find the values $a < b < c < \cdots$ which generate the correct polygon side lengths. This requires solving a set of nonlinear equations, and in most cases can only be done numerically.[1] Consider a semi-infinite strip in the $z$ plane. This may be regarded as a limiting form of a triangle with vertices $P = 0$, $Q = \pi i$, and $R$ (with $R$ real), as $R$ tends to infinity. Now $\alpha = 0$ and $\beta = \gamma = \pi/2$ in the limit. Suppose we are looking for the mapping $f$ with $f(-1) = Q$, $f(1) = P$, and $f(\infty) = R$. Then $f$ is given by $f(\zeta) = \int^{\zeta} \frac{K}{(w-1)^{1/2}(w+1)^{1/2}}\, \mathrm{d}w.$ Evaluation of this integral yields $f(\zeta) = K \log\!\left(\zeta + \sqrt{\zeta^2 - 1}\right) + C,$ where $C$ is a (complex) constant of integration. Requiring that $f(-1) = Q$ and $f(1) = P$ gives $C = 0$ and $K = 1$. Hence the Schwarz–Christoffel mapping is given by $f(\zeta) = \log\!\left(\zeta + \sqrt{\zeta^2 - 1}\right).$ A mapping to a plane triangle with interior angles $\pi a$, $\pi b$ and $\pi(1 - a - b)$ is given by $f(\zeta) = \int^{\zeta} w^{a-1}(1-w)^{b-1}\, \mathrm{d}w,$ which can be expressed in terms of hypergeometric functions or incomplete beta functions. The upper half-plane is mapped to a triangle with circular arcs for edges by the Schwarz triangle map. The upper half-plane is mapped to the square via a formula involving $F$, the incomplete elliptic integral of the first kind. An analogue of SC mapping that works also for multiply connected domains is presented in: Case, James (2008), "Breakthrough in Conformal Mapping" (PDF), SIAM News, 41 (1).
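The semi-infinite strip example can be verified numerically. The sketch below evaluates $f(\zeta) = \log(\zeta + \sqrt{\zeta^2 - 1})$ (with NumPy's principal branches) at points of the real axis and checks that they land on the three edges of the strip $\{w : 0 \le \operatorname{Im} w \le \pi,\ \operatorname{Re} w \ge 0\}$:

```python
# Numerical check of the strip map f(z) = log(z + sqrt(z^2 - 1)) with K = 1, C = 0.
import numpy as np

def f(z):
    z = np.asarray(z, dtype=complex)
    return np.log(z + np.sqrt(z * z - 1))

print(f([1.0, 2.0, 10.0]))     # x >= 1  -> bottom edge (Im = 0); f(1) = P = 0
print(f([-1.0, -2.0, -10.0]))  # x <= -1 -> top edge (Im = pi);  f(-1) = Q = pi*i
print(f([0.0, 0.5, -0.5]))     # -1 < x < 1 -> left edge (Re = 0)
print(f([1j]))                 # an interior point maps to the strip's interior
```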
https://en.wikipedia.org/wiki/Schwarz%E2%80%93Christoffel_mapping
In statistics, probable error defines the half-range of an interval about a central point for the distribution, such that half of the values from the distribution will lie within the interval and half outside.[1] Thus for a symmetric distribution it is equivalent to half the interquartile range, or to the median absolute deviation. One such use of the term probable error in this sense is as the name for the scale parameter of the Cauchy distribution, which does not have a standard deviation. The probable error can also be expressed as a multiple of the standard deviation $\sigma$,[1][2] which requires that at least the second statistical moment of the distribution exist, whereas the other definition does not. For a normal distribution this is $\gamma = 0.6745 \times \sigma$.
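For the normal case, the 0.6745 factor is just the 75% quantile of the standard normal distribution, which can be checked in a few lines (a sketch using SciPy):

```python
# Probable error of a normal distribution: gamma = sigma * z_{0.75}.
from scipy.stats import norm

gamma = norm.ppf(0.75)                      # ~0.674490
print(gamma)
# Half of the probability mass lies within one probable error of the mean:
print(norm.cdf(gamma) - norm.cdf(-gamma))   # 0.5
```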
https://en.wikipedia.org/wiki/Probable_error
An electronic health record (EHR) is the systematized collection of electronically stored patient and population health information in a digital format.[1] These records can be shared across different health care settings. Records are shared through network-connected, enterprise-wide information systems or other information networks and exchanges. EHRs may include a range of data, including demographics, medical history, medications and allergies, immunization status, laboratory test results, radiology images, vital signs, personal statistics like age and weight, and billing information.[2] For several decades, EHRs have been touted as key to increasing the quality of care.[3] An EHR combines all patients' demographics into a large pool, which assists providers in the creation of "new treatments or innovation in healthcare delivery" to improve quality outcomes in healthcare.[4] Combining multiple types of clinical data from the system's health records has helped clinicians identify and stratify chronically ill patients. EHRs can also improve quality of care through the use of data and analytics to prevent hospitalizations among high-risk patients. EHR systems are designed to store data accurately and to capture a patient's state across time. They eliminate the need to track down a patient's previous paper medical records and assist in ensuring that data is up to date,[5] accurate, and legible. They also allow open communication between the patient and the provider while providing "privacy and security".[5] EHRs are cost-efficient, decrease the risk of lost paperwork, and can reduce the risk of data replication, as there is only one modifiable file, which means the file is more likely to be up to date.[5] Because the digital information is searchable and stored in a single file, EMRs (electronic medical records) are more effective when extracting medical data to examine possible trends and long-term changes in a patient. The widespread adoption of EHRs and EMRs may also facilitate population-based studies of medical records. The terms EHR, electronic patient record (EPR), and electronic medical record (EMR) have often been used interchangeably, but subtle differences exist.[6] The electronic health record (EHR) is a more longitudinal collection of the electronic health information of individual patients or populations. The EMR, in contrast, is the patient record created by providers for specific encounters in hospitals and ambulatory environments, and can serve as a data source for an EHR.[7][8] EMRs are essentially digital versions of the paper documents used in a clinician's office, typically functioning as an internal system within a practice.
An EMR includes the medical and treatment history of patients treated by that specific practice.[9] In contrast, a personal health record (PHR) is an electronic application for recording individual medical data that the individual patient controls and may make available to health providers.[10] While there is still considerable debate about the superiority of electronic health records over paper records, the research literature paints a more realistic picture of the benefits and downsides.[11] The increased transparency, portability, and accessibility acquired by the adoption of electronic medical records may increase the ease with which they can be accessed by healthcare professionals, but can also increase the amount of information stolen by unauthorized persons or unscrupulous users compared with paper medical records, as acknowledged by the increased security requirements for electronic medical records included in the Health Insurance Portability and Accountability Act (HIPAA) and by large-scale breaches in confidential records reported by EMR users.[12][13] Concerns about security contribute to the resistance shown to their adoption.[weasel words] Handwritten paper medical records may be poorly legible, which can contribute to medical errors.[14] Pre-printed forms, standardization of abbreviations, and standards for penmanship were encouraged to improve the reliability of paper medical records. An example of possible medical errors is the administration of medication. Medication is an intervention that can turn a person's status from stable to unstable very quickly. With paper documentation it is very easy to fail to properly document the administration of medication, the time given, or errors such as giving the "wrong drug, dose, form, or not checking for allergies", which could affect the patient negatively. It has been reported that these errors have been reduced by 55–83% because records are now online and require specific steps to avoid these errors.[15] Electronic records may help with the standardization of forms, terminology, and data input.[16][17] Digitization of forms facilitates the collection of data for epidemiology and clinical studies.[18][19] However, standardization may create challenges for local practice.[11] Overall, those with EMRs that have automated notes and records, order entry, and clinical decision support had fewer complications, lower mortality rates, and lower costs.[20] EMRs can be continuously updated (within certain legal limitations: see below). If the ability to exchange records between different EMR systems were perfected ("interoperability"[21]), it would facilitate the coordination of health care delivery in non-affiliated health care facilities. In addition, data from an electronic system can be used anonymously for statistical reporting in matters such as quality improvement, resource management, and public health communicable disease surveillance.[22] However, it is difficult to remove data from its context.[11] Providing patients with information is central to patient-centered health care and has been shown to positively affect health outcomes.[23] Providing patients access to their health records, including medical histories and test results via an EHR, is a legal right in some parts of the world.[23] There is evidence that patient access may help patients understand their conditions and actively involve them in their management.
For example, granting people who have type 2 diabetes access to their electronic health records may help them to reduce their blood sugar levels.[24][25][26] Challenges with sharing the electronic health record with patients include a risk of increased confusion or anxiety if a person does not understand or cannot contextualize the test results.[23] In addition, many EHRs are not designed for people of all educational levels and do not consider the needs of those with a lower level of education or those who are not fluent in the language.[23] Accessing the EHR requires a level of proficiency with electronic devices, which adds to a disparity for those without access or for those who have a mental or physical illness that restricts their access to the electronic system.[23] Electronic medical records could also be studied to quantify disease burdens – such as the number of deaths from antimicrobial resistance[27] – or to help identify causes of, factors of, links between,[28][29] and contributors to diseases,[30][31][32] especially when combined with genome-wide association studies.[33][34] This may enable increased flexibility, improved disease surveillance, better medical product safety surveillance,[35] better public health monitoring (such as for evaluation of health policy effectiveness),[36][37] increased quality of care (via guidelines[38] and improved medical history sharing[39][40]), and novel life-saving treatments. Privacy: for such purposes, electronic medical records could potentially be made available in securely anonymized or pseudonymized[41] forms to ensure patients' privacy is maintained,[42][34][43][44] even if data breaches occur. There are concerns about the efficacy of some currently applied pseudonymization and data protection techniques, including the applied encryption.[45][39] Documentation burden: while such records could enable avoiding duplication of work via records-sharing,[39][40] documentation burdens for medical facility personnel can be a further issue with EHRs. This burden could be reduced via voice recognition, optical character recognition, other technologies, physician involvement in software changes, and other means,[40][46][47][48] which could possibly reduce the documentation burden to below that of paper-based records.
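As a toy illustration of the pseudonymization idea mentioned above (and emphatically not a production design; key management and re-identification risk are the hard parts), a record's identifier can be replaced by a keyed hash so that datasets remain linkable without exposing the identifier itself. All names and values here are invented:

```python
# Minimal pseudonymization sketch: replace a patient identifier with a keyed hash.
import hmac
import hashlib

SECRET_KEY = b"replace-with-a-properly-managed-secret"  # hypothetical key

def pseudonymize(patient_id: str) -> str:
    """Deterministic keyed hash: the same input always maps to the same pseudonym."""
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()

record = {"patient_id": "MRN-0012345", "diagnosis": "E11.9"}  # hypothetical record
record["patient_id"] = pseudonymize(record["patient_id"])
print(record)
```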
Theoretically, free software such as GNU Health and other open-source health software could be used or modified for various purposes that use electronic medical records, i.e., via securely sharing anonymized patient treatments, medical history, and individual outcomes (including by common primary care physicians).[49] Ambulance services in Australia, the United States, and the United Kingdom have introduced EMR systems.[62][63] EMS encounters in the United States are recorded using various platforms and vendors in compliance with the NEMSIS (National EMS Information System) standard.[64] The benefits of electronic records in ambulances include patient data sharing, injury/illness prevention, better training for paramedics, review of clinical standards, better research options for pre-hospital care and design of future treatment options, data-based outcome improvement, and clinical decision support.[65] EHRs enable health information to be used and shared over secure networks. Using an EMR to read and write a patient's record is not only possible through a workstation but, depending on the type of system and health care setting, may also be possible through mobile devices that are handwriting capable,[67] such as tablets and smartphones. Electronic medical records may include access to personal health records (PHR), which makes individual notes from an EMR readily visible and accessible to consumers.[citation needed] Some EMR systems automatically monitor clinical events by analyzing patient data from an electronic health record to predict, detect, and potentially prevent adverse events. This can include discharge/transfer orders, pharmacy orders, radiology results, laboratory results, and any other data from ancillary services or provider notes.[68] This type of event monitoring has been implemented using the Louisiana Public Health Information Exchange, which links statewide public health data with electronic medical records. This system alerted medical providers when a patient with HIV/AIDS had not received care in over twelve months, and it greatly reduced the number of missed critical opportunities.[69] Within a meta-narrative systematic review of research in the field, various philosophical approaches to the EHR exist.[11] The health information systems literature has seen the EHR as a container holding information about the patient and a tool for aggregating clinical data for secondary uses (billing, audit, etc.). However, other research traditions see the EHR as a contextualized artifact within a socio-technical system. For example, actor-network theory would see the EHR as an actant in a network,[70] and research in computer-supported cooperative work (CSCW) sees the EHR as a tool supporting particular work. Several possible advantages of EHRs over paper records have been proposed, but there is debate about the degree to which these are achieved in practice.[71] Several studies call into question whether EHRs improve the quality of care.[11][72][73][74][75] However, one 2011 study in diabetes care, published in the New England Journal of Medicine, found evidence that practices with EHRs provided better quality care.[76] EMRs may eventually help improve care coordination.
An article in a trade journal suggests that since anyone using an EMR can view the patient's full chart, it cuts down on guessing histories and seeing multiple specialists, smooths transitions between care settings, and may allow better care in emergency situations.[77] EHRs may also improve prevention by providing doctors and patients better access to test results, identifying missing patient information, and offering evidence-based recommendations for preventive services.[78] The steep price of EHRs and provider uncertainty regarding the value they will derive from adoption, in the form of return on investment, significantly influence EHR adoption.[79] In a project initiated by the Office of the National Coordinator for Health Information, surveyors found that hospital administrators and physicians who had adopted EHRs noted that any gains in efficiency were offset by reduced productivity as the technology was implemented, as well as by the need to increase information technology staff to maintain the system.[79] The U.S. Congressional Budget Office concluded that the cost savings may occur only in large integrated institutions like Kaiser Permanente, and not in small physician offices. They challenged the Rand Corporation's estimates of savings: "Office-based physicians in particular may see no benefit if they purchase such a product—and may even suffer financial harm. Even though the use of health IT could generate cost savings for the health system at large that might offset the EHR's cost, many physicians might not be able to reduce their office expenses or increase their revenue sufficiently to pay for it. For example, the use of health IT could reduce the number of duplicated diagnostic tests. However, that improvement in efficiency would be unlikely to increase the income of many physicians."[80] One CEO of an EHR company has argued that if a physician performs tests in the office, it might reduce his or her income.[81] Doubts have been raised about cost savings from EHRs by researchers at Harvard University, the Wharton School of the University of Pennsylvania, Stanford University, and others.[75][82][83] In 2022, the chief executive of Guy's and St Thomas' NHS Foundation Trust, one of the biggest NHS organisations, said that the £450 million cost over 15 years of installing the Epic Systems electronic patient record across its six hospitals, which will reduce more than 100 different IT systems down to just a handful, was "chicken feed" when compared to the NHS's overall budget.[84] The implementation of EMRs can potentially decrease the time needed to identify patients upon hospital admission. Research published in the Annals of Internal Medicine showed that since the adoption of EMRs, a relative decrease in time of 65% has been recorded (from 130 to 46 hours).[85] The Healthcare Information and Management Systems Society, a very large U.S. healthcare IT industry trade group, observed in 2009 that EHR adoption rates "have been slower than expected in the United States, especially compared to other industry sectors and other developed countries. Aside from initial costs and lost productivity during EMR implementation, one key reason is lack of efficiency and usability of EMRs currently available."[86][87] The U.S. National Institute of Standards and Technology of the Department of Commerce studied usability in 2011 and lists a number of specific issues reported by health care workers.[88] The U.S.
The U.S. military's EHR, AHLTA, was reported to have significant usability issues.[89] Furthermore, studies such as one conducted in BMC Medical Informatics and Decision Making showed that although the implementation of electronic medical record systems has been a great assistance to general practitioners, there is still much room for revision in the overall framework and in the amount of training provided.[90] It has also been observed that efforts to improve EHR usability should be placed in the context of physician-patient communication.[91]

However, physicians are embracing mobile technologies such as smartphones and tablets at a rapid pace. According to a 2012 survey by Physicians Practice, 62.6 percent of respondents (1,369 physicians, practice managers, and other healthcare providers) said they use mobile devices in the performance of their job. Mobile devices are increasingly able to sync up with electronic health record systems, allowing physicians to access patient records from remote locations. Most devices are extensions of desktop EHR systems, using a variety of software to communicate and access files remotely. The advantages of instant access to patient records at any time and place are clear, but they raise security concerns. As mobile systems become more prevalent, practices will need comprehensive policies that govern security measures and patient privacy regulations.[92]

Other advanced computational techniques allow EHRs to be evaluated much more quickly. Natural language processing is increasingly used to search EMRs, especially by searching and analyzing notes and text that would otherwise be inaccessible for study when seeking to improve care.[93] One study found that several machine learning methods could be used to predict a patient's mortality risk with moderate success, with the most successful approach combining a convolutional neural network and a heterogeneous graph model.[94]

When a health facility has documented its workflow and chosen its software solution, it must consider the hardware and supporting device infrastructure for the end users. Staff and patients must engage with various devices throughout a patient's stay and the charting workflow: computers, laptops, all-in-one computers, tablets, mice, keyboards, and monitors are all hardware devices that may be utilized. Other considerations include supporting work surfaces and equipment, such as wall desks or articulating arms, for end users to work on. Another important factor is how all these devices will be physically secured and how they will be charged, so that staff can always use them for EHR charting when needed.

The success of eHealth interventions largely depends on the adopter's ability to fully understand workflow and anticipate potential clinical processes prior to implementation. Failure to do so can create costly and time-consuming interruptions to service delivery.[95]

Per empirical research in social informatics, information and communications technology (ICT) use can lead to both intended and unintended consequences.[96][97][98] A 2008 Sentinel Event Alert from the U.S. Joint Commission, the organization that accredits American hospitals to provide healthcare services, states: "As health information technology (HIT) and 'converging technologies'—the interrelationship between medical devices and HIT—are increasingly adopted by health care organizations, users must be mindful of the safety risks and preventable adverse events that these implementations can create or perpetuate.
Technology-related adverse events can be associated with all components of a comprehensive technology system and may involve errors of either commission or omission. These unintended adverse events typically stem from human-machine interfaces or organization/system design."[99] The Joint Commission cites as an example the United States Pharmacopeia MEDMARX database,[100] in which, of 176,409 medication-error records for 2006, approximately 25 percent (43,372) involved some aspect of computer technology as at least one cause of the error.

The British National Health Service (NHS) reports specific examples of potential and actual EHR-caused unintended consequences in its 2009 document on the management of clinical risk relating to the deployment and use of health software.[101] In February 2010, an American Food and Drug Administration (FDA) memorandum noted that EHR unintended consequences include EHR-related medical errors from (1) errors of commission (EOC), (2) errors of omission or transmission (EOT), (3) errors in data analysis (EDA), and (4) incompatibility between multi-vendor software applications or systems (ISMA), citing various examples. The FDA also noted that the "absence of mandatory reporting enforcement of H-IT safety issues limits the numbers of medical device reports (MDRs) and impedes a more comprehensive understanding of the actual problems and implications."[102][103]

A 2010 Board Position Paper by the American Medical Informatics Association (AMIA) contains recommendations on EHR-related patient safety, transparency, ethics education for purchasers and users, adoption of best practices, and re-examination of regulation of electronic health applications.[104] Beyond concrete issues such as conflicts of interest and privacy concerns, questions have been raised about how the physician-patient relationship would be affected by an electronic intermediary.[105][106]

During the implementation phase, cognitive workload for healthcare professionals may be significantly increased as they familiarize themselves with a new system.[107] EHRs are almost invariably detrimental to physician productivity, whether the data is entered during the encounter or sometime thereafter.[108] It is possible for an EHR to increase physician productivity[109] by providing a fast and intuitive interface for viewing and understanding patient clinical data and by minimizing the number of clinically irrelevant questions,[citation needed] but that is almost never the case.[citation needed] The other way to mitigate the detriment to physician productivity is to hire scribes to work alongside medical practitioners, which is almost never financially viable.[citation needed]

As a result, many have conducted studies like the one in the Journal of the American Medical Informatics Association, "The Extent And Importance of Unintended Consequences Related To Computerized Provider Order Entry," which seeks to understand the degree and significance of unplanned adverse consequences related to computerized physician order entry, how to interpret such adverse events, and the importance of their management for the overall success of computerized physician order entry.[110]

In the United States, Great Britain, and Germany, the concept of a national centralized server model of healthcare data has been poorly received.[111] Concerns include issues of privacy and security.[112][113] In the European Union (EU), a new directly binding instrument, a regulation of the European Parliament and of the Council, was passed in 2016 to
go into effect in 2018 to protect the processing of personal data, including that for purposes of health care: the General Data Protection Regulation.

Threats to health care information can be internal, external, intentional, or unintentional. Health information systems professionals consider these particular threats when discussing ways to protect patients' health information. It has been found that there is a lack of security awareness among health care professionals in countries such as Spain.[114] The Health Insurance Portability and Accountability Act (HIPAA) has developed a framework to mitigate the harm of these threats that is comprehensive but not so specific as to limit the options of healthcare professionals who may have access to different technology.[115] With the increase in clinical notes being shared electronically due to the 21st Century Cures Act, sensitive terms in the records of all patients, including minors, are increasingly shared among care teams, complicating efforts to maintain privacy.[116]

The Personal Information Protection and Electronic Documents Act (PIPEDA) was given Royal Assent in Canada on 13 April 2000 to establish rules on the use, disclosure, and collection of personal information, in both non-digital and electronic forms. In 2002, PIPEDA was extended to the health sector in Stage 2 of the law's implementation.[117] There are four provinces where this law does not apply, because their privacy laws were considered similar to PIPEDA: Alberta, British Columbia, Ontario, and Quebec.

The COVID-19 pandemic in the United Kingdom led to radical changes. NHS Digital and NHSX made changes, said to be only for the duration of the crisis, to the information-sharing system GP Connect across England, meaning that patient records are shared across primary care. Only patients who have specifically opted out are excluded.[118]

Legal liability in all aspects of health care was an increasing problem in the 1990s and 2000s. The surge in the per capita number of attorneys in the USA[119] and changes in the tort system caused an increase in the cost of every aspect of health care, and health care technology was no exception.[120] Failure or damage caused during installation or utilization of an EHR system has been feared as a threat in lawsuits.[121] Similarly, the implementation of electronic health records can carry significant legal risks.[122]

Liability is of special concern for small EHR system makers, which may be forced to abandon markets based on the regional liability climate.[123][unreliable source] Larger EHR providers (or government-sponsored providers of EHRs) are better able to withstand legal challenges.

Electronic documentation of patient visits and data could open physicians to an increased incidence of malpractice suits. Disabling physician alerts, selecting from drop-down menus, and using templates can encourage physicians to skip a complete review of past patient history and medications and thus miss important data. Another potential problem is electronic time stamps: many physicians are unaware that EHR systems produce an electronic time stamp every time the patient record is updated. If a malpractice claim goes to court, the prosecution can request a detailed record of all entries made in a patient's electronic record.
Waiting to chart patient notes until the end of the day and making addendums to records well after the patient visit can be problematic, in that this practice could result in less-than-accurate patient data or indicate a possible intent to illegally alter the patient's record.[124]

In some communities, hospitals attempt to standardize EHR systems by providing discounted versions of the hospital's software to local healthcare providers. A challenge to this practice has been raised as being a violation of Stark rules that prohibit hospitals from preferentially assisting community healthcare providers.[125] In 2006, however, exceptions to the Stark rule were enacted to allow hospitals to furnish software and training to community providers, mostly removing this legal obstacle.[126][unreliable source][127][unreliable source]

In cross-border use cases of EHR implementations, the additional issue of legal interoperability arises. Different countries may have diverging legal requirements for the content or usage of electronic health records, which can require radical changes to the technical makeup of the EHR implementation in question, especially when fundamental legal incompatibilities are involved. Exploring these issues is therefore often necessary when implementing cross-border EHR solutions.[128]

The United Nations World Health Organization (WHO) administration intentionally does not contribute to an internationally standardized view of medical records, nor to personal health records. However, the WHO contributes to minimum-requirements definitions for developing countries.[129] The United Nations-accredited standardization body, the International Organization for Standardization (ISO), has however reviewed and adopted certain standards in the scope of the HL7 platform for health care informatics. The respective standards are available as ISO/HL7 10781:2009, Electronic Health Record-System Functional Model, Release 1.1,[130] and a subsequent set of detailing standards.[131]

The majority of countries in Europe have made a strategy for the development and implementation of electronic health record systems. This would mean greater access to health records by numerous stakeholders, even from countries with lower levels of privacy protection. The implementation of the Cross-Border Health Directive and the European Commission's plans to centralize all health records are of prime concern to the EU public, who believe that health care organizations and governments cannot be trusted to manage their data electronically without exposing it to more threats.

The idea of a centralized electronic health record system was poorly received by the public, who are wary that governments may use the system beyond its intended purpose. There is also the risk of privacy breaches that could allow sensitive health care information to fall into the wrong hands. Some countries have enacted laws requiring safeguards to be put in place to protect the security and confidentiality of medical information. These safeguards add protection for records that are shared electronically and give patients some important rights to monitor their medical records and receive notification of the loss or unauthorized acquisition of health information.
The United States and the EU have imposed mandatory medical data breach notifications.[132] The purpose of a personal data breach notification is to protect individuals so that they can take all the necessary actions to limit the undesirable effects of the breach, and to motivate the organization to improve the security of its infrastructure to protect the confidentiality of the data. U.S. law requires entities to inform individuals in the event of a breach, while the EU Directive currently requires breach notification only when the breach is likely to adversely affect the privacy of the individual. Personal health data is valuable to individuals, and it is therefore difficult to assess whether a breach will cause reputational or financial harm or adversely affect one's privacy. The breach notification law in the EU provides better privacy safeguards with fewer exemptions, unlike the U.S. law, which exempts unintentional acquisition, access, or use of protected health information and inadvertent disclosure under a good-faith belief.[132] The U.S. federal government has issued new rules on electronic health records.[133]

A common data model (CDM) is a specification that describes how data from multiple sources (e.g., multiple EHR systems) can be combined. Many CDMs use a relational model (e.g., the OMOP CDM). A relational CDM defines the names of tables and table columns and restricts what values are valid; a minimal sketch of this idea is given below.

Each health care environment functions differently, often in significant ways. It is difficult to create a "one-size-fits-all" EHR system. Many first-generation EHRs were designed to fit the needs of primary care physicians, leaving certain specialties significantly less satisfied with their EHR system.[citation needed] An ideal EHR system will have record standardization but also interfaces that can be customized to each provider environment. Modularity in an EHR system facilitates this. Many EHR companies employ vendors to provide customization, which can often be done so that a physician's input interface closely mimics previously utilized paper forms.[135]

Providers have reported negative effects on communication, increased overtime, and missing records when a non-customized EMR system was utilized.[136] Customizing the software when it is released yields the highest benefits because it is adapted for the users and tailored to workflows specific to the institution.[137] However, customization can have its disadvantages. Implementing a customized system may incur higher initial costs, as more time must be spent by both the implementation team and the healthcare provider to understand the workflow needs. Development and maintenance of these interfaces and customizations can also lead to higher software implementation and maintenance costs.[138][unreliable source][139][unreliable source]

An important consideration when developing electronic health records is to plan for the long-term preservation and storage of these records. The field will need to come to a consensus on the length of time to store EHRs, on methods to ensure the future accessibility and compatibility of archived data with yet-to-be-developed retrieval systems, and on how to ensure the physical and virtual security of the archives.[citation needed] Additionally, considerations about the long-term storage of electronic health records are complicated by the possibility that the records might one day be used longitudinally and integrated across sites of care.
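As a rough illustration of how a relational CDM pins down table names, column names, and permitted values, here is a hypothetical C++ sketch loosely inspired by the OMOP CDM's person table. The field names, concept identifiers, and range checks are assumptions made for the example, not the actual OMOP specification.

```cpp
#include <cstdint>
#include <set>
#include <stdexcept>

// Hypothetical sketch of one row of a CDM table: the model fixes the
// table name ("person"), the column names, and the column types.
struct PersonRow {
    std::int64_t person_id;          // primary key
    std::int32_t gender_concept_id;  // must come from a standard vocabulary
    std::int32_t year_of_birth;      // restricted to a plausible range
};

// A relational CDM restricts which values are valid, not just the schema.
void validate(const PersonRow& row) {
    // Illustrative vocabulary of allowed concept ids (assumed values).
    static const std::set<std::int32_t> allowed_gender_concepts = {8507, 8532};
    if (!allowed_gender_concepts.count(row.gender_concept_id))
        throw std::invalid_argument("gender_concept_id not in vocabulary");
    if (row.year_of_birth < 1850 || row.year_of_birth > 2025)
        throw std::invalid_argument("year_of_birth out of range");
}
```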
Records have the potential to be created, used, edited, and viewed by multiple independent entities, including, but not limited to, primary care physicians, hospitals, insurance companies, and patients. Mandl et al. have noted that "choices about the structure and ownership of these records will have profound impact on the accessibility and privacy of patient information."[140]

The required length of storage of an individual electronic health record will depend on national and state regulations, which are subject to change over time.[141] Ruotsalainen and Manning have found that the typical preservation time of patient data varies between 20 and 100 years. In one example of how an EHR archive might function, their research "describes a co-operative trusted notary archive (TNA) which receives health data from different EHR-systems, stores data together with associated meta-information for long periods and distributes EHR-data objects. TNA can store objects in XML-format and prove the integrity of stored data with the help of event records, timestamps and archive e-signatures."[142] In addition to the TNA archive described by Ruotsalainen and Manning, other combinations of EHR systems and archive systems are possible. Again, overall requirements for the design and security of the system and its archive will vary and must function under ethical and legal principles specific to the time and place.[citation needed]

While it is currently unknown precisely how long EHRs will be preserved, it is certain that the length of time will exceed the average shelf life of paper records. The evolution of technology is such that the programs and systems used to input information will likely not be available to a user who desires to examine archived data. One proposed solution to the challenge of long-term accessibility and usability of data by future systems is to standardize information fields in a time-invariant way, such as with the XML language. Olhede and Peterson report that "the basic XML-format has undergone preliminary testing in Europe by a Spri project and been found suitable for EU purposes. Spri has advised the Swedish National Board of Health and Welfare and the Swedish National Archive to issue directives concerning the use of XML as the archive-format for EHCR (Electronic Health Care Record) information."[143]

When care is provided at two different facilities, it may be difficult to update records at both locations in a coordinated fashion. Two models have been used to address this problem: a centralized data server solution, and a peer-to-peer file synchronization program (as has been developed for other peer-to-peer networks). However, synchronization programs for distributed storage models are only useful once record standardization has occurred. Merging already existing public health care databases is a common software challenge. The ability of electronic health record systems to provide this function is a key benefit and can improve health care delivery.[144][145][146]

The sharing of patient information between health care organizations and IT systems is changing from a "point-to-point" model to a "many-to-many" one. The European Commission is supporting moves to facilitate cross-border interoperability of e-health systems and to remove potential legal hurdles. To allow for globally shared workflow, studies will be locked while they are being read and then unlocked and updated once reading is complete.
This enables radiologists to serve multiple health care facilities and to read and report across large geographical areas, thus balancing workloads. The biggest challenges will relate to interoperability and legal clarity. In some countries it is almost forbidden to practice teleradiology. The variety of languages spoken is a problem, and multilingual reporting templates for all anatomical regions are not yet available. However, the market for e-health and teleradiology is evolving more rapidly than any laws or regulations.[147]

For the United States, see Electronic health records in the United States.

In 2011, Moscow's government launched a major project known as UMIAS as part of its electronic healthcare initiative. UMIAS, the Unified Medical Information and Analytical System, connects more than 660 clinics and over 23,600 medical practitioners in Moscow. It covers 9.5 million patients, contains more than 359 million patient records, and supports more than 500,000 different transactions daily. Approximately 700,000 Muscovites use remote links to make appointments every week.[148][149]

The European Commission wants to boost the digital economy by enabling all Europeans to have access to online medical records anywhere in Europe. With the new European Health Data Space (EHDS) Regulation, steps are being taken toward a centralized European health record system. However, the concept of a supranational central server raises concern about storing electronic medical records in a central location, and the privacy threat posed by a supranational network is a key concern. Cross-border and interoperable electronic health record systems make confidential data more easily and rapidly accessible to a wider audience, and increase the risk that personal data concerning health could be accidentally exposed or easily distributed to unauthorized parties, by enabling greater access to a compilation of personal health data from different sources and throughout a lifetime.[150]

The Lloyd George envelope digitisation project aims to have all paper copies of historic patient data transferred onto computer systems. As part of the rollout, new patients will no longer be given a transit label to register when moving practices. Not only is this a step closer to a digital NHS, but the project also reduces the movement of records between practices, freeing up space in practices that is used to store records, with the added benefit of being more environmentally friendly.[151]

Lyniate was selected to provide data integration technologies for Health and Social Care (Northern Ireland) in 2022. Epic Systems will supply integrated electronic health records with a single digital record for every citizen. Lyniate Rhapsody, already used in 79 NHS Trusts, will be used to integrate the multiple health and social care systems.[152]

In UK veterinary practice, the replacement of paper recording systems with electronic methods of storing animal patient information escalated from the 1980s, and the majority of clinics now use electronic medical records. In a sample of 129 veterinary practices, 89% used a Practice Management System (PMS) for data recording.[153] There are more than ten PMS providers currently in the UK.
Collecting data directly from PMSs for epidemiological analysis removes the need for veterinarians to manually submit individual reports per animal visit, and therefore increases the reporting rate.[154] Veterinary electronic medical record data are being used to investigate antimicrobial efficacy, risk factors for canine cancer, and inherited diseases in dogs and cats, in the small-animal disease surveillance project VetCOMPASS (Veterinary Companion Animal Surveillance System) at the Royal Veterinary College, London, in collaboration with the University of Sydney (the VetCOMPASS project was formerly known as VEctAR).[155][156]

A letter published in Communications of the ACM[157] describes the concept of generating synthetic patient populations and proposes a variation of the Turing test to assess the difference between synthetic and real patients. The letter states: "In the EHR context, though a human physician can readily distinguish between synthetically generated and real live human patients, could a machine be given the intelligence to make such a determination on its own?" Further, the letter states: "Before synthetic patient identities become a public health problem, the legitimate EHR market might benefit from applying Turing Test-like techniques to ensure greater data reliability and diagnostic value. Any new techniques must thus consider patients' heterogeneity and are likely to have greater complexity than the Allen eighth-grade-science-test is able to grade."[158]
https://en.wikipedia.org/wiki/Electronic_medical_record
In mathematics, the Erdős–Ko–Rado theorem limits the number of sets in a family of sets for which every two sets have at least one element in common. Paul Erdős, Chao Ko, and Richard Rado proved the theorem in 1938, but did not publish it until 1961. It is part of the field of combinatorics, and one of the central results of extremal set theory.[1]

The theorem applies to families of sets that all have the same size, $r$, and are all subsets of some larger set of size $n$. One way to construct a family of sets with these parameters, each two sharing an element, is to choose a single element to belong to all the subsets, and then form all of the subsets that contain the chosen element. The Erdős–Ko–Rado theorem states that when $n$ is large enough for the problem to be nontrivial ($n \ge 2r$) this construction produces the largest possible intersecting families. When $n = 2r$ there are other equally large families, but for larger values of $n$ only the families constructed in this way can be largest.

The Erdős–Ko–Rado theorem can also be described in terms of hypergraphs or independent sets in Kneser graphs. Several analogous theorems apply to other kinds of mathematical objects than sets, including linear subspaces, permutations, and strings. They again describe the largest possible intersecting families as being formed by choosing an element and forming the family of all objects that contain the chosen element.

Suppose that $\mathcal{A}$ is a family of distinct $r$-element subsets of an $n$-element set with $n \ge 2r$, and that each two subsets share at least one element. Then the theorem states that the number of sets in $\mathcal{A}$ is at most the binomial coefficient
$$\binom{n-1}{r-1}.$$
The requirement that $n \ge 2r$ is necessary for the problem to be nontrivial: when $n < 2r$, all $r$-element sets intersect, and the largest intersecting family consists of all $r$-element sets, with size $\binom{n}{r}$.[2]

The same result can be formulated as part of the theory of hypergraphs. A family of sets may also be called a hypergraph, and when all the sets (which are called "hyperedges" in this context) have the same size $r$, it is called an $r$-uniform hypergraph. The theorem thus gives an upper bound for the number of pairwise overlapping hyperedges in an $r$-uniform hypergraph with $n$ vertices and $n \ge 2r$.[3]

The theorem may also be formulated in terms of graph theory: the independence number of the Kneser graph $KG_{n,r}$ for $n \ge 2r$ is
$$\alpha(KG_{n,r}) = \binom{n-1}{r-1}.$$
This is a graph with a vertex for each $r$-element subset of an $n$-element set, and an edge between every pair of disjoint sets.
An independent set is a collection of vertices that has no edges between its pairs, and the independence number is the size of the largest independent set.[4] Because Kneser graphs have symmetries taking any vertex to any other vertex (they are vertex-transitive graphs), their fractional chromatic number equals the ratio of their number of vertices to their independence number, so another way of expressing the Erdős–Ko–Rado theorem is that these graphs have fractional chromatic number exactly $n/r$.[5]

Paul Erdős, Chao Ko, and Richard Rado proved this theorem in 1938 after working together on it in England. Rado had moved from Berlin to the University of Cambridge and Erdős from Hungary to the University of Manchester, both escaping the influence of Nazi Germany; Ko was a student of Louis J. Mordell at Manchester.[6] However, they did not publish the result until 1961,[7] with the long delay occurring in part because of a lack of interest in combinatorial set theory in the 1930s, and increased interest in the topic in the 1960s.[6] The 1961 paper stated the result in an apparently more general form, in which the subsets were only required to be of size at most $r$, and to satisfy the additional requirement that no subset be contained in any other.[7] A family of subsets meeting these conditions can be enlarged to subsets of size exactly $r$ either by an application of Hall's marriage theorem,[8] or by choosing each enlarged subset from the same chain in a symmetric chain decomposition of sets.[9]

A simple way to construct an intersecting family of $r$-element sets whose size exactly matches the Erdős–Ko–Rado bound is to choose any fixed element $x$, and let $\mathcal{A}$ consist of all $r$-element subsets that include $x$. For instance, for 2-element subsets of the 4-element set $\{1,2,3,4\}$, with $x = 1$, this produces the family
$$\{1,2\},\ \{1,3\},\ \{1,4\}.$$
Any two sets in this family intersect, because they both include 1. The number of sets is $\binom{n-1}{r-1}$, because after the fixed element is chosen there remain $n-1$ other elements to choose, and each set chooses $r-1$ of these remaining elements.[10]

When $n > 2r$ this is the only intersecting family of this size. However, when $n = 2r$, there is a more general construction. Each $r$-element set can be matched up to its complement, the only $r$-element set from which it is disjoint. Then, choose one set from each of these complementary pairs. For instance, for the same parameters above, this more general construction can be used to form the family
$$\{2,3\},\ \{2,4\},\ \{3,4\},$$
where every two sets intersect despite no element belonging to all three sets. In this example, all of the sets have been complemented from the ones in the first example, but it is also possible to complement only some of the sets.[10]

When $n > 2r$, families of the first type (variously known as stars,[1] dictatorships,[11] juntas,[11] centered families,[12] or principal families[13]) are the unique maximum families.
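As a quick sanity check of the bound, added here for concreteness: for $n = 5$ and $r = 2$, the star at the element 1 is $\{1,2\}, \{1,3\}, \{1,4\}, \{1,5\}$, and its size matches the bound
$$\binom{n-1}{r-1} = \binom{4}{1} = 4,$$
so no intersecting family of 2-element subsets of $\{1,\dots,5\}$ can contain more than four sets.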
In this case, a family of nearly maximum size has an element which is common to almost all of its sets.[14] This property has been called stability,[13] although the same term has also been used for a different property, the fact that (for a wide range of parameters) deleting randomly chosen edges from the Kneser graph does not increase the size of its independent sets.[15]

An intersecting family of $r$-element sets may be maximal, in that no further set can be added (even by extending the ground set) without destroying the intersection property, but not of maximum size. An example with $n = 7$ and $r = 3$ is the set of seven lines of the Fano plane, much less than the Erdős–Ko–Rado bound of 15.[16] More generally, the lines of any finite projective plane of order $q$ form a maximal intersecting family that includes only $n$ sets, for the parameters $r = q + 1$ and $n = q^2 + q + 1$. The Fano plane is the case $q = 2$ of this construction.[17]

The smallest possible size of a maximal intersecting family of $r$-element sets, in terms of $r$, is unknown but at least $3r$ for $r \ge 4$.[18] Projective planes produce maximal intersecting families whose number of sets is $r^2 - r + 1$, but for infinitely many choices of $r$ there exist smaller maximal intersecting families of size $\tfrac{3}{4} r^2$.[17]

The largest intersecting families of $r$-element sets that are maximal but not maximum have size
$$\binom{n-1}{r-1} - \binom{n-r-1}{r-1} + 1.$$
They are formed from an element $x$ and an $r$-element set $Y$ not containing $x$, by adding $Y$ to the family of $r$-element sets that include both $x$ and at least one element of $Y$. This result is called the Hilton–Milner theorem, after its proof by Anthony Hilton and Eric Charles Milner in 1967.[19]

The original proof of the Erdős–Ko–Rado theorem used induction on $n$. The base case, for $n = 2r$, follows easily from the facts that an intersecting family cannot include both a set and its complement, and that in this case the bound of the Erdős–Ko–Rado theorem is exactly half the number of all $r$-element sets. The induction step for larger $n$ uses a method called shifting, of substituting elements in intersecting families to make the family smaller in lexicographic order and reduce it to a canonical form that is easier to analyze.[20]

In 1972, Gyula O. H. Katona gave the following short proof using double counting.[21] Arrange the $n$ elements of the ground set in a cyclic order, and consider the sets of $\mathcal{A}$ that form intervals of $r$ consecutive elements within this cyclic order. However, only some of these intervals can belong to $\mathcal{A}$, because they do not all intersect. Katona's key observation is that at most $r$ intervals from a single cyclic order may belong to $\mathcal{A}$. This is because, if $(a_1, a_2, \dots, a_r)$ is one of these intervals, then every other interval of the same cyclic order that belongs to $\mathcal{A}$ separates $a_i$ from $a_{i+1}$, for some $i$, by containing precisely one of these two elements.
The two intervals that separate these elements are disjoint, so at most one of them can belong to $\mathcal{A}$. Thus, the number of intervals in $\mathcal{A}$ is at most one plus the number $r - 1$ of pairs that can be separated.[21]

Based on this idea, it is possible to count the pairs $(S, C)$, where $S$ is a set in $\mathcal{A}$ and $C$ is a cyclic order for which $S$ is an interval, in two ways. First, for each set $S$ one may generate $C$ by choosing one of $r!$ permutations of $S$ and $(n-r)!$ permutations of the remaining elements, showing that the number of pairs is $|\mathcal{A}|\, r!\, (n-r)!$. And second, there are $(n-1)!$ cyclic orders, each of which has at most $r$ intervals of $\mathcal{A}$, so the number of pairs is at most $r (n-1)!$. Comparing these two counts gives the inequality
$$|\mathcal{A}|\, r!\, (n-r)! \le r (n-1)!$$
and dividing both sides by $r!\,(n-r)!$ gives the result
$$|\mathcal{A}| \le \frac{r (n-1)!}{r!\,(n-r)!} = \binom{n-1}{r-1}.$$[21]

It is also possible to derive the Erdős–Ko–Rado theorem as a special case of the Kruskal–Katona theorem, another important result in extremal set theory.[22] Many other proofs are known.[23]

A generalization of the theorem applies to subsets that are required to have large intersections. This version of the theorem has three parameters: $n$, the number of elements the subsets are drawn from; $r$, the size of the subsets, as before; and $t$, the minimum size of the intersection of any two subsets. For the original form of the Erdős–Ko–Rado theorem, $t = 1$. In general, for $n$ large enough with respect to the other two parameters, the generalized theorem states that the size of a $t$-intersecting family of subsets is at most[24]
$$\binom{n-t}{r-t}.$$
More precisely, this bound holds when $n \ge (t+1)(r-t+1)$, and does not hold for smaller values of $n$. When $n > (t+1)(r-t+1)$, the only $t$-intersecting families of this size are obtained by designating $t$ elements as the common intersection of all the subsets, and constructing the family of all $r$-element subsets that include the $t$ designated elements.[25] The maximal size of a $t$-intersecting family when $n < (t+1)(r-t+1)$ was determined by Ahlswede and Khachatrian, in their Ahlswede–Khachatrian theorem.[26]

The corresponding graph-theoretic formulation of this generalization involves Johnson graphs in place of Kneser graphs.[27] For large enough values of $n$, and in particular for $n > \tfrac{1}{2} r^2$, both the Erdős–Ko–Rado theorem and its generalization can be strengthened from the independence number to the Shannon capacity of a graph: the Johnson graph corresponding to the $t$-intersecting $r$-element subsets has Shannon capacity $\binom{n-t}{r-t}$.[28]

The theorem can also be generalized to families in which every $h$ subsets have a common intersection.
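To make the double count concrete (an added check, not part of the original proof text), take $n = 4$ and $r = 2$. Each 2-element set is an interval in $r!\,(n-r)! = 2!\cdot 2! = 4$ cyclic orders, and there are $(n-1)! = 3! = 6$ cyclic orders, each containing at most $r = 2$ intervals of the family, so
$$4\,|\mathcal{A}| = |\mathcal{A}|\, r!\, (n-r)! \le r (n-1)! = 2 \cdot 6 = 12,
\qquad\text{hence}\qquad
|\mathcal{A}| \le 3 = \binom{3}{1} = \binom{n-1}{r-1}.$$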
Because this strengthens the condition that every pair intersects (for which $h = 2$), these families have the same bound on their maximum size, $\binom{n-1}{r-1}$, when $n$ is sufficiently large. However, in this case the meaning of "sufficiently large" can be relaxed from $n \ge 2r$ to $n \ge \tfrac{h}{h-1} r$.[29]

Many results analogous to the Erdős–Ko–Rado theorem, but for other classes of objects than finite sets, are known. These generally involve a statement that the largest families of intersecting objects, for some definition of intersection, are obtained by choosing an element and constructing the family of all objects that include that chosen element. Examples include the following:

There is a $q$-analog of the Erdős–Ko–Rado theorem for intersecting families of linear subspaces over finite fields. If $\mathcal{S}$ is an intersecting family of $r$-dimensional subspaces of an $n$-dimensional vector space over a finite field of order $q$, and $n \ge 2r$, then
$$|\mathcal{S}| \le \binom{n-1}{r-1}_q,$$
where the subscript $q$ marks the notation for the Gaussian binomial coefficient, the number of subspaces of a given dimension within a vector space of a larger dimension over a finite field of order $q$. In this case, a largest intersecting family of subspaces may be obtained by choosing any nonzero vector and constructing the family of subspaces of the given dimension that all contain the chosen vector.[30]

Two permutations on the same set of elements are defined to be intersecting if there is some element that has the same image under both permutations. On an $n$-element set, there is an obvious family of $(n-1)!$ intersecting permutations, the permutations that fix one of the elements (the stabilizer subgroup of this element). The analogous theorem is that no intersecting family of permutations can be larger, and that the only intersecting families of size $(n-1)!$ are the cosets of one-element stabilizers. These can be described more directly as the families of permutations that map some fixed element to another fixed element. More generally, for any $t$ and sufficiently large $n$, a family of permutations each pair of which has $t$ elements in common has maximum size $(n-t)!$, and the only families of this size are cosets of pointwise stabilizers.[31] Alternatively, in graph-theoretic terms, the $n$-element permutations correspond to the perfect matchings of a complete bipartite graph $K_{n,n}$, and the theorem states that, among families of perfect matchings each pair of which shares $t$ edges, the largest families are formed by the matchings that all contain $t$ chosen edges.[32] Another analog of the theorem, for partitions of a set, includes as a special case the perfect matchings of a complete graph $K_n$ (with $n$ even). There are $(n-1)!!$ matchings, where $!!$ denotes the double factorial.
The largest family of matchings that pairwise intersect (meaning that they have an edge in common) has size $(n-3)!!$ and is obtained by fixing one edge and choosing all ways of matching the remaining $n - 2$ vertices.[33]

A partial geometry is a system of finitely many abstract points and lines, satisfying certain axioms including the requirement that all lines contain the same number of points and all points belong to the same number of lines. In a partial geometry, a largest system of pairwise-intersecting lines can be obtained from the set of lines through any single point.[34]

A signed set consists of a set together with a sign function that maps each element to $\{1, -1\}$. Two signed sets may be said to intersect when they have a common element that has the same sign in each of them. Then an intersecting family of $r$-element signed sets, drawn from an $n$-element universe, consists of at most
$$2^{r-1} \binom{n-1}{r-1}$$
signed sets. This number of signed sets may be obtained by fixing one element and its sign and letting the remaining $r - 1$ elements and signs vary.[35]

For strings of length $n$ over an alphabet of size $q$, two strings can be defined to intersect if they have a position where both share the same symbol. The largest intersecting families are obtained by choosing one position and a fixed symbol for that position, and letting the rest of the positions vary arbitrarily. These families consist of $q^{n-1}$ strings, and are the only pairwise intersecting families of this size. More generally, the largest families of strings in which every two have $t$ positions with equal symbols are obtained by choosing $t + 2i$ positions and symbols for those positions, for a number $i$ that depends on $n$, $q$, and $t$, and constructing the family of strings that each have at least $t + i$ of the chosen symbols. These results can be interpreted graph-theoretically in terms of the Hamming scheme.[36]

An unproven conjecture, posed by Gil Kalai and Karen Meagher, concerns another analog for the family of triangulations of a convex polygon with $n$ vertices. The number of all triangulations is a Catalan number $C_{n-2}$, and the conjecture states that a family of triangulations every pair of which shares an edge has maximum size $C_{n-3}$. An intersecting family of size exactly $C_{n-3}$ may be obtained by cutting off a single vertex of the polygon by a triangle, and choosing all ways of triangulating the remaining $(n-1)$-vertex polygon.[37]

The Erdős–Ko–Rado theorem can be used to prove the following result in probability theory. Let $x_i$ be independent 0–1 random variables with probability $p \ge \tfrac{1}{2}$ of being one, and let $c(\vec{x})$ be any fixed convex combination of these variables.
Then
$$\Pr\left[ c(\vec{x}) \ge \tfrac{1}{2} \right] \ge p.$$
The proof involves observing that subsets of variables whose indicator vectors have large convex combinations must be non-disjoint, and using the Erdős–Ko–Rado theorem to bound the number of these subsets.[38]

The stability properties of the Erdős–Ko–Rado theorem play a key role in an efficient algorithm for finding monochromatic edges in improper colorings of Kneser graphs.[39] The Erdős–Ko–Rado theorem has also been used to characterize the symmetries of the space of phylogenetic trees.[40]
https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93Ko%E2%80%93Rado_theorem
Filesystem-level encryption,[1] often called file-based encryption, FBE, or file/folder encryption, is a form of disk encryption where individual files or directories are encrypted by the file system itself. This is in contrast to full disk encryption, where the entire partition or disk on which the file system resides is encrypted.

Unlike cryptographic file systems or full disk encryption, general-purpose file systems that include filesystem-level encryption do not typically encrypt file system metadata, such as the directory structure, file names, sizes, or modification timestamps. This can be problematic if the metadata itself needs to be kept confidential. In other words, if files are stored with identifying file names, anyone who has access to the physical disk can know which documents are stored on the disk, although not the contents of the documents.

One exception to this is the encryption support being added to the ZFS filesystem. Filesystem metadata such as filenames, ownership, ACLs, and extended attributes are all stored encrypted on disk. The ZFS metadata relating to the storage pool is stored in plaintext, so it is possible to determine how many filesystems (datasets) are available in the pool, including which ones are encrypted, but the content of the stored files and directories remains encrypted. Another exception is CryFS, a replacement for EncFS.

Cryptographic file systems are specialized (not general-purpose) file systems that are specifically designed with encryption and security in mind. They usually encrypt all the data they contain, including metadata. Instead of implementing an on-disk format and their own block allocation, these file systems are often layered on top of existing file systems, e.g. residing in a directory on a host file system. Many such file systems also offer advanced features, such as deniable encryption, cryptographically secure read-only file system permissions, and different views of the directory structure depending on the key or user.

One use for a cryptographic file system is when part of an existing file system is synchronized with cloud storage. In such cases the cryptographic file system could be "stacked" on top, to help protect data confidentiality.
https://en.wikipedia.org/wiki/Filesystem-level_encryption
Dekker's algorithm is the first known correct solution to the mutual exclusion problem in concurrent programming, where processes only communicate via shared memory. The solution is attributed to Dutch mathematician Th. J. Dekker by Edsger W. Dijkstra in an unpublished paper on sequential process descriptions[1] and in his manuscript on cooperating sequential processes.[2] It allows two threads to share a single-use resource without conflict, using only shared memory for communication. It avoids the strict alternation of a naïve turn-taking algorithm, and was one of the first mutual exclusion algorithms to be invented.

If two processes attempt to enter a critical section at the same time, the algorithm will allow only one process in, based on whose turn it is. If one process is already in the critical section, the other process will busy wait for the first process to exit. This is done by the use of two flags, wants_to_enter[0] and wants_to_enter[1], which indicate an intention to enter the critical section on the part of processes 0 and 1, respectively, and a variable turn that indicates who has priority between the two processes. Dekker's algorithm can be expressed in pseudocode,[3] as in the sketch given after the following discussion.

Processes indicate an intention to enter the critical section, which is tested by the outer while loop. If the other process has not flagged intent, the critical section can be entered safely irrespective of the current turn. Mutual exclusion will still be guaranteed, as neither process can become critical before setting its flag (implying that at least one process will enter the while loop). This also guarantees progress, as waiting will not occur on a process which has withdrawn intent to become critical. Alternatively, if the other process's variable was set, the while loop is entered and the turn variable will establish who is permitted to become critical. Processes without priority will withdraw their intention to enter the critical section until they are given priority again (the inner while loop). Processes with priority will break from the while loop and enter their critical section.

Dekker's algorithm guarantees mutual exclusion, freedom from deadlock, and freedom from starvation. Let us see why the last property holds. Suppose p0 is stuck inside the while wants_to_enter[1] loop forever. There is freedom from deadlock, so eventually p1 will proceed to its critical section and set turn = 0 (and the value of turn will remain unchanged as long as p0 doesn't progress). Eventually p0 will break out of the inner while turn ≠ 0 loop (if it was ever stuck on it). After that it will set wants_to_enter[0] to true and settle down to waiting for wants_to_enter[1] to become false (since turn = 0, it will never do the actions in the while loop). The next time p1 tries to enter its critical section, it will be forced to execute the actions in its while wants_to_enter[0] loop. In particular, it will eventually set wants_to_enter[1] to false and get stuck in the while turn ≠ 1 loop (since turn remains 0). The next time control passes to p0, it will exit the while wants_to_enter[1] loop and enter its critical section.

If the algorithm were modified by performing the actions in the while wants_to_enter[1] loop without checking whether turn = 0, then there would be a possibility of starvation. Thus all the steps in the algorithm are necessary.

One advantage of this algorithm is that it doesn't require special test-and-set (atomic read/modify/write) instructions and is therefore highly portable between languages and machine architectures.
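The pseudocode referred to above did not survive extraction; the following C++-style sketch reconstructs the standard flag-and-turn structure described in the surrounding text. It is written as if memory accesses were sequentially consistent; as discussed below, real compilers and CPUs require atomics or fences for it to be correct.

```cpp
// Idealized sketch of Dekker's algorithm for two processes (0 and 1),
// assuming sequentially consistent shared memory.
bool wants_to_enter[2] = {false, false};
int turn = 0;  // which process has priority when both want to enter

void process(int self) {            // self is 0 or 1
    int other = 1 - self;
    wants_to_enter[self] = true;
    while (wants_to_enter[other]) { // outer loop: contention detected
        if (turn != self) {
            wants_to_enter[self] = false;      // withdraw intent...
            while (turn != self) { /* busy wait for priority */ }
            wants_to_enter[self] = true;       // ...then re-assert it
        }
    }
    // critical section

    turn = other;                   // hand priority to the other process
    wants_to_enter[self] = false;   // exit protocol
}
```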
One disadvantage is that it is limited to two processes and makes use of busy waiting instead of process suspension. (The use of busy waiting suggests that processes should spend a minimum amount of time inside the critical section.) Modern operating systems provide mutual exclusion primitives that are more general and flexible than Dekker's algorithm. However, in the absence of actual contention between the two processes, entry to and exit from the critical section are extremely efficient when Dekker's algorithm is used.

Many modern CPUs execute their instructions in an out-of-order fashion; even memory accesses can be reordered (see memory ordering). This algorithm won't work on SMP machines equipped with these CPUs without the use of memory barriers. Additionally, many optimizing compilers can perform transformations that will cause this algorithm to fail regardless of the platform. In many languages, it is legal for a compiler to detect that the flag variables wants_to_enter[0] and wants_to_enter[1] are never accessed in the loop. It can then remove the writes to those variables from the loop, using a process called loop-invariant code motion. It would also be possible for many compilers to detect that the turn variable is never modified by the inner loop, and perform a similar transformation, resulting in a potential infinite loop. If either of these transformations is performed, the algorithm will fail, regardless of architecture.

To alleviate this problem, volatile variables should be marked as modifiable outside the scope of the currently executing context. For example, in C, C++, C# or Java, one would annotate these variables as 'volatile'. Note, however, that the C/C++ "volatile" attribute only guarantees that the compiler generates code with the proper ordering; it does not include the necessary memory barriers to guarantee in-order execution of that code. C++11 atomic variables can be used to guarantee the appropriate ordering requirements: by default, operations on atomic variables are sequentially consistent, so if the wants_to_enter and turn variables are atomic, a naive implementation will "just work". Alternatively, ordering can be guaranteed by the explicit use of separate fences, with the load and store operations using a relaxed ordering.
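As a minimal sketch of the C++11 approach just described: declaring the shared variables as std::atomic with the default (sequentially consistent) ordering makes the sketch above correct without further fences.

```cpp
#include <atomic>

// With std::atomic and the default memory_order_seq_cst, the compiler may
// neither hoist the loads out of the spin loops nor reorder the stores,
// so the naive structure of Dekker's algorithm works as written.
std::atomic<bool> wants_to_enter[2] = {{false}, {false}};
std::atomic<int> turn{0};
```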
https://en.wikipedia.org/wiki/Dekker%27s_algorithm
In mathematics, the Clarke generalized derivatives are types of generalized derivatives that allow for the differentiation of nonsmooth functions. The Clarke derivatives were introduced by Francis Clarke in 1975.[1]

For a locally Lipschitz continuous function $f : \mathbb{R}^n \to \mathbb{R}$, the Clarke generalized directional derivative of $f$ at $x \in \mathbb{R}^n$ in the direction $v \in \mathbb{R}^n$ is defined as
$$f^\circ(x, v) = \limsup_{y \to x,\, h \downarrow 0} \frac{f(y + hv) - f(y)}{h},$$
where $\limsup$ denotes the limit supremum.

Then, using the above definition of $f^\circ$, the Clarke generalized gradient of $f$ at $x$ (also called the Clarke subdifferential) is given as
$$\partial^\circ f(x) := \{ \xi \in \mathbb{R}^n : \langle \xi, v \rangle \le f^\circ(x, v),\ \forall v \in \mathbb{R}^n \},$$
where $\langle \cdot, \cdot \rangle$ represents an inner product of vectors in $\mathbb{R}^n$. Note that the Clarke generalized gradient is set-valued; that is, at each $x \in \mathbb{R}^n$, the function value $\partial^\circ f(x)$ is a set.

More generally, given a Banach space $X$ and a subset $Y \subset X$, the Clarke generalized directional derivative and generalized gradients are defined as above for a locally Lipschitz continuous function $f : Y \to \mathbb{R}$.
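A standard worked example, added here for concreteness: the absolute value function $f(x) = |x|$ on $\mathbb{R}$ is Lipschitz but not differentiable at the origin. The definitions give
$$f^\circ(0, v) = \limsup_{y \to 0,\, h \downarrow 0} \frac{|y + hv| - |y|}{h} = |v|,$$
since the triangle inequality bounds the difference quotient by $|v|$, and taking $y = 0$ attains it. The generalized gradient is then
$$\partial^\circ f(0) = \{ \xi \in \mathbb{R} : \xi v \le |v|\ \forall v \in \mathbb{R} \} = [-1, 1],$$
the set-valued interpolation between the one-sided slopes $-1$ and $1$.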
https://en.wikipedia.org/wiki/Clarke_generalized_derivative
In computer programming, an operator is a programming language construct that provides functionality that may not be possible to define as a user-defined function (e.g. sizeof in C) or that has syntax different from a function (e.g. infix addition as in a + b). Like other programming language concepts, operator has a generally accepted, although debatable, meaning among practitioners, while at the same time each language gives it a specific meaning in that context, and therefore the meaning varies by language.

Some operators are represented with symbols – characters typically not allowed in a function identifier – to allow for presentation that is more familiar-looking than typical function syntax. For example, a function that tests for greater-than could be named gt, but many languages provide an infix symbolic operator so that code looks more familiar. For example, this:

if gt(x, y) then return

can be:

if x > y then return

Some languages allow a language-defined operator to be overridden with user-defined behavior, and some allow for user-defined operator symbols. Operators may also differ semantically from functions. For example, short-circuit Boolean operations evaluate later arguments only if earlier ones are not false.

Many operators differ syntactically from user-defined functions. In most languages, a function call is prefix notation with a fixed precedence level and associativity, and often with compulsory parentheses (e.g. Func(a), or (Func a) in Lisp). In contrast, many operators are infix notation and involve different use of delimiters such as parentheses. In general, an operator may be prefix, infix, postfix, matchfix, circumfix or bifix,[1][2][3][4][5] and the syntax of an expression involving an operator depends on its arity (number of operands), precedence, and (if applicable) associativity. Most programming languages support binary operators and a few unary operators, with a few supporting more operands, such as the ?: operator in C, which is ternary. There are prefix unary operators, such as unary minus -x, and postfix unary operators, such as post-increment x++; binary operations are infix, such as x + y or x = y. Infix operations of higher arity require additional symbols, such as the ternary operator ?: in C, written as a ? b : c – indeed, since this is the only common example, it is often referred to as the ternary operator. Prefix and postfix operations can support any desired arity, however, such as 1 2 3 4 +.

The semantics of an operator may significantly differ from that of a normal function. For reference, addition is evaluated like a normal function: for example, x + y can be equivalent to a function add(x, y) in that the arguments are evaluated and then the functional behavior is applied. However, assignment is different: for example, given a = b the target a is not evaluated; instead its value is replaced with the value of b. The scope resolution and element access operators (as in Foo::Bar and a.b, respectively, in the case of e.g. C++) operate on identifier names, not values.

In C, for instance, the array indexing operator can be used for both read access and assignment; the increment operator can read the element value of an array and then assign to that element. The C++ << operator allows for fluent syntax by supporting a sequence of operators that affect a single argument; a sketch of both is given below.

Some languages provide operators that are ad hoc polymorphic – inherently overloaded. For example, in Java the + operator sums numbers or concatenates strings. Some languages support user-defined overloading (such as C++ and Fortran).
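The two code examples referenced above were lost in extraction; the following small C++ sketch reconstructs what they plausibly showed – incrementing an array element through the indexing operator, and chaining << on a single stream argument.

```cpp
#include <iostream>

int main() {
    int counts[3] = {0, 0, 0};
    counts[1]++;  // operator[] yields an lvalue: the element is read and
                  // the incremented value is written back through the index

    // operator<< returns a reference to its left operand (the stream),
    // so a sequence of << calls all affect the single std::cout argument:
    std::cout << "counts[1] = " << counts[1] << '\n';
    return 0;
}
```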
An operator, defined by the language, can be overloaded to behave differently based on the type of input. Some languages (e.g. C, C++ and PHP) define a fixed set of operators, while others (e.g. Prolog,[6] Seed7,[7] F#, OCaml, Haskell) allow for user-defined operators. Some programming languages restrict operator symbols to special characters like + or :=, while others allow names like div (e.g. Pascal), and even arbitrary names (e.g. Fortran, where an operator name of up to 31 characters is enclosed between dots[8]). Most languages do not support user-defined operators, since the feature significantly complicates parsing: introducing a new operator changes the lexical specification of the language, which affects lexical analysis, and the operator's arity and precedence become part of the phrase grammar, which affects phrase-level analysis. Custom operators, particularly via runtime definition, often make correct static analysis of a program impossible, since the syntax of the language may be Turing-complete, so even constructing the syntax tree may require solving the halting problem, which is impossible. This occurs for Perl, for example, and some dialects of Lisp. If a language does allow for defining new operators, the mechanics of doing so may involve meta-programming – specifying the operator in a separate language. Some languages implicitly convert (a.k.a. coerce) operands to be compatible with each other. For example, Perl coercion rules cause 12 + "3.14" to evaluate to 15.14. The string literal "3.14" is converted to the numeric value 3.14 before addition is applied. Further, 3.14 is treated as floating point, so the result is floating point even though 12 is an integer literal. JavaScript follows different rules, so that the same expression evaluates to "123.14", since 12 is converted to a string which is then concatenated with the second operand. In general, a programmer must be aware of the specific rules regarding operand coercion in order to avoid unexpected and incorrect behavior. Operator features also differ widely across languages in the symbol sets allowed; some languages permit non-ASCII operator characters such as ¬, ×, ÷, ≤, ≥, ≠, ∧ and ∨.
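For comparison, a quick Python check (illustrative only) shows a third set of coercion rules: Python refuses to coerce between numbers and strings for +, so the expression that Perl evaluates to 15.14 and JavaScript to "123.14" raises a TypeError instead:

```python
# Perl:       12 + "3.14"  evaluates to 15.14    (string coerced to a number)
# JavaScript: 12 + "3.14"  evaluates to "123.14" (number coerced to a string)
# Python: no implicit coercion between str and int for +
try:
    result = 12 + "3.14"
except TypeError as exc:
    print("Python refuses:", exc)

# The programmer must convert explicitly, choosing the semantics:
print(12 + float("3.14"))   # 15.14  (numeric addition)
print(str(12) + "3.14")     # 123.14 (string concatenation)
```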
https://en.wikipedia.org/wiki/Compound_operator_(computing)
Univariate is a term commonly used in statistics to describe a type of data which consists of observations on only a single characteristic or attribute. A simple example of univariate data would be the salaries of workers in industry.[1] Like other kinds of data, univariate data can be visualized using graphs, images or other analysis tools after the data is measured, collected, reported, and analyzed.[2] Some univariate data consists of numbers (such as the height of 65 inches or the weight of 100 pounds), while other data are nonnumerical (such as eye colors of brown or blue). Generally, the terms categorical univariate data and numerical univariate data are used to distinguish between these types. Categorical univariate data consists of non-numerical observations that may be placed in categories. It includes labels or names used to identify an attribute of each element. Categorical univariate data usually use either a nominal or ordinal scale of measurement.[3] Numerical univariate data consists of observations that are numbers. They are obtained using either an interval or ratio scale of measurement. This type of univariate data can be classified even further into two subcategories: discrete and continuous.[2] A numerical univariate data set is discrete if the set of all possible values is finite or countably infinite. Discrete univariate data are usually associated with counting (such as the number of books read by a person). A numerical univariate data set is continuous if the set of all possible values is an interval of numbers. Continuous univariate data are usually associated with measuring (such as the weights of people). Univariate analysis is the simplest form of analyzing data. Uni means "one", so the data has only one variable (univariate).[4] Univariate analysis examines each variable separately. Data is gathered for the purpose of answering a question, or more specifically, a research question. Univariate data does not answer research questions about relationships between variables; rather, it is used to describe one characteristic or attribute that varies from observation to observation.[5] Usually a researcher has one of two purposes: to answer a research question with a descriptive study, or to learn how an attribute varies with the individual effect of a variable in regression analysis. There are several ways to describe patterns found in univariate data, including graphical methods, measures of central tendency and measures of variability.[6] Like other forms of statistics, univariate analysis can be inferential or descriptive. The key fact is that only one variable is involved. Univariate analysis can yield misleading results in cases in which multivariate analysis is more appropriate. Central tendency is one of the most common numerical descriptive measures. It is used to estimate the central location of the univariate data by the calculation of mean, median and mode.[7] Each of these calculations has its own advantages and limitations. The mean has the advantage that its calculation includes each value of the data set, but it is particularly susceptible to the influence of outliers. The median is a better measure when the data set contains outliers. The mode is simple to locate. One is not restricted to using only one of these measures of central tendency. If the data being analyzed is categorical, then the only measure of central tendency that can be used is the mode.
However, if the data is numerical in nature (ordinal or interval/ratio) then the mode, median, or mean can all be used to describe the data. Using more than one of these measures provides a more accurate descriptive summary of central tendency for the univariate data.[8] A measure of variability or dispersion (deviation from the mean) of a univariate data set can reveal the shape of a univariate data distribution more fully. It provides some information about the variation among data values. The measures of variability together with the measures of central tendency give a better picture of the data than the measures of central tendency alone.[9] The three most frequently used measures of variability are range, variance and standard deviation.[10] The appropriateness of each measure depends on the type of data, the shape of the distribution of the data, and which measure of central tendency is being used. If the data is categorical, then there is no measure of variability to report. For data that is numerical, all three measures are possible. If the distribution of data is symmetrical, then the measures of variability are usually the variance and standard deviation. However, if the data are skewed, then the measure of variability that would be appropriate for that data set is the range.[3] Descriptive statistics describe a sample or population. They can be part of exploratory data analysis.[11] The appropriate statistic depends on the level of measurement. For nominal variables, a frequency table and a listing of the mode(s) is sufficient. For ordinal variables the median can be calculated as a measure of central tendency and the range (and variations of it) as a measure of dispersion. For interval level variables, the arithmetic mean (average) and standard deviation are added to the toolbox and, for ratio level variables, we add the geometric mean and harmonic mean as measures of central tendency and the coefficient of variation as a measure of dispersion. For interval and ratio level data, further descriptors include the variable's skewness and kurtosis. Inferential methods allow us to infer from a sample to a population.[11] For a nominal variable a one-way chi-square (goodness of fit) test can help determine if our sample matches that of some population.[12] For interval and ratio level data, a one-sample t-test can let us infer whether the mean in our sample matches some proposed number (typically 0). Other available tests of location include the one-sample sign test and Wilcoxon signed rank test. The most frequently used graphical illustrations for univariate data are frequency distributions, bar charts, histograms and pie charts. Frequency is how many times a number occurs. The frequency of an observation in statistics tells us the number of times the observation occurs in the data. For example, in the following list of numbers {1, 2, 3, 4, 6, 9, 9, 8, 5, 1, 1, 9, 9, 0, 6, 9}, the frequency of the number 9 is 5 (because it occurs 5 times in this data set). A bar chart is a graph consisting of rectangular bars. These bars represent the number or percentage of observations in the existing categories of a variable. The length or height of the bars gives a visual representation of the proportional differences among categories. Histograms are used to estimate the distribution of the data, with the frequency of values assigned to a value range called a bin.[13] A pie chart is a circle divided into portions that represent the relative frequencies or percentages of a population or a sample belonging to different categories.
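A short Python sketch (using only the standard library; the sample is the list from the text) computing the measures of central tendency and variability described above, plus a frequency count:

```python
import statistics
from collections import Counter

data = [1, 2, 3, 4, 6, 9, 9, 8, 5, 1, 1, 9, 9, 0, 6, 9]

# Measures of central tendency
print("mean:    ", statistics.mean(data))
print("median:  ", statistics.median(data))
print("mode:    ", statistics.mode(data))        # most frequent value (9)

# Measures of variability
print("range:   ", max(data) - min(data))
print("variance:", statistics.variance(data))    # sample variance
print("stdev:   ", statistics.stdev(data))       # sample standard deviation

# Frequency table: the frequency of 9 is 5, as noted above
print(Counter(data))
```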
A univariate distribution is a probability distribution of a single random variable, described either by a probability mass function (pmf) for a discrete probability distribution, or by a probability density function (pdf) for a continuous probability distribution.[14] It is not to be confused with a multivariate distribution.
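As a concrete example of the two cases (standard definitions, stated here purely for illustration): a fair die has a discrete pmf and the standard normal distribution has a continuous pdf, each summing or integrating to 1 over its support:

```latex
% Discrete: pmf of a fair six-sided die
p(k) = \Pr(X = k) = \tfrac{1}{6}, \qquad k \in \{1,\dots,6\}, \qquad \sum_{k=1}^{6} p(k) = 1.

% Continuous: pdf of the standard normal distribution
f(x) = \frac{1}{\sqrt{2\pi}}\, e^{-x^{2}/2}, \qquad x \in \mathbb{R}, \qquad \int_{-\infty}^{\infty} f(x)\,dx = 1.
```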
https://en.wikipedia.org/wiki/Univariate_analysis
Automatic Warning System (AWS) is a railway safety system invented and predominantly used in the United Kingdom. It provides a train driver with an audible indication of whether the next signal they are approaching is clear or at caution.[1] Depending on the upcoming signal state, the AWS will either produce a 'horn' sound (as a warning indication) or a 'bell' sound (as a clear indication). If the train driver fails to acknowledge a warning indication, an emergency brake application is initiated by the AWS; if the driver correctly acknowledges the warning indication, by pressing an acknowledgement button, then a visual 'sunflower' is displayed to the driver as a reminder of the warning. AWS is a system based on trains detecting magnetic fields. These magnetic fields are created by permanent magnets and electromagnets installed on the track. The polarity and sequence of magnetic fields detected by a train determine the type of indication given to the train driver. A magnet, known as an AWS magnet, is installed on the track centre line. The magnetic field of the magnet is set based on the next signal aspect.[1] The train detects the polarity of the magnetic field via an AWS receiver, permanently mounted under the train.[1] An AWS magnet is made up of one permanent magnet and an optional electromagnet. The permanent magnet is generally uncontrollable, and always produces a constant magnetic field of unchanging polarity. A train running over the permanent magnet alone will deliver an AWS warning indication to the train driver. The optional electromagnet can be used to provide the train driver with an AWS clear indication: if the train's AWS detects a second magnetic field of a certain polarity after the first permanent magnet, then the AWS displays a clear indication instead of a warning indication. The train detects the electromagnet's polarity after the permanent magnet's polarity, because the optional electromagnet is always installed after the permanent magnet (in the direction of travel). The electromagnet is connected to the green signal aspect, so the driver will only receive an AWS clear indication if the signal is clear (green). The permanent magnet always produces a south pole; if the electromagnet is energised to produce a north pole, the AWS will give the driver an AWS clear indication. Multiple unit trains have an AWS receiver at each end. Vehicles that can operate singly (single-car DMUs and locomotives) have only one; this could be either at the front or rear depending on the direction the vehicle is travelling in. The polarities in this example are relevant to the UK, where the permanent magnet produces a south pole. Other countries may use permanent magnets that produce a north pole. The key operational principle is that the electromagnet produces the opposite pole to the permanent magnet. A train is driving towards a signal that shows clear (green). The train runs over the AWS magnet (which is two magnets, first a permanent magnet and then an electromagnet). The electromagnet is energised. The AWS receiver detects a magnetic field in the sequence: south, north. The south pole comes from the permanent magnet, and the north pole comes from the electromagnet. This south-then-north sequence gives an AWS clear indication to the driver. A train is driving towards a signal that shows caution (yellow). The train runs over the AWS magnet (which is two magnets, first a permanent magnet and then an electromagnet). The electromagnet is de-energised (i.e.
it is not powered). The AWS receiver detects only one magnetic field, in the sequence: south. Only one magnetic field is detected because the electromagnet is not energised, which makes the electromagnet invisible to the AWS receiver. This south pole by itself results in an AWS warning indication to the driver. As the train approaches a signal, it will pass over an AWS magnet. The AWS visual indicator ('sunflower') in the driver's cab will change to all black. If the signal being approached is displaying a 'clear' aspect, then AWS will sound a bell tone (modern trains have an electronic sounder that makes a distinctive 'ping') and leave the 'sunflower' black. This AWS clear indication lets the driver know that the next signal is showing 'clear' and that the AWS system is working. If the next signal is displaying a restrictive aspect (e.g. caution or stop), the AWS audible indicator will sound a continuous alarm. The driver then has approximately two seconds to press and release the AWS acknowledgement button (if the driver keeps the button held down, the AWS will not be cancelled).[1] After pressing the AWS acknowledgement button, the AWS audible indicator is silenced and the AWS visual indicator changes to a pattern of black and yellow spokes. This yellow-spoke pattern persists until the train reaches the next AWS magnet and serves as a reminder to the driver of the restrictive signal aspect they passed. As a fail-safe mechanism, if the driver fails to press the AWS acknowledgement button for a warning indication in sufficient time, the emergency brakes will automatically apply, bringing the train to a stop. After stopping, the driver can press the AWS acknowledgement button, and the brakes will automatically release after a safety time-out period has elapsed. For speed restrictions, AWS works in the same way as for signals, except that a fixed magnet is located at the service braking distance before the speed reduction and no electromagnet is provided (or needed). A single fixed magnet will always cause a warning indication to the driver, which the driver must acknowledge to prevent the emergency brake from applying. A trackside warning board will also advise the driver of the speed requirement ahead. Early devices used a mechanical connection between the signal and the locomotive. In 1840, the locomotive engineer Edward Bury experimented with a system whereby a lever at track level, connected to the signal, sounded the locomotive's whistle and turned a cab-mounted red lamp. Ten years later, Colonel William Yolland of the Railway Inspectorate was calling for a system that not only alerted the driver but also automatically applied the brakes when signals were passed at danger, but no satisfactory method of bringing this about was found.[2] In 1873, United Kingdom Patent No. 3286 was granted to Charles Davidson and Charles Duffy Williams for a system in which, if a signal were passed at danger, a trackside lever operated the locomotive's whistle, applied the brake, shut off steam and alerted the guard.[3] Numerous similar patents followed, but they all bore the same disadvantage – that they could not be used at higher speeds for risk of damage to the mechanism – and they came to nothing. In Germany, the Kofler system used arms projecting from signal posts to engage with a pair of levers, one representing caution and the other stop, mounted on the locomotive cab roof.
To address the problem of operation at speed, the sprung mounting for the levers was connected directly to the locomotive's axle box to ensure correct alignment.[4] When Berlin's S-Bahn was electrified in 1929, a development of this system, with the contact levers moved from the roofs to the sides of the trains, was installed at the same time.[citation needed] The first useful device was invented by Vincent Raven of the North Eastern Railway in 1895, patent number 23384. Although this provided audible warning only, it did indicate to the driver when points ahead were set for a diverging route. By 1909, the company had installed it on about 100 miles of track. In 1907 Frank Wyatt Prentice patented a radio signalling system using a continuous cable laid between the rails, energised by a spark generator to relay "Hertzian waves" to the locomotive. When the electrical waves were active they caused metal filings in a coherer on the locomotive to clump together and allow a current from a battery to pass. The signal was turned off if the block were not "clear"; no current then passed through the coherer, and a relay turned a white or green light in the cab to red and applied the brakes.[5] The London & South Western Railway installed the system on its Hampton Court branch line in 1911, but removed it shortly afterwards when the line was electrified.[6] The first system to be put into wide use was developed in 1905 by the Great Western Railway (GWR) and protected by UK patents 12661 and 25955. Its benefits over previous systems were that it could be used at high speed and that it sounded a confirmation in the cab when a signal was passed at clear. In the final version of the GWR system, the locomotives were fitted with a solenoid-operated valve into the vacuum train pipe, maintained in the closed position by a battery. At each distant signal, a long ramp was placed between the rails. This ramp consisted of a straight metal blade set edge-on, almost parallel to the direction of travel (the blade was slightly offset from parallel so that in its fixed position it would not wear a groove into the locomotives' contact shoes), mounted on a wooden support. As the locomotive passed over the ramp, a sprung contact shoe beneath the locomotive was lifted and the battery circuit holding the brake valve closed was broken. In the case of a clear signal, current from a lineside battery energising the ramp (but at opposite polarity) passed to the locomotive through the contact and maintained the brake valve in the closed position, with the reversed-polarity current ringing a bell in the cab. To ensure that the mechanism had time to act when the locomotive was travelling at high speed, and the external current was therefore supplied only for an instant, a "slow releasing relay" both extended the period of operation and supplemented the power from the external supply with current from the locomotive battery. Each distant signal had its own battery, operating at 12.5 V or more; the resistance would have been too great if the power came directly from the controlling signal box (the locomotive equipment required 500 mA). Instead, a 3 V circuit from a switch in the signal box operated a relay in the battery box. When the signal was at 'caution' or 'danger', the ramp battery was disconnected and so could not replace the locomotive's battery current: the brake valve solenoid would then be released, causing air to be admitted to the vacuum train pipe via a siren which provided an audible warning as well as slowly applying the train brakes.
The driver was then expected to cancel the warning (restoring the system to its normal state) and apply the brakes under his own control; if he did not, the brake valve solenoid would remain open, causing all vacuum to be lost and the brakes to be fully applied after about 15 seconds. The warning was cancelled by the driver depressing a spring-loaded toggle lever on the ATC apparatus in the cab; the key and circuitry were arranged so that it was the lever returning to its normal position after being depressed, and not the depressing of the lever, that reset the system. This was to prevent the system being overridden by drivers jamming the lever in the downward position, or by the lever accidentally becoming stuck in such a position. In normal use the locomotive battery was subject to constant drain holding closed the valve in the vacuum train pipe, so to keep this to a minimum an automatic cut-off switch was incorporated which disconnected the battery when the locomotive was not in use and the vacuum in the train pipe had dropped away.[7] It was possible for specially equipped GWR locomotives to operate over shared lines electrified on the third-rail principle (Smithfield Market, Paddington Suburban and Addison Road). At the entrance to the electrified sections a particular, high-profile contact ramp (4+1⁄2 in [110 mm] instead of the usual 2+1⁄2 in [64 mm]) raised the locomotive's contact shoe until it engaged with a ratchet on the frame. A corresponding raised ramp at the end of the electrified section released the ratchet. It was found, however, that the heavy traction current could interfere with the reliable operation of the on-board equipment when traversing these routes, and it was for this reason that, in 1949, the otherwise "well proven" GWR system was not selected as the national standard (see below).[7][8] Notwithstanding the heavy commitment of maintaining the lineside and locomotive batteries, the GWR installed the equipment on all its main lines. For many years, Western Region (successors to the GWR) locomotives were dual fitted with both the GWR ATC and BR AWS systems. By the 1930s, other railway companies, under pressure from the Ministry of Transport, were considering systems of their own. A non-contact method based on magnetic induction was preferred, to eliminate the problems caused by snowfall and day-to-day wear of the contacts which had been discovered in existing systems. The Strowger-Hudd system of Alfred Ernest Hudd (c. 1883–1958) used a pair of magnets, one a permanent magnet and one an electromagnet, acting in sequence as the train passed over them. Hudd patented his invention and offered it for development to the Automatic Telephone Manufacturing Company of Liverpool (a subsidiary of the Strowger Automatic Telephone Exchange Company of Chicago, Illinois).[9][10] It was tested by the Southern Railway, London & North Eastern Railway and the London, Midland & Scottish Railway, but these trials came to nothing. In 1948 Hudd, now working for the LMS, equipped the London, Tilbury and Southend line, a division of the LMS, with his system. It was successful, and British Railways developed the mechanism further by providing a visual indication in the cab of the aspect of the last signal passed. In 1956, the Ministry of Transport evaluated the GWR, LTS and BR systems and selected the one developed by BR as standard for Britain's railways.
This was in response to the Harrow & Wealdstone accident in 1952.[8] AWS was later extended to give warnings in further situations.[11] AWS was based on a 1930 system developed by Alfred Ernest Hudd[9] and marketed as the "Strowger-Hudd" system. An earlier contact system, installed on the Great Western Railway since 1906 and known as automatic train control (ATC), was gradually supplanted by AWS within the Western Region of British Railways. Network Rail (NR) AWS works on a set/reset principle. When the signal is at 'clear' or green ("off"), the electromagnet is energised. As the train passes, the permanent magnet sets the system. A short time later, as the train moves forward, the electromagnet resets the system. Once so reset, a bell is sounded (a chime on newer stock) and the indicator is set to all black if it is not already so. No acknowledgement is required from the driver. The system must be reset within one second of being set, otherwise it behaves as for a warning indication. An additional safeguard is included in the distant-signal control wiring to ensure the AWS "clear" indication is only given when the distant is proved "off" – mechanical semaphore distants have a contact in the electromagnet coil circuit which is closed only when the arm is raised or lowered by at least 27.5 degrees. Colour-light signals have a current-sensing relay in the lamp lighting circuit to prove the signal alight; this is used in combination with the relay controlling the green aspect to energise the AWS electromagnet. In a Solid State Interlocking, the signal module has a "Green-Proved" output from its driver electronics that is used to energise the electromagnet. When the distant signal is at 'caution' or yellow (on), the electromagnet is de-energised. As the train passes, the permanent magnet sets the system. However, since the electromagnet is de-energised, the system is not reset. After the one-second delay within which the system can be reset, a horn warning is given until the driver acknowledges it by pressing a plunger. If the driver fails to acknowledge the warning within 2.75 seconds, the brakes are automatically applied. If the driver does acknowledge the warning, the indicator disc changes to yellow and black, to remind the driver that they have acknowledged a warning. The yellow and black indication persists until the next signal and serves as a reminder between signals that the driver is proceeding under caution. The one-second delay before the horn sounds allows the system to operate correctly down to speeds as low as 1+3⁄4 mph (2.8 km/h). Below this speed, the caution horn warning will always be given, but it will be automatically cancelled when the electromagnet resets the system if the driver has not already done so. The display will indicate all black once the system resets itself. The system is fail-safe since, in the event of a loss of power, only the electromagnet is affected, and therefore all passing trains will receive a warning. The system suffers one drawback: on single-track lines, the track equipment will set the AWS system on a train travelling in the opposite direction from that for which the track equipment is intended, but not reset it, as the electromagnet is encountered before the permanent magnet. To overcome this, a suppressor magnet may be installed in place of an ordinary permanent magnet. When energised, its suppressing coil diverts the magnetic flux from the permanent magnet so that no warning is received on the train.
The suppressor magnet is fail-safe, since loss of power will cause it to act like an ordinary permanent magnet. A cheaper alternative is the installation of a lineside sign that notifies the driver to cancel and ignore the warning. This sign is a blue square board with a white St Andrew's cross on it (or a yellow board with a black cross, if provided in conjunction with a temporary speed restriction). With mechanical signalling, the AWS system was installed only at distant signals but, with multi-aspect signalling, it is fitted at all main line signals. All signal aspects except green cause the horn to sound and the indicator disc to change to yellow on black. AWS equipment without electromagnets is fitted at locations where a caution signal is invariably required or where a temporary caution is needed (for example, a temporary speed restriction). This is a secondary advantage of the system, because temporary AWS equipment need only contain a permanent magnet; no electrical connection or supply is needed. In this case, the warning indication in the cab will persist until the next green signal is encountered. To verify that the on-train equipment is functioning correctly, motive power depot exit lines are fitted with a 'Shed Test Inductor' that produces a warning indication for vehicles entering service. Due to the low speed used on such lines, the size of the track equipment is reduced from that found on the operational network. 'Standard Strength' magnets are used everywhere except in DC third-rail electrification areas and are painted yellow. The minimum field strength to operate the on-train equipment is 2 milliteslas (measured 125 mm [5 in] above the track equipment casing). Typical track equipment produces a field of 5 mT (measured under the same conditions). Shed Test Inductors typically produce a field of 2.5 mT (measured under the same conditions). Where DC third-rail electrification is installed, 'Extra Strength' magnets are fitted and are painted green. This is because the current in the third rail produces a magnetic field of its own which would swamp the 'Standard Strength' magnets. AWS is provided at most main aspect signals on running lines, though there are some exceptions.[1] Because the permanent magnet is located in the centre of the track, it operates in both directions. The permanent magnet can be suppressed by an electric coil of suitable strength. Where signals applying to opposing directions of travel on the same line are suitably positioned relative to each other (i.e. facing each other and about 400 yd apart), common track equipment may be used, comprising an unsuppressed permanent magnet sandwiched between the two signals' electromagnets. The BR AWS system is also used on some railways outside Great Britain.
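The polarity-sequence and set/reset behaviour described above can be condensed into a small, purely illustrative Python model (not any real implementation; the timing constants follow the figures quoted in the text):

```python
def aws_indication(pulses, reset_window=1.0):
    """Illustrative model of the AWS set/reset principle.

    `pulses` is the sequence of (polarity, time_in_seconds) events seen by
    the receiver as the train passes over the track magnets. A south pulse
    (permanent magnet) sets the system; a north pulse (energised
    electromagnet) within the reset window resets it, giving a clear
    indication. Otherwise a warning horn sounds.
    """
    for i, (polarity, t) in enumerate(pulses):
        if polarity == "south":  # permanent magnet: system is "set"
            for polarity2, t2 in pulses[i + 1:]:
                if polarity2 == "north" and (t2 - t) <= reset_window:
                    return "bell: next signal clear"
            return "horn: acknowledge within 2.75 s or the brakes apply"
    return "no indication"

# Signal at green: permanent magnet followed by an energised electromagnet.
print(aws_indication([("south", 0.0), ("north", 0.1)]))
# Signal at caution: electromagnet de-energised, so only the south pulse.
print(aws_indication([("south", 0.0)]))
```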
https://en.wikipedia.org/wiki/Automatic_Warning_System
In operator theory, a bounded operator T : X → Y between normed vector spaces X and Y is said to be a contraction if its operator norm satisfies ||T|| ≤ 1. This notion is a special case of the concept of a contraction mapping, but every bounded operator becomes a contraction after suitable scaling. The analysis of contractions provides insight into the structure of operators, or of a family of operators. The theory of contractions on Hilbert space is largely due to Béla Szőkefalvi-Nagy and Ciprian Foias. If T is a contraction acting on a Hilbert space $\mathcal{H}$, the following basic objects associated with T can be defined. The defect operators of T are the operators $D_T = (1 - T^*T)^{1/2}$ and $D_{T^*} = (1 - TT^*)^{1/2}$. The square root is the positive semidefinite one given by the spectral theorem. The defect spaces $\mathcal{D}_T$ and $\mathcal{D}_{T^*}$ are the closures of the ranges $\operatorname{Ran}(D_T)$ and $\operatorname{Ran}(D_{T^*})$ respectively. The positive operator $D_T$ induces an inner product on $\mathcal{H}$; the resulting inner product space can be identified naturally with $\operatorname{Ran}(D_T)$. A similar statement holds for $\mathcal{D}_{T^*}$. The defect indices of T are the pair $(\dim \mathcal{D}_T, \dim \mathcal{D}_{T^*})$. The defect operators and the defect indices are a measure of the non-unitarity of T. A contraction T on a Hilbert space can be canonically decomposed into an orthogonal direct sum $T = \Gamma \oplus U$, where U is a unitary operator and Γ is completely non-unitary in the sense that it has no non-zero reducing subspaces on which its restriction is unitary. If U = 0, T is said to be a completely non-unitary contraction. A special case of this decomposition is the Wold decomposition for an isometry, where Γ is a proper isometry. Contractions on Hilbert spaces can be viewed as the operator analogs of cos θ and are called operator angles in some contexts. The explicit description of contractions leads to (operator-)parametrizations of positive and unitary matrices. Sz.-Nagy's dilation theorem, proved in 1953, states that for any contraction T on a Hilbert space $\mathcal{H}$, there is a unitary operator U on a larger Hilbert space $\mathcal{K} \supseteq \mathcal{H}$ such that if P is the orthogonal projection of $\mathcal{K}$ onto $\mathcal{H}$ then $T^n = P U^n P$ for all n > 0. The operator U is called a dilation of T and is uniquely determined if U is minimal, i.e. $\mathcal{K}$ is the smallest closed subspace invariant under U and U* containing $\mathcal{H}$. In fact, define[1] $\mathcal{H}_\infty = \mathcal{H} \oplus \mathcal{H} \oplus \cdots$, the orthogonal direct sum of countably many copies of $\mathcal{H}$. Let V be the isometry on $\mathcal{H}_\infty$ defined by $V(h_0, h_1, h_2, \ldots) = (T h_0, D_T h_0, h_1, h_2, \ldots)$. Let $\mathcal{K} = \mathcal{H}_\infty \oplus \mathcal{H}_\infty$. Define a unitary W on $\mathcal{K}$ by $W(x, y) = (Vx + (1 - VV^*)y,\; V^*y)$. W is then a unitary dilation of T, with $\mathcal{H}$ considered as the first component of $\mathcal{H} \subset \mathcal{K}$. The minimal dilation U is obtained by taking the restriction of W to the closed subspace generated by powers of W applied to $\mathcal{H}$. There is an alternative proof of Sz.-Nagy's dilation theorem, which allows significant generalization.[2] Let G be a group, U(g) a unitary representation of G on a Hilbert space $\mathcal{K}$, and P an orthogonal projection onto a closed subspace $\mathcal{H} = P\mathcal{K}$ of $\mathcal{K}$. The operator-valued function $\Phi(g) = P\,U(g)\,P$, with values in operators on $\mathcal{H}$, satisfies the positive-definiteness condition $\sum_{i,j} \langle \Phi(g_j^{-1} g_i)\,v_i, v_j \rangle \geq 0$, where the $g_i$ range over finitely many elements of G and the $v_i$ over $\mathcal{H}$. Moreover, $\Phi(e) = I$. Conversely, every operator-valued positive-definite function arises in this way. Recall that every (continuous) scalar-valued positive-definite function on a topological group induces an inner product and group representation φ(g) = ⟨U_g v, v⟩, where U_g is a (strongly continuous) unitary representation (see Bochner's theorem). Replacing v, a rank-1 projection, by a general projection gives the operator-valued statement. In fact the construction is identical; this is sketched below.
Let $\mathcal{H}$ be the space of functions on G of finite support with values in H, with inner product $\langle f_1, f_2 \rangle = \sum_{g,h \in G} \langle \Phi(h^{-1}g)\, f_1(g), f_2(h) \rangle$. G acts unitarily on $\mathcal{H}$ by translation, $(U(g)f)(h) = f(g^{-1}h)$. Moreover, H can be identified with a closed subspace of $\mathcal{H}$ using the isometric embedding sending v in H to $f_v$ with $f_v(e) = v$ and $f_v(g) = 0$ for $g \neq e$. If P is the projection of $\mathcal{H}$ onto H, then $P\,U(g)\,P = \Phi(g)$, using the above identification. When G is a separable topological group, Φ is continuous in the strong (or weak) operator topology if and only if U is. In this case functions supported on a countable dense subgroup of G are dense in $\mathcal{H}$, so that $\mathcal{H}$ is separable. When G = Z, any contraction operator T defines such a function Φ through $\Phi(0) = I$, $\Phi(n) = T^n$ and $\Phi(-n) = (T^*)^n$ for n > 0. The above construction then yields a minimal unitary dilation. The same method can be applied to prove a second dilation theorem of Sz.-Nagy for a one-parameter strongly continuous contraction semigroup T(t) (t ≥ 0) on a Hilbert space H. Cooper (1947) had previously proved the result for one-parameter semigroups of isometries.[3] The theorem states that there is a larger Hilbert space K containing H and a unitary representation U(t) of R such that $T(t) = P\,U(t)\,P$, and the translates U(t)H generate K. In fact T(t) defines a continuous operator-valued positive-definite function Φ on R through $\Phi(0) = I$, $\Phi(t) = T(t)$ and $\Phi(-t) = T(t)^*$ for t > 0. Φ is positive-definite on cyclic subgroups of R, by the argument for Z, and hence on R itself by continuity. The previous construction yields a minimal unitary representation U(t) and projection P. The Hille–Yosida theorem assigns a closed unbounded operator A to every contractive one-parameter semigroup T(t) through $A\xi = \lim_{t \downarrow 0} \tfrac{1}{t}(T(t)\xi - \xi)$, where the domain of A consists of all ξ for which this limit exists. A is called the generator of the semigroup and satisfies $\tfrac{d}{dt}\,T(t)\xi = A\,T(t)\xi$ on its domain. When A is a self-adjoint operator, $T(t) = e^{At}$ in the sense of the spectral theorem, and this notation is used more generally in semigroup theory. The cogenerator of the semigroup is the contraction defined by $T = (A + I)(A - I)^{-1}$. A can be recovered from T using the formula $A = (T + I)(T - I)^{-1}$. In particular a dilation of T on K ⊃ H immediately gives a dilation of the semigroup.[4] Let T be a completely non-unitary contraction on H. Then the minimal unitary dilation U of T on K ⊃ H is unitarily equivalent to a direct sum of copies of the bilateral shift operator, i.e. multiplication by z on L²(S¹).[5] If P is the orthogonal projection onto H then for f in L∞ = L∞(S¹) the operator f(T) can be defined by $f(T) = P\,f(U)\,P$. Let H∞ be the space of bounded holomorphic functions on the unit disk D. Any such function has boundary values in L∞ and is uniquely determined by these, so that there is an embedding H∞ ⊂ L∞. For f in H∞, f(T) can be defined without reference to the unitary dilation. In fact, if $f(z) = \sum_{n \geq 0} a_n z^n$ for |z| < 1 and $f_r(z) = f(rz)$ for r < 1, then $f_r$ is holomorphic on |z| < 1/r. In that case $f_r(T)$ is defined by the holomorphic functional calculus, and f(T) can be defined by $f(T)\xi = \lim_{r \to 1} f_r(T)\xi$. The map sending f to f(T) defines an algebra homomorphism of H∞ into the bounded operators on H. This map has the following continuity property: if a uniformly bounded sequence $f_n$ tends almost everywhere to f, then $f_n(T)$ tends to f(T) in the strong operator topology. For t ≥ 0, let $e_t$ be the inner function $e_t(z) = \exp\left(t\,\frac{z+1}{z-1}\right)$ for |z| < 1. If T is the cogenerator of a one-parameter semigroup of completely non-unitary contractions T(t), then $T(t) = e_t(T)$ for all t ≥ 0. A completely non-unitary contraction T is said to belong to the class C₀ if and only if f(T) = 0 for some non-zero f in H∞. In this case the set of such f forms an ideal in H∞. It has the form φ·H∞, where φ is an inner function, i.e.
such that |φ| = 1 on S¹; φ is uniquely determined up to multiplication by a complex number of modulus 1 and is called the minimal function of T. It has properties analogous to the minimal polynomial of a matrix. The minimal function φ admits a canonical factorization $\varphi(z) = c\,B(z)\,e^{-P(z)}$, where |c| = 1, B(z) is a Blaschke product $B(z) = \prod_i \left[\frac{|\lambda_i|}{\lambda_i}\,\frac{\lambda_i - z}{1 - \overline{\lambda_i}\,z}\right]^{m_i}$ with $\sum_i m_i\,(1 - |\lambda_i|) < \infty$, and P(z) is holomorphic with non-negative real part in D. By the Herglotz representation theorem, $P(z) = \int_0^{2\pi} \frac{e^{i\theta} + z}{e^{i\theta} - z}\, d\mu(\theta)$ for some non-negative finite measure μ on the circle: in this case, if non-zero, μ must be singular with respect to Lebesgue measure. In the above decomposition of φ, either of the two factors can be absent. The minimal function φ determines the spectrum of T. Within the unit disk, the spectral values are the zeros of φ. There are at most countably many such λᵢ, all eigenvalues of T, the zeros of B(z). A point of the unit circle does not lie in the spectrum of T if and only if φ has a holomorphic continuation to a neighborhood of that point. φ reduces to a Blaschke product exactly when H equals the closure of the direct sum (not necessarily orthogonal) of the generalized eigenspaces.[6] Two contractions T₁ and T₂ are said to be quasi-similar when there are bounded operators A, B with trivial kernel and dense range such that $A T_1 = T_2 A$ and $B T_2 = T_1 B$. A number of properties of a contraction T are preserved under quasi-similarity; in particular, two quasi-similar C₀ contractions have the same minimal function and hence the same spectrum. The classification theorem for C₀ contractions states that two multiplicity-free C₀ contractions are quasi-similar if and only if they have the same minimal function (up to a scalar multiple).[7] A model for multiplicity-free C₀ contractions with minimal function φ is given by taking $\mathcal{H} = H^2 \ominus \varphi H^2$, where H² is the Hardy space of the circle, and letting T be multiplication by z compressed to $\mathcal{H}$.[8] Such operators are called Jordan blocks and denoted S(φ). As a generalization of Beurling's theorem, the commutant of such an operator consists exactly of operators ψ(T) with ψ in H∞, i.e. multiplication operators on H² corresponding to functions in H∞. A C₀ contraction operator T is multiplicity-free if and only if it is quasi-similar to a Jordan block (necessarily the one corresponding to its minimal function). Example: if S is the diagonal operator given by $S e_i = \lambda_i e_i$ on an orthonormal basis $(e_i)$, with the λᵢ distinct, of modulus less than 1, and such that $\sum_i (1 - |\lambda_i|) < \infty$, then S, and hence any contraction T quasi-similar to it, is C₀ and multiplicity-free. Hence H is the closure of the direct sum of the λᵢ-eigenspaces of T, each having multiplicity one. This can also be seen directly using the definition of quasi-similarity. Classification theorem for C₀ contractions: every C₀ contraction is canonically quasi-similar to a direct sum of Jordan blocks. In fact every C₀ contraction is quasi-similar to a unique operator of the form $S(\varphi_1) \oplus S(\varphi_2) \oplus \cdots$, where the φₙ are uniquely determined inner functions, with φ₁ the minimal function of S and hence of T.[10]
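To make the dilation machinery concrete, here is the standard one-step (Halmos) unitary dilation written out, a sketch using only the defect operators defined earlier; the full Sz.-Nagy theorem, which dilates all powers simultaneously, requires the infinite construction described above:

```latex
U \;=\; \begin{pmatrix} T & D_{T^*} \\ D_T & -T^* \end{pmatrix}
\quad \text{on } \mathcal{H} \oplus \mathcal{H}.
% Using the intertwining relation  T D_T = D_{T^*} T  and  D_T^2 = 1 - T^*T,
% one checks that  U^*U = UU^* = I.  Compressing to the first summand gives
% P U|_{\mathcal{H}} = T,  the n = 1 case of  T^n = P U^n P.
```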
https://en.wikipedia.org/wiki/Contraction_(operator_theory)
Transact-SQL (T-SQL) is Microsoft's and Sybase's proprietary extension to SQL (Structured Query Language), used to interact with relational databases. T-SQL expands on the SQL standard to include procedural programming, local variables, various support functions for string processing, date processing, mathematics, etc., and changes to the DELETE and UPDATE statements. Transact-SQL is central to using Microsoft SQL Server. All applications that communicate with an instance of SQL Server do so by sending Transact-SQL statements to the server, regardless of the user interface of the application. Stored procedures in SQL Server are executable server-side routines; one advantage of stored procedures is the ability to pass parameters. Transact-SQL provides the following statements to declare and set local variables: DECLARE, SET and SELECT. Keywords for flow control in Transact-SQL include BEGIN and END, BREAK, CONTINUE, GOTO, IF and ELSE, RETURN, WAITFOR, and WHILE. IF and ELSE allow conditional execution; for example, a batch can print "It is the weekend" if the current date is a weekend day, or "It is a weekday" otherwise. (Note: such code may assume that Sunday is configured as the first day of the week in the @@DATEFIRST setting.) BEGIN and END mark a block of statements; if more than one statement is to be controlled by a conditional, the statements are wrapped between BEGIN and END. WAITFOR will wait for a given amount of time, or until a particular time of day. The statement can be used for delays or to block execution until the set time. RETURN is used to immediately return from a stored procedure or function. BREAK ends the enclosing WHILE loop, while CONTINUE causes the next iteration of the loop to execute (a WHILE loop appears in the sketch below). In Transact-SQL, both the DELETE and UPDATE statements are enhanced to enable data from another table to be used in the operation, without needing a subquery; for example, a DELETE joined against a user_flags table can delete all users who have been flagged with the 'idle' flag. BULK is a Transact-SQL statement that implements a bulk data-loading process, inserting multiple rows into a table, reading data from an external sequential file. Use of BULK INSERT results in better performance than processes that issue individual INSERT statements for each row to be added. Additional details are available in MSDN. Beginning with SQL Server 2005,[1] Microsoft introduced additional TRY CATCH logic to support exception-type behaviour. This behaviour enables developers to simplify their code and leave out @@ERROR checking after each SQL execution statement.
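Since the original code samples did not survive extraction, here is a small illustrative sketch (not from the original article): a Python script using the pyodbc driver to send T-SQL batches to SQL Server, exercising the IF/ELSE weekend test and a WHILE loop with BREAK and CONTINUE described above. The connection string is hypothetical; adjust server, database and credentials for a real instance. Using DATENAME rather than DATEPART sidesteps the @@DATEFIRST dependence, though DATENAME output does depend on the session language.

```python
import pyodbc  # any DB-API driver for SQL Server would work similarly

# Hypothetical connection string; replace as needed for your environment.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=tempdb;Trusted_Connection=yes"
)
cursor = conn.cursor()

# IF / ELSE: weekend vs weekday test.
cursor.execute("""
IF DATENAME(WEEKDAY, GETDATE()) IN ('Saturday', 'Sunday')
    SELECT 'It is the weekend' AS msg
ELSE
    SELECT 'It is a weekday' AS msg
""")
print(cursor.fetchone().msg)

# WHILE with BREAK / CONTINUE: sum 1..7, skipping 3.
cursor.execute("""
DECLARE @i INT = 0, @total INT = 0;
WHILE @i < 10
BEGIN
    SET @i += 1;
    IF @i = 3 CONTINUE;      -- skip this iteration
    IF @i > 7 BREAK;         -- leave the loop early
    SET @total += @i;
END
SELECT @total AS total;      -- 1+2+4+5+6+7 = 25
""")
print(cursor.fetchone().total)
```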
https://en.wikipedia.org/wiki/Transact-SQL
In graph theory, a mathematical discipline, a factor-critical graph (or hypomatchable graph[1][2]) is a graph with an odd number of vertices in which deleting one vertex in every possible way results in a graph with a perfect matching, a way of grouping the remaining vertices into adjacent pairs. A matching of all but one vertex of a graph is called a near-perfect matching. So, equivalently, a factor-critical graph is a graph in which there are near-perfect matchings that avoid every possible vertex. Factor-critical graphs may be characterized in several different ways other than their definition as graphs in which each vertex deletion allows for a perfect matching. Any odd-length cycle graph is factor-critical,[1] as is any complete graph with an odd number of vertices.[7] More generally, whenever a graph has an odd number of vertices and contains a Hamiltonian cycle, it is factor-critical. In such a graph, the near-perfect matchings can be obtained by removing one vertex from the cycle and choosing matched edges in alternation along the remaining path. The friendship graphs (graphs formed by connecting a collection of triangles at a single common vertex) provide examples of graphs that are factor-critical but do not have Hamiltonian cycles. If a graph G is factor-critical, then so is the Mycielskian of G. For instance, the Grötzsch graph, the Mycielskian of a five-vertex cycle graph, is factor-critical.[8] Every 2-vertex-connected claw-free graph with an odd number of vertices is factor-critical, because removing any vertex will leave a connected claw-free graph with an even number of vertices, and these always have a perfect matching.[9] Examples include the 5-vertex graph of a square pyramid and the 11-vertex graph of the gyroelongated pentagonal pyramid. Factor-critical graphs must always have an odd number of vertices, and must be 2-edge-connected (that is, they cannot have any bridges).[10] However, they are not necessarily 2-vertex-connected; the friendship graphs provide a counterexample. It is not possible for a factor-critical graph to be bipartite, because in a bipartite graph with a near-perfect matching, the only vertices that can be deleted to produce a perfectly matchable graph are the ones on the larger side of the bipartition. Every 2-vertex-connected factor-critical graph with m edges has at least m different near-perfect matchings, and more generally every factor-critical graph with m edges and c blocks (2-vertex-connected components) has at least m − c + 1 different near-perfect matchings. The graphs for which these bounds are tight may be characterized by having odd ear decompositions of a specific form.[7] Any connected graph may be transformed into a factor-critical graph by contracting sufficiently many of its edges. The minimal sets of edges that need to be contracted to make a given graph G factor-critical form the bases of a matroid, a fact that implies that a greedy algorithm may be used to find the minimum weight set of edges to contract to make a graph factor-critical, in polynomial time.[11] A blossom is a factor-critical subgraph of a larger graph. Blossoms play a key role in Jack Edmonds' algorithms for maximum matching and minimum weight perfect matching in non-bipartite graphs.[12] In polyhedral combinatorics, factor-critical graphs play an important role in describing facets of the matching polytope of a given graph.[1][2] A graph is said to be k-factor-critical if every subset of n − k vertices has a perfect matching.
Under this definition, a hypomatchable graph is 1-factor-critical.[13] Even more generally, a graph is (r, k)-factor-critical if every subset of n − k vertices has an r-factor, that is, if every such subset is the vertex set of an r-regular subgraph of the given graph. A critical graph (without qualification) is usually assumed to mean a graph for which removing any one of its vertices reduces the number of colors it needs in a graph coloring. The concept of criticality has been used much more generally in graph theory to refer to graphs for which removing each possible vertex changes or does not change some relevant property of the graph. A matching-critical graph is a graph for which the removal of any vertex does not change the size of a maximum matching; by Gallai's characterization, the matching-critical graphs are exactly the graphs in which every connected component is factor-critical.[14] The complement graph of a critical graph is necessarily matching-critical, a fact that was used by Gallai to prove lower bounds on the number of vertices in a critical graph.[15] Beyond graph theory, the concept of factor-criticality has been extended to matroids by defining a type of ear decomposition on matroids and defining a matroid to be factor-critical if it has an ear decomposition in which all ears are odd.[16]
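The defining property can be checked directly by brute force: delete each vertex in turn and test whether the remainder has a perfect matching. A small sketch using the networkx library (the helper function name is ours, not from the article):

```python
import networkx as nx

def is_factor_critical(G):
    """Brute-force test: G is factor-critical iff it has an odd number of
    vertices and G - v has a perfect matching for every vertex v."""
    n = G.number_of_nodes()
    if n % 2 == 0:
        return False
    for v in list(G.nodes):
        H = G.copy()
        H.remove_node(v)
        # A maximum-cardinality matching is perfect iff it covers all n-1 vertices.
        matching = nx.max_weight_matching(H, maxcardinality=True)
        if 2 * len(matching) != n - 1:
            return False
    return True

print(is_factor_critical(nx.cycle_graph(5)))      # True: odd cycle
print(is_factor_critical(nx.complete_graph(7)))   # True: odd complete graph
print(is_factor_critical(nx.path_graph(5)))       # False: deleting an inner vertex fails
```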
https://en.wikipedia.org/wiki/Factor-critical_graph
Biological databases are stores of biological information.[1] The journal Nucleic Acids Research regularly publishes special issues on biological databases and has a list of such databases. The 2018 issue has a list of about 180 such databases and updates to previously described databases.[2] The Omics Discovery Index can be used to browse and search several biological databases. Furthermore, the NIAID Data Ecosystem Discovery Portal developed by the National Institute of Allergy and Infectious Diseases (NIAID) enables searching across databases. Meta databases are databases of databases that collect data about data to generate new data. They are capable of merging information from different sources and making it available in a new and more convenient form, or with an emphasis on a particular disease or organism. Originally, metadata was only a common term referring simply to data about data, such as tags, keywords, and markup headers. Model organism databases provide in-depth biological data for intensively studied organisms. The primary databases make up the International Nucleotide Sequence Database (INSD). They include DDBJ (Japan), GenBank (USA) and the European Nucleotide Archive (Europe), which are repositories for nucleotide sequence data from all organisms. All three accept nucleotide sequence submissions, and then exchange new and updated data on a daily basis to achieve optimal synchronisation between them. These three databases are primary databases, as they house original sequence data. They collaborate with the Sequence Read Archive (SRA), which archives raw reads from high-throughput sequencing instruments. Beyond the primary databases there are secondary databases,[clarification needed] other specialized databases, and generic and microarray gene expression databases. Genome databases collect genome sequences, annotate and analyze them, and provide public access. Some add curation of experimental literature to improve computed annotations. These databases may hold many species' genomes, or a single model organism's genome. (See also: List of proteins in the human body.) Several publicly available data repositories and resources have been developed to support and manage protein-related information, biological knowledge discovery and data-driven hypothesis generation.[15] Prominent protein databases are those listed in the Nucleic Acids Research (NAR) database issues and database collection and those cross-referenced in UniProtKB. Most of these databases are cross-referenced with UniProt/UniProtKB, so that identifiers can be mapped to each other.[15] There are about 20,000 protein-coding genes in the standard human genome (roughly 1,200 of them already have Wikipedia articles, through the Gene Wiki project); including splice variants, there could be as many as 500,000 unique human proteins.[16] Numerous databases collect information about species and other taxonomic categories. The Catalogue of Life is a special case, as it is a meta-database of about 150 specialized "global species databases" (GSDs) that have collected the names and other information on (almost) all described and thus "known" species. Images play a critical role in biomedicine, ranging from images of anthropological specimens to zoology. However, there are relatively few databases dedicated to image collection, although some projects such as iNaturalist collect photos as a main part of their data. A special case of "images" are 3-dimensional images, such as protein structures or 3D reconstructions of anatomical structures.
Image databases include, among others:[22]
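As a practical illustration of programmatic access to one of the primary sequence databases mentioned above, here is a hedged sketch using Biopython's Entrez module to fetch a GenBank nucleotide record; the email address is a placeholder (NCBI requires a contact address for E-utilities requests), and the accession number is used purely as an example:

```python
from Bio import Entrez

Entrez.email = "your.name@example.org"  # placeholder; NCBI asks for a real contact address

# Fetch a nucleotide record from GenBank in FASTA format.
# "NM_000546" (a human TP53 mRNA accession) serves here only as an example.
handle = Entrez.efetch(db="nucleotide", id="NM_000546", rettype="fasta", retmode="text")
print(handle.read())
handle.close()
```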
https://en.wikipedia.org/wiki/List_of_biological_databases
In mathematics, the limit inferior and limit superior of a sequence can be thought of as limiting (that is, eventual and extreme) bounds on the sequence. They can be thought of in a similar fashion for a function (see limit of a function). For a set, they are the infimum and supremum of the set's limit points, respectively. In general, when there are multiple objects around which a sequence, function, or set accumulates, the inferior and superior limits extract the smallest and largest of them; the type of object and the measure of size is context-dependent, but the notion of extreme limits is invariant. Limit inferior is also called infimum limit, limit infimum, liminf, inferior limit, lower limit, or inner limit; limit superior is also known as supremum limit, limit supremum, limsup, superior limit, upper limit, or outer limit. The limit inferior of a sequence $(x_n)$ is denoted by $\liminf_{n\to\infty} x_n$ or $\varliminf_{n\to\infty} x_n$, and the limit superior of a sequence $(x_n)$ is denoted by $\limsup_{n\to\infty} x_n$ or $\varlimsup_{n\to\infty} x_n$. The limit inferior of a sequence $(x_n)$ is defined by $\liminf_{n\to\infty} x_n := \lim_{n\to\infty}\Big(\inf_{m\geq n} x_m\Big)$, or equivalently $\liminf_{n\to\infty} x_n := \sup_{n\geq 0}\,\inf_{m\geq n} x_m = \sup\,\{\,\inf\,\{\,x_m : m \geq n\,\} : n \geq 0\,\}$. Similarly, the limit superior of $(x_n)$ is defined by $\limsup_{n\to\infty} x_n := \lim_{n\to\infty}\Big(\sup_{m\geq n} x_m\Big)$, or equivalently $\limsup_{n\to\infty} x_n := \inf_{n\geq 0}\,\sup_{m\geq n} x_m = \inf\,\{\,\sup\,\{\,x_m : m \geq n\,\} : n \geq 0\,\}$. The limits superior and inferior can equivalently be defined using the concept of subsequential limits of the sequence $(x_n)$.[1] An element $\xi$ of the extended real numbers $\overline{\mathbb{R}}$ is a subsequential limit of $(x_n)$ if there exists a strictly increasing sequence of natural numbers $(n_k)$ such that $\xi = \lim_{k\to\infty} x_{n_k}$. If $E \subseteq \overline{\mathbb{R}}$ is the set of all subsequential limits of $(x_n)$, then $\limsup_{n\to\infty} x_n = \sup E$ and $\liminf_{n\to\infty} x_n = \inf E$. If the terms in the sequence are real numbers, the limit superior and limit inferior always exist, as the real numbers together with ±∞ (i.e. the extended real number line) are complete. More generally, these definitions make sense in any partially ordered set, provided the suprema and infima exist, such as in a complete lattice. Whenever the ordinary limit exists, the limit inferior and limit superior are both equal to it; therefore, each can be considered a generalization of the ordinary limit which is primarily interesting in cases where the limit does not exist.
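A standard worked example (stated here purely for illustration): for the oscillating sequence $x_n = (-1)^n\left(1 + \tfrac{1}{n}\right)$, the tail suprema and infima converge to the two accumulation points, so

```latex
x_n = (-1)^n\left(1 + \tfrac{1}{n}\right):\qquad
\limsup_{n\to\infty} x_n = 1, \qquad \liminf_{n\to\infty} x_n = -1,
% while \lim_{n\to\infty} x_n does not exist.
% A strictness example for the subadditivity property discussed below:
% with a_n = (-1)^n and b_n = (-1)^{n+1},
% \limsup_{n\to\infty}(a_n + b_n) = 0 < 2 = \limsup a_n + \limsup b_n .
```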
Whenever $\liminf x_n$ and $\limsup x_n$ both exist, we have $\liminf_{n\to\infty} x_n \leq \limsup_{n\to\infty} x_n$. The limits inferior and superior are related to big-O notation in that they bound a sequence only "in the limit"; the sequence may exceed the bound. However, with big-O notation the sequence can only exceed the bound in a finite prefix of the sequence, whereas the limit superior of a sequence like $e^{-n}$ may actually be less than all elements of the sequence. The only promise made is that some tail of the sequence can be bounded above by the limit superior plus an arbitrarily small positive constant, and bounded below by the limit inferior minus an arbitrarily small positive constant. The limit superior and limit inferior of a sequence are a special case of those of a function (see below). In mathematical analysis, limit superior and limit inferior are important tools for studying sequences of real numbers. Since the supremum and infimum of an unbounded set of real numbers may not exist (the reals are not a complete lattice), it is convenient to consider sequences in the affinely extended real number system: we add the positive and negative infinities to the real line to give the complete totally ordered set [−∞, ∞], which is a complete lattice. Consider a sequence $(x_n)$ consisting of real numbers. Assume that the limit superior and limit inferior are real numbers (so, not infinite). The relationship of limit inferior and limit superior for sequences of real numbers is as follows: $\limsup_{n\to\infty}(-x_n) = -\liminf_{n\to\infty} x_n$. As mentioned earlier, it is convenient to extend $\mathbb{R}$ to $[-\infty,\infty]$. Then, $(x_n)$ in $[-\infty,\infty]$ converges if and only if $\liminf_{n\to\infty} x_n = \limsup_{n\to\infty} x_n$, in which case $\lim_{n\to\infty} x_n$ is equal to their common value. (Note that when working just in $\mathbb{R}$, convergence to $-\infty$ or $\infty$ would not be considered as convergence.) Since the limit inferior is at most the limit superior, the following conditions hold: $\liminf_{n\to\infty} x_n = \infty$ implies $\lim_{n\to\infty} x_n = \infty$, and $\limsup_{n\to\infty} x_n = -\infty$ implies $\lim_{n\to\infty} x_n = -\infty$. If $I = \liminf_{n\to\infty} x_n$ and $S = \limsup_{n\to\infty} x_n$, then the interval $[I, S]$ need not contain any of the numbers $x_n$, but every slight enlargement $[I - \epsilon, S + \epsilon]$, for arbitrarily small $\epsilon > 0$, will contain $x_n$ for all but finitely many indices $n$. In fact, the interval $[I, S]$ is the smallest closed interval with this property.
We can formalize the enlarged-interval property like this: for every ε > 0 there exist subsequences x_{k_n} and x_{h_n} of x_n (where k_n and h_n are increasing) for which we have

\liminf_{n\to\infty} x_n + ε > x_{h_n} and x_{k_n} > \limsup_{n\to\infty} x_n − ε.

On the other hand, there exists an n_0 ∈ \mathbb{N} so that for all n ≥ n_0,

\liminf_{n\to\infty} x_n − ε < x_n < \limsup_{n\to\infty} x_n + ε.

To recapitulate: every number larger than the limit superior eventually bounds the sequence from above, while the sequence rises above every number smaller than the limit superior infinitely often; dually, every number smaller than the limit inferior eventually bounds the sequence from below, while the sequence drops below every number larger than the limit inferior infinitely often. Conversely, it can also be shown that these two pairs of properties characterize the limit superior and the limit inferior, respectively.

In general,

\inf_n x_n \le \liminf_{n\to\infty} x_n \le \limsup_{n\to\infty} x_n \le \sup_n x_n.

The liminf and limsup of a sequence are respectively the smallest and greatest cluster points.[3]

The limit superior satisfies subadditivity, \limsup_{n\to\infty}(a_n + b_n) \le \limsup_{n\to\infty} a_n + \limsup_{n\to\infty} b_n. Analogously, the limit inferior satisfies superadditivity: \liminf_{n\to\infty}(a_n + b_n) \ge \liminf_{n\to\infty} a_n + \liminf_{n\to\infty} b_n. In the particular case that one of the sequences actually converges, say a_n → a, then the inequalities above become equalities (with \limsup_{n\to\infty} a_n or \liminf_{n\to\infty} a_n being replaced by a). For sequences of non-negative real numbers, the analogous multiplicative inequalities, \limsup_{n\to\infty}(a_n b_n) \le (\limsup_{n\to\infty} a_n)(\limsup_{n\to\infty} b_n) and \liminf_{n\to\infty}(a_n b_n) \ge (\liminf_{n\to\infty} a_n)(\liminf_{n\to\infty} b_n), hold whenever the right-hand side is not of the form 0 · ∞.

If \lim_{n\to\infty} a_n = A exists (including the case A = +∞) and B = \limsup_{n\to\infty} b_n, then \limsup_{n\to\infty}(a_n b_n) = AB provided that AB is not of the form 0 · ∞.

Assume that a function is defined from a subset of the real numbers to the real numbers. As in the case for sequences, the limit inferior and limit superior are always well-defined if we allow the values +∞ and −∞; in fact, if both agree then the limit exists and is equal to their common value (again possibly including the infinities). For example, given f(x) = sin(1/x), we have \limsup_{x\to 0} f(x) = 1 and \liminf_{x\to 0} f(x) = −1. The difference between the two is a rough measure of how "wildly" the function oscillates, and in observation of this fact, it is called the oscillation of f at 0. This idea of oscillation is sufficient to, for example, characterize Riemann-integrable functions as continuous except on a set of measure zero.[5] Note that points of nonzero oscillation (i.e., points at which f is "badly behaved") are discontinuities which, for a Riemann-integrable function, must be confined to a set of measure zero, that is, a negligible set.

There is a notion of limsup and liminf for functions defined on a metric space whose relationship to limits of real-valued functions mirrors that of the relation between the limsup, liminf, and the limit of a real sequence. Take a metric space X, a subspace E contained in X, and a function f : E → \mathbb{R}.
Define, for any limit point a of E,

\limsup_{x\to a} f(x) = \lim_{ε\to 0} \big( \sup \{ f(x) : x ∈ E ∩ B(a, ε) \setminus \{a\} \} \big)

and

\liminf_{x\to a} f(x) = \lim_{ε\to 0} \big( \inf \{ f(x) : x ∈ E ∩ B(a, ε) \setminus \{a\} \} \big),

where B(a, ε) denotes the metric ball of radius ε about a.

Note that as ε shrinks, the supremum of the function over the ball is non-increasing (strictly decreasing or remaining the same), so we have

\limsup_{x\to a} f(x) = \inf_{ε>0} \big( \sup \{ f(x) : x ∈ E ∩ B(a, ε) \setminus \{a\} \} \big)

and similarly

\liminf_{x\to a} f(x) = \sup_{ε>0} \big( \inf \{ f(x) : x ∈ E ∩ B(a, ε) \setminus \{a\} \} \big).

This finally motivates the definitions for general topological spaces. Take X, E and a as before, but now let X be a topological space. In this case, we replace metric balls with neighborhoods: taking the infimum of the suprema (respectively, the supremum of the infima) of f over E ∩ U \setminus \{a\} as U ranges over the neighborhoods of a. (There is a way to write the formula using lim with nets and the neighborhood filter.) This version is often useful in discussions of semi-continuity which crop up in analysis quite often. An interesting note is that this version subsumes the sequential version by considering sequences as functions from the natural numbers as a topological subspace of the extended real line, into the space (the closure of \mathbb{N} in [−∞, ∞], the extended real number line, is \mathbb{N} ∪ {∞}).

The power set ℘(X) of a set X is a complete lattice that is ordered by set inclusion, and so the supremum and infimum of any set of subsets (in terms of set inclusion) always exist. In particular, every subset Y of X is bounded above by X and below by the empty set ∅ because ∅ ⊆ Y ⊆ X. Hence, it is possible (and sometimes useful) to consider superior and inferior limits of sequences in ℘(X) (i.e., sequences of subsets of X).

There are two common ways to define the limit of sequences of sets. In both cases, the sequence accumulates around sets of points rather than around single points, the inner limit (lim inf) and the outer limit (lim sup) always exist, and the limit exists exactly when the two agree. The difference between the two definitions involves how the topology (i.e., how to quantify separation) is defined. In fact, the second definition is identical to the first when the discrete metric is used to induce the topology on X.

A sequence of sets in a metrizable space X approaches a limiting set when the elements of each member of the sequence approach the elements of the limiting set. In particular, if (X_n) is a sequence of subsets of X, then: the outer limit, lim sup X_n, consists of those points x such that every neighbourhood of x meets X_n for infinitely many n, and the inner limit, lim inf X_n, consists of those points x such that every neighbourhood of x meets X_n for all but finitely many n. The limit lim X_n exists if and only if lim inf X_n and lim sup X_n agree, in which case lim X_n = lim sup X_n = lim inf X_n.[6] The outer and inner limits should not be confused with the set-theoretic limits superior and inferior, as the latter sets are not sensitive to the topological structure of the space. This is the definition used in measure theory and probability. Further discussion and examples from the set-theoretic point of view, as opposed to the topological point of view discussed above, are at set-theoretic limit.
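As a numerical illustration of the metric-ball characterization given earlier in this section, the following Python sketch estimates limsup and liminf of f(x) = sin(1/x) at a = 0 by taking the supremum and infimum of f over sampled points of shrinking punctured balls. The sampling scheme and the function name ball_bounds are ad hoc assumptions for illustration, not a standard API.

    import math

    def ball_bounds(f, a, eps, samples=10000):
        # sup and inf of f over sampled points of the punctured ball B(a, eps) \ {a}
        pts = [a + eps * (k / samples) for k in range(1, samples + 1)]
        vals = [f(x) for x in pts] + [f(2 * a - x) for x in pts]  # both sides of a
        return max(vals), min(vals)

    f = lambda x: math.sin(1 / x)
    for eps in (1.0, 0.1, 0.01):
        hi, lo = ball_bounds(f, 0.0, eps)
        print(eps, hi, lo)
    # The sup stays near 1 and the inf near -1 as eps shrinks:
    # limsup_{x->0} sin(1/x) = 1 and liminf_{x->0} sin(1/x) = -1.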
By this set-theoretic definition, a sequence of sets approaches a limiting set when the limiting set includes elements which are in all except finitely many sets of the sequence and does not include elements which are in all except finitely many complements of sets of the sequence. That is, this case specializes the general definition when the topology on set X is induced from the discrete metric. Specifically, for points x, y ∈ X, the discrete metric is defined by d(x, y) = 0 if x = y and d(x, y) = 1 otherwise, under which a sequence of points (x_k) converges to point x ∈ X if and only if x_k = x for all but finitely many k. Therefore, if the limit set exists it contains the points and only the points which are in all except finitely many of the sets of the sequence. Since convergence in the discrete metric is the strictest form of convergence (i.e., requires the most), this definition of a limit set is the strictest possible.

If (X_n) is a sequence of subsets of X, then the following always exist: the limit superior,

\limsup_{n\to\infty} X_n = \bigcap_{n\ge 1} \bigcup_{j\ge n} X_j,

the set of elements belonging to infinitely many of the X_n, and the limit inferior,

\liminf_{n\to\infty} X_n = \bigcup_{n\ge 1} \bigcap_{j\ge n} X_j,

the set of elements belonging to all but finitely many of the X_n. Observe that x ∈ lim sup X_n if and only if x ∉ lim inf X_n^c. In this sense, the sequence has a limit so long as every point in X either appears in all except finitely many X_n or appears in all except finitely many X_n^c.[7]

Using the standard parlance of set theory, set inclusion provides a partial ordering on the collection of all subsets of X that allows set intersection to generate a greatest lower bound and set union to generate a least upper bound. Thus, the infimum or meet of a collection of subsets is the greatest lower bound while the supremum or join is the least upper bound. In this context, the inner limit, lim inf X_n, is the largest meeting of tails of the sequence, and the outer limit, lim sup X_n, is the smallest joining of tails of the sequence; the displayed formulas above make this precise.

The following are several set convergence examples. They have been broken into sections with respect to the metric used to induce the topology on set X.

The above definitions are inadequate for many technical applications. In fact, the definitions above are specializations of the following definitions. The limit inferior of a set X ⊆ Y is the infimum of all of the limit points of the set. That is,

\liminf X := \inf \{ y ∈ Y : y is a limit point of X \}.

Similarly, the limit superior of X is the supremum of all of the limit points of the set. That is,

\limsup X := \sup \{ y ∈ Y : y is a limit point of X \}.

Note that the set X needs to be defined as a subset of a partially ordered set Y that is also a topological space in order for these definitions to make sense. Moreover, it has to be a complete lattice so that the suprema and infima always exist. In that case every set has a limit superior and a limit inferior. Also note that the limit inferior and the limit superior of a set do not have to be elements of the set.

Take a topological space X and a filter base B in that space. The set of all cluster points for that filter base is given by

\bigcap \{ \overline{B_0} : B_0 ∈ B \},

where \overline{B_0} is the closure of B_0. This is clearly a closed set and is similar to the set of limit points of a set. Assume that X is also a partially ordered set. The limit superior of the filter base B is defined as the supremum of its set of cluster points, when that supremum exists. When X has a total order, is a complete lattice and has the order topology,

\limsup B = \inf \{ \sup B_0 : B_0 ∈ B \}.

Similarly, the limit inferior of the filter base B is defined as the infimum of its set of cluster points, when that infimum exists; if X is totally ordered, is a complete lattice, and has the order topology, then

\liminf B = \sup \{ \inf B_0 : B_0 ∈ B \}.

If the limit inferior and limit superior agree, then there must be exactly one cluster point and the limit of the filter base is equal to this unique cluster point. Note that filter bases are generalizations of nets, which are generalizations of sequences.
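Returning to the set-theoretic limits above, the formulas lim sup X_n = ⋂_n ⋃_{j≥n} X_j and lim inf X_n = ⋃_n ⋂_{j≥n} X_j can be evaluated directly for simple examples. A minimal Python sketch, using an assumed periodic family of finite sets so that the tails stabilize quickly:

    from functools import reduce

    # A periodic sequence of sets: X_n = {0, 1} when n is even, {1, 2} when n is odd.
    def X(n):
        return {0, 1} if n % 2 == 0 else {1, 2}

    N = 50  # enough terms for the tails of this periodic example to stabilize

    def tail_union(n):
        return reduce(set.union, (X(j) for j in range(n, N)))

    def tail_intersection(n):
        return reduce(set.intersection, (X(j) for j in range(n, N)))

    limsup = reduce(set.intersection, (tail_union(n) for n in range(N - 2)))
    liminf = reduce(set.union, (tail_intersection(n) for n in range(N - 2)))
    print(limsup)  # {0, 1, 2}: elements belonging to infinitely many X_n
    print(liminf)  # {1}: elements belonging to all but finitely many X_n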
The filter-base definitions just given therefore yield the limit inferior and limit superior of any net (and thus any sequence) as well. For example, take topological space X and the net (x_α)_{α∈A}, where (A, ≤) is a directed set and x_α ∈ X for all α ∈ A. The filter base ("of tails") generated by this net is B defined by

B := { { x_α : α_0 ≤ α } : α_0 ∈ A }.

Therefore, the limit inferior and limit superior of the net are equal to the limit inferior and limit superior of B respectively. Similarly, for topological space X, take the sequence (x_n) where x_n ∈ X for any n ∈ \mathbb{N}. The filter base ("of tails") generated by this sequence is C defined by

C := { { x_n : n_0 ≤ n } : n_0 ∈ \mathbb{N} }.

Therefore, the limit inferior and limit superior of the sequence are equal to the limit inferior and limit superior of C respectively.
https://en.wikipedia.org/wiki/Limit_superior_and_limit_inferior
Don Quixote,[a][b] the full title being The Ingenious Gentleman Don Quixote of La Mancha,[c] is a Spanish novel by Miguel de Cervantes. The novel, originally published in two parts, in 1605 and 1615, is considered a founding work of Western literature. It is often said to be the first modern novel.[2][3] The novel has been labelled by many well-known authors as the "best novel of all time"[d] and the "best and most central work in world literature".[5][4] Don Quixote is also one of the most-translated books in the world[6] and one of the best-selling novels of all time.

The plot revolves around the adventures of a member of the lowest nobility, an hidalgo[e] from La Mancha named Alonso Quijano, who reads so many chivalric romances that he loses his mind and decides to become a knight-errant (caballero andante) to revive chivalry and serve his nation, under the name Don Quixote de la Mancha.[b] He recruits as his squire a simple farm labourer, Sancho Panza, who brings an earthy wit to Don Quixote's lofty rhetoric. In the first part of the book, Don Quixote does not see the world for what it is and prefers to imagine that he is living out a knightly story meant for the annals of all time. However, as Salvador de Madariaga pointed out in his Guía del lector del Quijote (1972 [1926]),[7] referring to "the Sanchification of Don Quixote and the Quixotization of Sancho", as "Sancho's spirit ascends from reality to illusion, Don Quixote's declines from illusion to reality".[8]

The book had a major influence on the literary community, as evidenced by direct references in Alexandre Dumas's The Three Musketeers (1844)[9] and Edmond Rostand's Cyrano de Bergerac (1897),[10] as well as the word quixotic. Mark Twain referred to the book as having "swept the world's admiration for the mediaeval chivalry-silliness out of existence".[11][f] It has been described by some as the greatest work ever written.[12][13]

For Cervantes and the readers of his day, Don Quixote was a one-volume book published in 1605, divided internally into four parts, not the first part of a two-part set. The mention in the 1605 book of further adventures yet to be told was totally conventional, did not indicate any authorial plans for a continuation, and was not taken seriously by the book's first readers.[14]

Cervantes, in a metafictional narrative, writes that the first few chapters were taken from "the archives of La Mancha", and the rest were translated from an Arabic text by the Moorish historian Cide Hamete Benengeli.

Alonso Quixano is a hidalgo nearing 50 years of age who lives in a deliberately unspecified region of La Mancha with his niece and housekeeper. While he lives a frugal life, he is full of fantasies about chivalry stemming from his obsession with chivalric romance books. Eventually, his obsession becomes madness when he decides to become a knight errant, donning an old suit of armor. He renames himself "Don Quixote", names his old workhorse "Rocinante", and designates Aldonza Lorenzo (a slaughterhouse worker with a famed hand for salting pork) his lady love, renaming her Dulcinea del Toboso. As he travels in search of adventure, he arrives at an inn that he believes to be a castle, calls the prostitutes he meets there "ladies", and demands that the innkeeper, whom he takes to be the lord of the castle, dub him a knight. The innkeeper agrees. Quixote starts the night holding vigil at the inn's horse trough, which Quixote imagines to be a chapel. He then becomes involved in a fight with muleteers who try to remove his armor from the horse trough to water their mules.
In a pretend ceremony, the innkeeper dubs him a knight to be rid of him and sends him on his way. Quixote next encounters a servant named Andres who is tied to a tree and being beaten by his master over disputed wages. Quixote orders the master to stop the beating, untie Andres, and swear to treat his servant fairly. However, the beating is resumed, and redoubled, as soon as Quixote leaves.

Quixote then chances upon traders from Toledo. He demands that they agree that Dulcinea del Toboso is the most beautiful woman in the world. One of them demands to see her picture so that he can decide for himself. Enraged, Quixote charges at them, but his horse stumbles, causing him to fall. One of the traders beats up Quixote, who is left at the side of the road until a neighboring peasant brings him back home.

While Quixote lies unconscious in his bed, his niece, the housekeeper, the parish curate, and the local barber burn most of his chivalric and other books, seeing them as the root of his madness. They seal up the library room, later telling Quixote that it was done by a wizard.

Don Quixote asks his neighbour, the poor farm labourer Sancho Panza, to be his squire, promising him a petty governorship. Sancho agrees and they sneak away at dawn. Their adventures together begin with Quixote's attack on some windmills which he believes to be ferocious giants. They next encounter two Benedictine friars and, nearby, an unrelated lady in a carriage. Quixote takes the friars to be enchanters who are holding the lady captive, knocks one of them from his horse, and is challenged by an armed Basque travelling with the company. The combat ends with the lady leaving her carriage and begging him not to harm the Basque. After a friendly encounter with some goatherds and a less friendly one with some Yanguesan porters driving Galician ponies, Quixote and Sancho enter an inn owned by Juan Palomeque, where a mix-up involving a servant girl's romantic rendezvous with another guest results in a brawl. Quixote explains to Sancho that the inn is enchanted. They decide to leave, but Quixote, following the example of the fictional knights, leaves without paying. Sancho ends up wrapped in a blanket and tossed in the air by several mischievous guests at the inn before he manages to follow.

After further adventures involving a dead body, a barber's basin that Quixote imagines as the legendary helmet of Mambrino, and a group of galley slaves, they wander into the Sierra Morena. There they encounter the dejected and mostly mad Cardenio, who relates his story. Inspired by Cardenio, Quixote decides to imitate what he has read in his chivalric romances and live like a hermit in a display of devotion to Dulcinea. He sends Sancho to deliver a letter to Dulcinea, but instead Sancho finds the barber and priest from his village. They make a plan to trick Quixote into coming home, recruiting Dorotea, a woman they discover in the forest, to pose as the Princess Micomicona, a damsel in distress. The plan works and Quixote and the group return to the inn, though Quixote is now convinced, thanks to a lie told by Sancho when asked about the letter, that Dulcinea wants to see him. At the inn, several other plots intersect and are resolved. Meanwhile, a sleepwalking Quixote does battle with some wineskins which he takes to be the giant who stole the princess Micomicona's kingdom.
An officer of the Santa Hermandad arrives with a warrant for Quixote's arrest for freeing the galley slaves, but the priest begs for the officer to have mercy on account of Quixote's insanity. The officer agrees, and Quixote is locked in a cage which he is made to think is an enchantment. He has a learned conversation with a Toledo canon he encounters by chance on the road, in which the canon expresses his scorn for untruthful chivalric books, but Don Quixote defends them. The group stops to eat and lets Quixote out of the cage; he gets into a fight with a goatherd and with a group of pilgrims, who beat him into submission, before he is finally brought home. The narrator ends the story by saying that he has found manuscripts of Quixote's further adventures.

Although the two parts are now published as a single work, Don Quixote, Part Two was a sequel published ten years after the original novel. In an early example of metafiction, Part Two indicates that several of its characters have read the first part of the novel and are thus familiar with the history and peculiarities of the two protagonists.

Don Quixote and Sancho are on their way to El Toboso to meet Dulcinea, with Sancho aware that his story about Dulcinea was a complete fabrication. They reach the city at daybreak and decide to enter at nightfall. However, a bad omen frightens Quixote into retreat and they quickly leave. Sancho is instead sent out alone by Quixote to meet Dulcinea and act as a go-between. Sancho's luck brings three peasant girls along the road, and he quickly tells Quixote that they are Dulcinea and her ladies-in-waiting and as beautiful as ever. Since Quixote only sees the peasant girls, Sancho goes on to pretend that an enchantment of some sort is at work.

A duke and duchess encounter the duo. These nobles have read Part One of the story and are themselves very fond of books of chivalry. They decide to play along for their own amusement, beginning a string of imagined adventures and practical jokes. As part of one prank, Quixote and Sancho are led to believe that the only way to release Dulcinea from her spell is for Sancho to give himself three thousand three hundred lashes. Sancho naturally resists this course of action, leading to friction with his master. Under the duke's patronage, Sancho eventually gets his promised governorship, though it is false, and he proves to be a wise and practical ruler before all ends in humiliation. Near the end, Don Quixote reluctantly sways towards sanity.

Quixote battles the Knight of the White Moon (a young man from Quixote's hometown who had earlier posed as the Knight of Mirrors) on the beach in Barcelona. Defeated, Quixote submits to prearranged chivalric terms: the vanquished must obey the will of the conqueror. He is ordered to lay down his arms and cease his acts of chivalry for a period of one year, by which time his friends and relatives hope he will be cured. On the way back home, Quixote and Sancho "resolve" the disenchantment of Dulcinea. Upon returning to his village, Quixote announces his plan to retire to the countryside as a shepherd, but his housekeeper urges him to stay at home. Soon after, he retires to his bed with a deathly illness, and later awakes from a dream, having fully become Alonso Quixano once more. Sancho tries to restore his faith and his interest in Dulcinea, but Quixano only renounces his previous ambition and apologizes for the harm he has caused.
He dictates his will, which includes a provision that his niece will be disinherited if she marries a man who reads books of chivalry. After Quixano dies, the author emphasizes that there are no more adventures to relate and that any further books about Don Quixote would be spurious.

Don Quixote, Part One contains a number of stories which do not directly involve the two main characters, but which are narrated by some of the picaresque figures encountered by the Don and Sancho during their travels. The longest and best known of these is "El Curioso Impertinente" (The Ill-Advised Curiosity), found in Part One, Book Four. This story, read to a group of travelers at an inn, tells of a Florentine nobleman, Anselmo, who becomes obsessed with testing his wife's fidelity and talks his close friend Lothario into attempting to seduce her, with disastrous results for all.

In Part Two, the author acknowledges the criticism of his digressions in Part One and promises to concentrate the narrative on the central characters (although at one point he laments that his narrative muse has been constrained in this manner). Nevertheless, Part Two contains several back narratives related by peripheral characters. Several abridged editions have been published which delete some or all of the extra tales in order to concentrate on the central narrative.[15]

The story within a story relates that, for no particular reason, Anselmo decides to test the fidelity of his wife, Camilla, and asks his friend, Lothario, to seduce her. Thinking that to be madness, Lothario reluctantly agrees, and soon reports to Anselmo that Camilla is a faithful wife. Anselmo learns that Lothario has lied and attempted no seduction. He makes Lothario promise to try in earnest and leaves town to make this easier. Lothario tries, and Camilla writes letters to her husband telling him of the attempts by Lothario and asking him to return. Anselmo makes no reply and does not return. Lothario then falls in love with Camilla, who eventually reciprocates; an affair between them ensues, but is not disclosed to Anselmo, and their affair continues after Anselmo returns.

One day, Lothario sees a man leaving Camilla's house and jealously presumes she has taken another lover. He tells Anselmo that, at last, he has been successful and arranges a time and place for Anselmo to see the seduction. Before this rendezvous, however, Lothario learns that the man was the lover of Camilla's maid. He and Camilla then contrive to deceive Anselmo further: when Anselmo watches them, she refuses Lothario, protests her love for her husband, and stabs herself lightly in the breast. Anselmo is reassured of her fidelity. The affair restarts with Anselmo none the wiser.

Later, the maid's lover is discovered by Anselmo. Fearing that Anselmo will kill her, the maid says she will tell Anselmo a secret the next day. Anselmo tells Camilla that this is to happen, and Camilla expects that her affair is to be revealed. Lothario and Camilla flee that night. The maid flees the next day. Anselmo searches for them in vain before learning from a stranger of his wife's affair. He starts to write the story, but dies of grief before he can finish. Lothario is killed in battle soon afterward and Camilla dies of grief.

The novel's farcical elements make use of punning and similar verbal playfulness.
Character-naming in Don Quixote makes ample figural use of contradiction, inversion, and irony, such as the names Rocinante[16] (a reversal) and Dulcinea (an allusion to illusion), and the word quixote itself, possibly a pun on quijada (jaw) but certainly cuixot[17][18] (Catalan: thighs), a reference to a horse's rump.[19]

As a military term, the word quijote refers to cuisses, part of a full suit of plate armour protecting the thighs. The Spanish suffix -ote denotes the augmentative; for example, grande means large, but grandote means extra large, with grotesque connotations. Following this example, Quixote would suggest "The Great Quijano", an oxymoronic play on words that makes much sense in light of the character's delusions of grandeur.[20]

Cervantes wrote his work in Early Modern Spanish, heavily borrowing from Old Spanish, the medieval form of the language. The language of Don Quixote, although still containing archaisms, is far more understandable to modern Spanish readers than is, for instance, the completely medieval Spanish of the Poema de mio Cid, a kind of Spanish that is as different from Cervantes' language as Middle English is from Modern English. The Old Castilian language was also used to show the higher class that came with being a knight errant.

In Don Quixote, there are basically two different types of Castilian: Old Castilian is spoken only by Don Quixote, while the rest of the roles speak a contemporary (late 16th century) version of Spanish. The Old Castilian of Don Quixote is a humoristic resource: he copies the language spoken in the chivalric books that drove him to madness, and many times when he talks nobody is able to understand him because his language is too old. This humorous effect is more difficult to see nowadays because the reader must be able to distinguish the two old versions of the language, but when the book was published it was much celebrated. (English translations can get some sense of the effect by having Don Quixote use King James Bible or Shakespearean English, or even Middle English.)[21][22]

In Old Castilian, the letter x represented the sound written sh in modern English, so the name was originally pronounced [kiˈʃote]. However, as Old Castilian evolved towards modern Spanish, a sound change caused it to be pronounced with a voiceless velar fricative [x] sound (like the Scots or German ch), and today the Spanish pronunciation of "Quixote" is [kiˈxote]. The original pronunciation is reflected in languages such as Asturian, Leonese, Galician, Catalan, Italian, Portuguese, Turkish and French, where it is pronounced with a "sh" or "ch" sound; the French opera Don Quichotte is one of the best-known modern examples of this pronunciation.

Today, English speakers generally attempt something close to the modern Spanish pronunciation of Quixote (Quijote), as /kiːˈhoʊti/,[1] although the traditional English spelling-based pronunciation with the value of the letter x in modern English is still sometimes used, resulting in /ˈkwɪksət/ or /ˈkwɪksoʊt/.
In Australian English, the preferred pronunciation amongst members of the educated classes was /ˈkwɪksət/ until well into the 1970s, as part of a tendency for the upper class to "anglicise its borrowing ruthlessly".[23] The traditional English rendering is preserved in the pronunciation of the adjectival form quixotic, i.e., /kwɪkˈsɒtɪk/,[24][25] defined by Merriam-Webster as the foolishly impractical pursuit of ideals, typically marked by rash and lofty romanticism.[26]

Harold Bloom says Don Quixote is the first modern novel, and that the protagonist is at war with Freud's reality principle, which accepts the necessity of dying. Bloom says that the novel has an endless range of meanings, but that a recurring theme is the human need to withstand suffering.[27]

Edith Grossman, who wrote and published a highly acclaimed[28] English translation of the novel in 2003, says that the book is mostly meant to move people into emotion using a systematic change of course, on the verge of both tragedy and comedy at the same time. Grossman has stated:

The question is that Quixote has multiple interpretations [...] and how do I deal with that in my translation. I'm going to answer your question by avoiding it [...] so when I first started reading the Quixote I thought it was the most tragic book in the world, and I would read it and weep [...] As I grew older [...] my skin grew thicker [...] and so when I was working on the translation I was actually sitting at my computer and laughing out loud. This is done [...] as Cervantes did it [...] by never letting the reader rest. You are never certain that you truly got it. Because as soon as you think you understand something, Cervantes introduces something that contradicts your premise.[29]

The novel's structure is episodic in form. The full title is indicative of the tale's object, as ingenioso (Spanish) means "quick with inventiveness",[30] marking the transition of modern literature from dramatic to thematic unity. The novel takes place over a long period of time, including many adventures united by common themes of the nature of reality, reading, and dialogue in general.

Although burlesque on the surface, the novel, especially in its second half, has served as an important thematic source not only in literature but also in much of art and music, inspiring works by Pablo Picasso and Richard Strauss. The contrast between the tall, thin, fancy-struck, and idealistic Quixote and the fat, squat, world-weary Panza is a motif echoed ever since the book's publication, and Don Quixote's imaginings are the butt of outrageous and cruel practical jokes in the novel. Even faithful and simple Sancho is forced to deceive him at certain points.

The novel is considered a satire of orthodoxy, veracity and even nationalism.[citation needed] In exploring the individualism of his characters, Cervantes helped lead literary practice beyond the narrow convention of the chivalric romance. He spoofs the chivalric romance[31] through a straightforward retelling of a series of acts that redound to the knightly virtues of the hero. The character of Don Quixote became so well known in its time that the word quixotic was quickly adopted by many languages. Characters such as Sancho Panza and Don Quixote's steed, Rocinante, are emblems of Western literary culture. The phrase "tilting at windmills" to describe an act of attacking imaginary enemies (or an act of extreme idealism) derives from an iconic scene in the book. It stands in a unique position between medieval romance and the modern novel.
The former consists of disconnected stories featuring the same characters and settings with little exploration of the inner life of even the main character. The latter are usually focused on the psychological evolution of their characters. In Part I, Quixote imposes himself on his environment. By Part II, people know about him through "having read his adventures", and so, he needs to do less to maintain his image. By his deathbed, he has regained his sanity, and is once more "Alonso Quixano the Good".

The cave of Medrano[32] (also known as the casa de Medrano) in Argamasilla de Alba has been known since the beginning of the 17th century and, according to the tradition of Argamasilla de Alba, was the prison of Miguel de Cervantes and the place where he conceived and began to write his famous work Don Quixote de la Mancha.[33][34][35][36][37][38][39]

Sources for Don Quixote include the Castilian novel Amadis de Gaula, which had enjoyed great popularity throughout the 16th century. Another prominent source, which Cervantes evidently admires more, is Tirant lo Blanch, which the priest describes in Chapter VI of Quixote as "the best book in the world." (However, the sense in which it was "best" is much debated among scholars. Since the 19th century, the passage has been called "the most difficult passage of Don Quixote".) The scene of the book burning provides a list of Cervantes's likes and dislikes about literature.

Cervantes makes a number of references to the Italian poem Orlando furioso. In chapter 10 of the first part of the novel, Don Quixote says he must take the magical helmet of Mambrino, an episode from Canto I of Orlando, and itself a reference to Matteo Maria Boiardo's Orlando innamorato.[40] The interpolated story in chapter 33 of Part Four of the First Part is a retelling of a tale from Canto 43 of Orlando, regarding a man who tests the fidelity of his wife.[41]

Another important source appears to have been Apuleius's The Golden Ass, one of the earliest known novels, a picaresque from late classical antiquity. The wineskins episode near the end of the interpolated tale "The Curious Impertinent" in chapter 35 of the first part of Don Quixote is a clear reference to Apuleius, and recent scholarship suggests that the moral philosophy and the basic trajectory of Apuleius's novel are fundamental to Cervantes' program.[42] Similarly, many of both Sancho's adventures in Part II and proverbs throughout are taken from popular Spanish and Italian folklore. Cervantes' experiences as a galley slave in Algiers also influenced Quixote.[43]

Medical theories may have also influenced Cervantes' literary process. Cervantes had familial ties to the distinguished medical community. His father, Rodrigo de Cervantes, and his great-grandfather, Juan Díaz de Torreblanca, were surgeons. Additionally, his sister, Andrea de Cervantes, was a nurse.[44] He also befriended many individuals involved in the medical field, including the medical author Francisco Díaz, an expert in urology, and the royal doctor Antonio Ponce de Santa Cruz, who served as a personal doctor to both Philip III and Philip IV of Spain.[45]

Apart from the personal relations Cervantes maintained within the medical field, Cervantes' personal life was defined by an interest in medicine. He frequently visited patients from the Hospital de Inocentes in Sevilla.[44] Furthermore, Cervantes explored medicine in his personal library.
His library contained more than 200 volumes and included books like Examen de Ingenios by Juan Huarte and Practica y teórica de cirugía by Dionisio Daza Chacón that defined medical literature and medical theories of his time.[45]

Researchers Isabel Sanchez Duque and Francisco Javier Escudero have found that Cervantes was a friend of the Villaseñor family, which was involved in a fight with Francisco de Acuña. Both sides fought disguised as medieval knights on the road from El Toboso to Miguel Esteban in 1581. They also found a person called Rodrigo Quijada, who bought the title of nobility of "hidalgo" and created diverse conflicts with the help of a squire.[46][47]

It is not certain when Cervantes began writing Part Two of Don Quixote, but he had probably not proceeded much further than Chapter LIX by late July 1614. In about September, however, a spurious Part Two, entitled Second Volume of the Ingenious Gentleman Don Quixote of La Mancha: by the Licenciado (doctorate) Alonso Fernández de Avellaneda, of Tordesillas, was published in Tarragona by an unidentified Aragonese who was an admirer of Lope de Vega, rival of Cervantes.[48] It was translated into English by William Augustus Yardley, Esquire, in two volumes in 1784.

Some modern scholars suggest that Don Quixote's fictional encounter with Avellaneda's book in Chapter 59 of Part II should not be taken as the date that Cervantes encountered it, which may have been much earlier.

Avellaneda's identity has been the subject of many theories, but there is no consensus as to who he was. In its prologue, the author gratuitously insulted Cervantes, who took offense and responded; the last half of Chapter LIX and most of the following chapters of Cervantes's Segunda Parte lend some insight into the effects upon him. Cervantes manages to work in some subtle digs at Avellaneda's own work, and in his preface to Part II comes very near to criticizing Avellaneda directly. In his introduction to The Portable Cervantes, Samuel Putnam, a noted translator of Cervantes' novel, calls Avellaneda's version "one of the most disgraceful performances in history".[49]

The second part of Cervantes' Don Quixote, finished as a direct result of the Avellaneda book, has come to be regarded by some literary critics[50] as superior to the first part, because of its greater depth of characterization, its discussions, mostly between Quixote and Sancho, on diverse subjects, and its philosophical insights. In Cervantes's Segunda Parte, Don Quixote visits a printing-house in Barcelona and finds Avellaneda's Second Part being printed there, in an early example of metafiction.[51] Don Quixote and Sancho Panza also meet one of the characters from Avellaneda's book, Don Alvaro Tarfe, and make him swear that the "other" Quixote and Sancho are impostors.[52]

Cervantes' story takes place on the plains of La Mancha, specifically the comarca of Campo de Montiel.

En un lugar de La Mancha, de cuyo nombre no quiero acordarme, no ha mucho tiempo que vivía un hidalgo de los de lanza en astillero, adarga antigua, rocín flaco y galgo corredor. (Somewhere in La Mancha, in a place whose name I do not care to remember, a gentleman lived not long ago, one of those who has a lance and ancient shield on a shelf and keeps a skinny nag and a greyhound for racing.)

The location of the village to which Cervantes alludes in the opening sentence of Don Quixote has been the subject of debate since its publication over four centuries ago.
Indeed, Cervantes deliberately omits the name of the village, giving an explanation in the final chapter:

Such was the end of the Ingenious Gentleman of La Mancha, whose village Cide Hamete would not indicate precisely, in order to leave all the towns and villages of La Mancha to contend among themselves for the right to adopt him and claim him as a son, as the seven cities of Greece contended for Homer.

In 2004, a team of academics from Complutense University, led by Francisco Parra Luna, Manuel Fernández Nieto, and Santiago Petschen Verdaguer, deduced that the village was that of Villanueva de los Infantes.[53] Their findings were published in a paper titled "'El Quijote' como un sistema de distancias/tiempos: hacia la localización del lugar de la Mancha", which was later published as a book: El enigma resuelto del Quijote. The result was replicated in two subsequent investigations: "La determinación del lugar de la Mancha como problema estadístico" and "The Kinematics of the Quixote and the Identity of the 'Place in La Mancha'".[54][55]

Translators of Don Quixote, such as John Ormsby,[56] have commented that the region of La Mancha is one of the most desertlike, unremarkable regions of Spain, the least romantic and fanciful place that one would imagine as the home of a courageous knight. On the other hand, as Borges points out:

I suspect that in Don Quixote, it does not rain a single time. The landscapes described by Cervantes have nothing in common with the landscapes of Castile: they are conventional landscapes, full of meadows, streams, and copses that belong in an Italian novel.

The story also takes place in El Toboso, where Don Quixote goes to seek Dulcinea's blessings.

Don Quixote is said to reflect the Spanish society in which Cervantes lived and wrote.[58] Spain's status as a world power was declining, and the Spanish national treasury was bankrupt due to expensive foreign wars.[58] Spanish cultural dominance was also waning as the Protestant Reformation had put the Spanish Roman Catholic Church on the defensive, which had led to the establishment of the Spanish Inquisition.[58] Meanwhile, the hidalgo class was losing relevance because of changes in Spanish society which made the high ideals of chivalry obsolete.[58]

In 2002 the Norwegian Nobel Institute conducted a study among writers from 55 countries; the majority voted Don Quixote "the greatest work of fiction ever written".[59]

The opening sentence of the book created a classic Spanish cliché with the phrase de cuyo nombre no quiero acordarme ("whose name I do not wish to recall"):[60] En un lugar de la Mancha, de cuyo nombre no quiero acordarme, no ha mucho tiempo que vivía un hidalgo de los de lanza en astillero, adarga antigua, rocín flaco y galgo corredor.[61] ("In a village of La Mancha, whose name I do not wish to recall, there lived, not very long ago, one of those gentlemen with a lance in the lance-rack, an ancient shield, a skinny old horse, and a fast greyhound.")[62]

Don Quixote, alongside its many translations, has also provided a number of idioms and expressions to the English language. Examples with their own articles include the phrase "the pot calling the kettle black" and the adjective "quixotic".[63][64]

Tilting at windmills is an English idiom that means "attacking imaginary enemies". The expression is derived from Don Quixote, and the word "tilt" in this context refers to jousting.
This phrase is sometimes also expressed as "charging at windmills" or "fighting the windmills".[65] The phrase is sometimes used to describe either confrontations where adversaries are incorrectly perceived, or courses of action that are based on misinterpreted or misapplied heroic, romantic, or idealistic justifications.[66] It may also connote an inopportune, unfounded, and vain effort against adversaries real or imagined.[67]

Dulcibella, a genus of deep-sea amphipods, was named after the character Dulcinea in the novel, following the tradition of naming amphipods after literary figures.

In July 1604, Cervantes sold the rights of El ingenioso hidalgo don Quixote de la Mancha (known as Don Quixote, Part I) to the publisher-bookseller Francisco de Robles for an unknown sum.[68] License to publish was granted in September, the printing was finished in December, and the book came out on 16 January 1605.[69][70]

The novel was an immediate success. Most of the 400 copies of the first edition were sent to the New World, with the publisher hoping to get a better price in the Americas.[71] Although most of them disappeared in a shipwreck near La Havana, approximately 70 copies reached Lima, from where they were sent to Cuzco, in the heart of the defunct Inca Empire.[71]

No sooner was it in the hands of the public than preparations were made to issue derivative (pirated) editions. In 1614 a fake second part was published by a mysterious author under the pen name Avellaneda. This author was never satisfactorily identified. This rushed Cervantes into writing and publishing a genuine second part in 1615, a year before his own death.[51] Don Quixote had been growing in favour, and its author's name was now known beyond the Pyrenees. By August 1605, there were two Madrid editions, two published in Lisbon, and one in Valencia. Publisher Francisco de Robles secured additional copyrights for Aragon and Portugal for a second edition.[72]

Sale of these publishing rights deprived Cervantes of further financial profit on Part One. In 1607, an edition was printed in Brussels. Robles, the Madrid publisher, found it necessary to meet demand with a third edition, a seventh publication in all, in 1608. Popularity of the book in Italy was such that a Milan bookseller issued an Italian edition in 1610. Yet another Brussels edition was called for in 1611.[70] Since then, numerous editions have been released; in total, the novel is believed to have sold more than 500 million copies worldwide.[73] The work has been produced in numerous editions and languages; the Cervantes Collection at the State Library of New South Wales includes over 1,100 editions. These were collected by Ben Haneman over a period of thirty years.[74]

In 1613, Cervantes published the Novelas ejemplares, dedicated to the Maecenas of the day, the Conde de Lemos. Eight and a half years after Part One had appeared came the first hint of a forthcoming Segunda Parte (Part Two). "You shall see shortly", Cervantes says, "the further exploits of Don Quixote and humours of Sancho Panza."[75] Don Quixote, Part Two, published by the same press as its predecessor, appeared late in 1615, and was quickly reprinted in Brussels and Valencia (1616) and Lisbon (1617). Parts One and Two were published as one edition in Barcelona in 1617.
Historically, Cervantes' work has been said to have "smiled Spain's chivalry away", suggesting that Don Quixote as a chivalric satire contributed to the demise of Spanish chivalry.[76]

There are many translations of the book, and it has been adapted many times in shortened versions. Many derivative editions were also written at the time, as was the custom of envious or unscrupulous writers. Seven years after the Parte Primera appeared, Don Quixote had been translated into French, German, Italian, and English, with the first French translation of Part II appearing in 1618 and the first English translation in 1620. One abridged adaptation, authored by Agustín Sánchez, runs slightly over 150 pages, cutting away about 750 pages.[77]

Thomas Shelton's English translation of the First Part appeared in 1612 while Cervantes was still alive, although there is no evidence that Shelton had met the author. Although Shelton's version is cherished by some, according to John Ormsby and Samuel Putnam it was far from satisfactory as a carrying over of Cervantes' text.[72] Shelton's translation of the novel's Second Part appeared in 1620.

Near the end of the 17th century, John Phillips, a nephew of poet John Milton, published what Putnam considered the worst English translation. The translation, as literary critics claim, was not based on Cervantes' text but mostly on a French work by Filleau de Saint-Martin and on notes which Thomas Shelton had written.

Around 1700, a version by Pierre Antoine Motteux appeared. Motteux's translation enjoyed lasting popularity; it was reprinted as the Modern Library Series edition of the novel until recent times.[78] Nonetheless, future translators would find much to fault in Motteux's version: Samuel Putnam criticized "the prevailing slapstick quality of this work, especially where Sancho Panza is involved, the obtrusion of the obscene where it is found in the original, and the slurring of difficulties through omissions or expanding upon the text". John Ormsby considered Motteux's version "worse than worthless", and denounced its "infusion of Cockney flippancy and facetiousness" into the original.[79]

The proverb "The proof of the pudding is in the eating" is widely attributed to Cervantes. The Spanish word for pudding (budín), however, does not appear in the original text but premieres in the Motteux translation.[80] In Smollett's translation of 1755, he notes that the original text reads literally "you will see when the eggs are fried", meaning "time will tell".[81]

A translation by Captain John Stevens, which revised Thomas Shelton's version, also appeared in 1700, but its publication was overshadowed by the simultaneous release of Motteux's translation.[78]

In 1742, the Charles Jervas translation appeared, posthumously. Through a printer's error, it came to be known, and is still known, as "the Jarvis translation". It was the most scholarly and accurate English translation of the novel up to that time, but future translator John Ormsby points out in his own introduction to the novel that the Jarvis translation has been criticized as being too stiff. Nevertheless, it became the most frequently reprinted translation of the novel until about 1885. Another 18th-century translation into English was that of Tobias Smollett, himself a novelist, first published in 1755. Like the Jarvis translation, it continues to be reprinted today. A translation by Alexander James Duffield appeared in 1881 and another by Henry Edward Watts in 1888.
Most modern translators take as their model the 1885 translation by John Ormsby.[82]

An expurgated children's version, under the title The Story of Don Quixote, was published in 1922 (available on Project Gutenberg). It leaves out the risqué sections as well as chapters that young readers might consider dull, and embellishes a great deal on Cervantes' original text. The title page actually gives credit to the two editors as if they were the authors, and omits any mention of Cervantes.[83]

The most widely read English-language translations of the mid-20th century are by Samuel Putnam (1949), J. M. Cohen (1950; Penguin Classics), and Walter Starkie (1957). The last English translation of the novel in the 20th century was by Burton Raffel, published in 1996. The 21st century has already seen six new translations of the novel into English. The first is by John D. Rutherford and the second by Edith Grossman. Reviewing the novel in The New York Times, Carlos Fuentes called Grossman's translation a "major literary achievement"[84] and another reviewer called it the "most transparent and least impeded among more than a dozen English translations going back to the 17th century."[85]

In 2005, the year of the novel's 400th anniversary, Tom Lathrop published a new English translation of the novel, based on a lifetime of specialized study of the novel and its history.[86] The fourth translation of the 21st century was released in 2006 by former university librarian James H. Montgomery, 26 years after he had begun it, in an attempt to "recreate the sense of the original as closely as possible, though not at the expense of Cervantes' literary style."[87]

In 2011, another translation by Gerald J. Davis appeared, self-published via Lulu.com.[88] The latest, and the sixth translation of the 21st century, is Diana de Armas Wilson's 2020 revision of Burton Raffel's translation. Reviewing 26 of the current 28 English translations as a whole in 2008, Daniel Eisenberg stated that there is no one translation ideal for every purpose, but expressed a preference for those of Putnam and the revision of Ormsby's translation by Douglas and Jones.[89]
https://en.wikipedia.org/wiki/The_proof_of_the_pudding
Multiprocessing (MP) is the use of two or more central processing units (CPUs) within a single computer system.[1][2] The term also refers to the ability of a system to support more than one processor or the ability to allocate tasks between them. There are many variations on this basic theme, and the definition of multiprocessing can vary with context, mostly as a function of how CPUs are defined (multiple cores on one die, multiple dies in one package, multiple packages in one system unit, etc.).

A multiprocessor is a computer system having two or more processing units (multiple processors), each sharing main memory and peripherals, in order to simultaneously process programs.[3][4] A 2009 textbook defined a multiprocessor system similarly, but noted that the processors may share "some or all of the system's memory and I/O facilities"; it also gave tightly coupled system as a synonymous term.[5]

At the operating system level, multiprocessing is sometimes used to refer to the execution of multiple concurrent processes in a system, with each process running on a separate CPU or core, as opposed to a single process at any one instant.[6][7] When used with this definition, multiprocessing is sometimes contrasted with multitasking, which may use just a single processor but switch it in time slices between tasks (i.e. a time-sharing system). Multiprocessing, however, means true parallel execution of multiple processes using more than one processor.[7] Multiprocessing doesn't necessarily mean that a single process or task uses more than one processor simultaneously; the term parallel processing is generally used to denote that scenario.[6] Other authors prefer to refer to the operating system techniques as multiprogramming and reserve the term multiprocessing for the hardware aspect of having more than one processor.[2][8] The remainder of this article discusses multiprocessing only in this hardware sense (a short process-level sketch is given further below).

In Flynn's taxonomy, multiprocessors as defined above are MIMD machines.[9][10] As the term "multiprocessor" normally refers to tightly coupled systems in which all processors share memory, multiprocessors are not the entire class of MIMD machines, which also contains message passing multicomputer systems.[9]

In a multiprocessing system, all CPUs may be equal, or some may be reserved for special purposes. A combination of hardware and operating system software design considerations determine the symmetry (or lack thereof) in a given system. For example, hardware or software considerations may require that only one particular CPU respond to all hardware interrupts, whereas all other work in the system may be distributed equally among CPUs; or execution of kernel-mode code may be restricted to only one particular CPU, whereas user-mode code may be executed in any combination of processors. Multiprocessing systems are often easier to design if such restrictions are imposed, but they tend to be less efficient than systems in which all CPUs are utilized.

Systems that treat all CPUs equally are called symmetric multiprocessing (SMP) systems. In systems where all CPUs are not equal, system resources may be divided in a number of ways, including asymmetric multiprocessing (ASMP), non-uniform memory access (NUMA) multiprocessing, and clustered multiprocessing.

In a master/slave multiprocessor system, the master CPU is in control of the computer and the slave CPU(s) performs assigned tasks. The CPUs can be completely different in terms of speed and architecture.
Some (or all) of the CPUs can share a common bus; each can also have a private bus (for private resources), or they may be isolated except for a common communications pathway. Likewise, the CPUs can share common RAM and/or have private RAM that the other processor(s) cannot access. The roles of master and slave can change from one CPU to another.

Two early examples of a mainframe master/slave multiprocessor are the Bull Gamma 60 and the Burroughs B5000.[11]

An early example of a master/slave multiprocessor system of microprocessors is the Tandy/Radio Shack TRS-80 Model 16 desktop computer, which came out in February 1982 and ran the multi-user/multi-tasking Xenix operating system, Microsoft's version of UNIX (called TRS-XENIX). The Model 16 has two microprocessors: an 8-bit Zilog Z80 CPU running at 4 MHz, and a 16-bit Motorola 68000 CPU running at 6 MHz. When the system is booted, the Z-80 is the master and the Xenix boot process initializes the slave 68000 and then transfers control to the 68000, whereupon the CPUs change roles and the Z-80 becomes a slave processor responsible for all I/O operations including disk, communications, printer and network, as well as the keyboard and integrated monitor, while the operating system and applications run on the 68000 CPU. The Z-80 can be used to do other tasks.

The earlier TRS-80 Model II, which was released in 1979, could also be considered a multiprocessor system as it had both a Z-80 CPU and an Intel 8021[12] microcontroller in the keyboard. The 8021 made the Model II the first desktop computer system with a separate detachable lightweight keyboard connected by a single thin flexible wire, and likely the first keyboard to use a dedicated microcontroller, both attributes that would later be copied by Apple and IBM.

In multiprocessing, the processors can be used to execute a single sequence of instructions in multiple contexts (single instruction, multiple data or SIMD, often used in vector processing), multiple sequences of instructions in a single context (multiple instruction, single data or MISD, used for redundancy in fail-safe systems and sometimes applied to describe pipelined processors or hyper-threading), or multiple sequences of instructions in multiple contexts (multiple instruction, multiple data or MIMD).

Tightly coupled multiprocessor systems contain multiple CPUs that are connected at the bus level. These CPUs may have access to a central shared memory (SMP or UMA), or may participate in a memory hierarchy with both local and shared memory (NUMA). The IBM p690 Regatta is an example of a high-end SMP system. Intel Xeon processors dominated the multiprocessor market for business PCs and were the only major x86 option until the release of AMD's Opteron range of processors in 2004. Both ranges of processors had their own onboard cache but provided access to shared memory; the Xeon processors via a common pipe and the Opteron processors via independent pathways to the system RAM.

Chip multiprocessors, also known as multi-core computing, involve more than one processor placed on a single chip and can be thought of as the most extreme form of tightly coupled multiprocessing. Mainframe systems with multiple processors are often tightly coupled.

Loosely coupled multiprocessor systems (often referred to as clusters) are based on multiple standalone, relatively low processor count commodity computers interconnected via a high-speed communication system (Gigabit Ethernet is common). A Linux Beowulf cluster is an example of a loosely coupled system.
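As the process-level sketch promised earlier: Python's standard multiprocessing module starts separate operating-system processes that the scheduler may place on distinct CPUs or cores, in contrast to time-sliced multitasking on a single processor. A minimal sketch under those assumptions (the worker function and task sizes are illustrative):

    import multiprocessing as mp

    def worker(n):
        # A CPU-bound task; with multiple processes, the OS can run
        # several of these truly in parallel on separate cores.
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        print(mp.cpu_count())  # number of CPUs/cores visible to the OS
        with mp.Pool(processes=4) as pool:
            results = pool.map(worker, [10 ** 6] * 8)  # 8 tasks over 4 processes
        print(len(results))

On a machine with at least four cores, the eight tasks complete in roughly two waves of genuinely parallel execution; on a single-core machine the same program still runs, but the processes are merely time-sliced.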
Tightly coupled systems perform better and are physically smaller than loosely coupled systems, but have historically required greater initial investments and may depreciate rapidly; nodes in a loosely coupled system are usually inexpensive commodity computers and can be recycled as independent machines upon retirement from the cluster.

Power consumption is also a consideration. Tightly coupled systems tend to be much more energy-efficient than clusters. This is because a considerable reduction in power consumption can be realized by designing components to work together from the beginning in tightly coupled systems, whereas loosely coupled systems use components that were not necessarily intended specifically for use in such systems.

Loosely coupled systems have the ability to run different operating systems or OS versions on different systems. Merging data from multiple threads or processes may incur significant overhead due to conflict resolution, data consistency, versioning, and synchronization.[13]
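To make the operating-system sense of multiprocessing concrete, here is a minimal Python sketch that farms a CPU-bound function out to several worker processes with the standard library's multiprocessing module; the prime-counting workload and the process count are illustrative choices, not part of the article.

```python
# Minimal sketch: distributing a CPU-bound task across processes.
# The workload (naive prime counting) is illustrative only.
from multiprocessing import Pool

def count_primes(limit: int) -> int:
    """Count primes below `limit` by trial division (deliberately slow)."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    limits = [50_000, 60_000, 70_000, 80_000]
    # Each list element is handled by a separate OS process, so the work
    # can run in parallel on separate CPUs/cores (true multiprocessing)
    # rather than being time-sliced on one CPU (multitasking).
    with Pool(processes=4) as pool:
        results = pool.map(count_primes, limits)
    print(dict(zip(limits, results)))
```

On a multiprocessor or multi-core machine the four calls can proceed in parallel on different CPUs; on a single CPU the same program would merely be time-sliced, which is the multitasking case contrasted above.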
https://en.wikipedia.org/wiki/Multiprocessing#Symmetric_multiprocessing
In statistics and econometrics, a distributed lag model is a model for time series data in which a regression equation is used to predict current values of a dependent variable based on both the current values of an explanatory variable and the lagged (past period) values of this explanatory variable.[1][2]

The starting point for a distributed lag model is an assumed structure of the form

$$y_t = a + w_0 x_t + w_1 x_{t-1} + w_2 x_{t-2} + \cdots + \varepsilon_t$$

or the form

$$y_t = a + w_0 x_t + w_1 x_{t-1} + \cdots + w_n x_{t-n} + \varepsilon_t,$$

where $y_t$ is the value at time period $t$ of the dependent variable $y$, $a$ is the intercept term to be estimated, $w_i$ is the lag weight (also to be estimated) placed on the value $i$ periods previously of the explanatory variable $x$, and $\varepsilon_t$ is an error term. In the first equation, the dependent variable is assumed to be affected by values of the independent variable arbitrarily far in the past, so the number of lag weights is infinite and the model is called an infinite distributed lag model. In the alternative, second, equation, there are only a finite number of lag weights, indicating an assumption that there is a maximum lag beyond which values of the independent variable do not affect the dependent variable; a model based on this assumption is called a finite distributed lag model.

In an infinite distributed lag model, an infinite number of lag weights need to be estimated; clearly this can be done only if some structure is assumed for the relation between the various lag weights, with the entire infinitude of them expressible in terms of a finite number of assumed underlying parameters. In a finite distributed lag model, the parameters could be directly estimated by ordinary least squares (assuming the number of data points sufficiently exceeds the number of lag weights); nevertheless, such estimation may give very imprecise results due to extreme multicollinearity among the various lagged values of the independent variable, so again it may be necessary to assume some structure for the relation between the various lag weights. The concept of distributed lag models easily generalizes to the context of more than one right-side explanatory variable.

The simplest way to estimate parameters associated with distributed lags is by ordinary least squares, assuming a fixed maximum lag $p$, assuming independently and identically distributed errors, and imposing no structure on the relationship of the coefficients of the lagged explanators with each other. However, multicollinearity among the lagged explanators often arises, leading to high variance of the coefficient estimates.

Structured distributed lag models come in two types: finite and infinite. Infinite distributed lags allow the value of the independent variable at a particular time to influence the dependent variable infinitely far into the future, or, to put it another way, they allow the current value of the dependent variable to be influenced by values of the independent variable that occurred infinitely long ago; but beyond some lag length the effects taper off toward zero. Finite distributed lags allow the independent variable at a particular time to influence the dependent variable for only a finite number of periods.

The most important structured finite distributed lag model is the Almon lag model.[3] This model allows the data to determine the shape of the lag structure, but the researcher must specify the maximum lag length; an incorrectly specified maximum lag length can distort the shape of the estimated lag structure as well as the cumulative effect of the independent variable.
The Almon lag assumes that $k+1$ lag weights are related to $n+1$ linearly estimable underlying parameters ($n < k$) $a_j$ according to

$$w_i = \sum_{j=0}^{n} a_j i^j \quad \text{for } i = 0, \dots, k.$$

The most common type of structured infinite distributed lag model is the geometric lag, also known as the Koyck lag. In this lag structure, the weights (magnitudes of influence) of the lagged independent variable values decline exponentially with the length of the lag; while the shape of the lag structure is thus fully imposed by the choice of this technique, the rate of decline as well as the overall magnitude of effect are determined by the data. Specification of the regression equation is very straightforward: one includes as explanators (right-hand-side variables in the regression) the one-period-lagged value of the dependent variable and the current value of the independent variable:

$$y_t = a + \lambda y_{t-1} + b x_t + \varepsilon_t,$$

where $0 \le \lambda < 1$. In this model, the short-run (same-period) effect of a unit change in the independent variable is the value of $b$, while the long-run (cumulative) effect of a sustained unit change in the independent variable can be shown to be

$$b + \lambda b + \lambda^2 b + \cdots = \frac{b}{1 - \lambda}.$$

Other infinite distributed lag models have been proposed to allow the data to determine the shape of the lag structure. The polynomial inverse lag[4][5] assumes that the lag weights are related to underlying, linearly estimable parameters $a_j$, for $i = 0, \dots, \infty$. The geometric combination lag[6] assumes that the lag weights are related to underlying, linearly estimable parameters $a_j$ in one of two ways, for $i = 0, \dots, \infty$. The gamma lag[7] and the rational lag[8] are other infinite distributed lag structures.

Distributed lag models were introduced into health-related studies in 2000 by Schwartz[9] and in 2002 by Zanobetti and Schwartz.[10] The Bayesian version of the model was suggested by Welty in 2007.[11] Gasparrini introduced more flexible statistical models in 2010[12] that are capable of describing additional time dimensions of the exposure-response relationship, and developed a family of distributed lag non-linear models (DLNM), a modeling framework that can simultaneously represent non-linear exposure-response dependencies and delayed effects.[13]

The distributed lag model concept was first applied to longitudinal cohort research by Hsu in 2015,[14] studying the relationship between PM2.5 and child asthma; more complicated distributed lag methods designed to accommodate longitudinal cohort analysis, such as the Bayesian Distributed Lag Interaction Model[15] by Wilson, have subsequently been developed to answer similar research questions.
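As a concrete illustration of the estimation issues discussed above, the following sketch simulates a finite distributed lag and estimates the lag weights both by unrestricted OLS and under an Almon polynomial restriction; the sample size, lag length, polynomial degree, and "true" weights are all invented for the example.

```python
# Sketch: estimating a finite distributed lag model by OLS,
# unrestricted vs. an Almon polynomial restriction on the weights.
import numpy as np

rng = np.random.default_rng(0)
T, k, deg = 500, 4, 2                         # sample size, max lag, Almon degree
true_w = np.array([0.8, 0.6, 0.4, 0.2, 0.1])  # illustrative lag weights

x = rng.normal(size=T)
y = np.zeros(T)
for t in range(k, T):
    y[t] = 1.0 + true_w @ x[t - k : t + 1][::-1] + 0.5 * rng.normal()

# Design matrix: intercept plus columns x_t, x_{t-1}, ..., x_{t-k}.
X = np.column_stack([x[k - i : T - i] for i in range(k + 1)])
X = np.column_stack([np.ones(T - k), X])
w_ols = np.linalg.lstsq(X, y[k:], rcond=None)[0][1:]    # unrestricted OLS

# Almon restriction: w_i = sum_j a_j * i**j  ->  regress on transformed columns.
H = np.array([[i ** j for j in range(deg + 1)] for i in range(k + 1)])
Z = np.column_stack([np.ones(T - k), X[:, 1:] @ H])
a_hat = np.linalg.lstsq(Z, y[k:], rcond=None)[0][1:]
w_almon = H @ a_hat                                      # implied lag weights

print("unrestricted:", np.round(w_ols, 2))
print("Almon:       ", np.round(w_almon, 2))
```

The Almon estimator fits only the low-order polynomial coefficients and maps them back to lag weights, which is how the restriction trades a little flexibility for much lower variance under multicollinearity.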
https://en.wikipedia.org/wiki/Distributed_lag
In mathematics, an orthogonal polynomial sequence is a family of polynomials such that any two different polynomials in the sequence are orthogonal to each other under some inner product. The most widely used orthogonal polynomials are the classical orthogonal polynomials, consisting of the Hermite polynomials, the Laguerre polynomials and the Jacobi polynomials. The Gegenbauer polynomials form the most important class of Jacobi polynomials; they include the Chebyshev polynomials and the Legendre polynomials as special cases. These are frequently given by Rodrigues' formula.

The field of orthogonal polynomials developed in the late 19th century from a study of continued fractions by P. L. Chebyshev and was pursued by A. A. Markov and T. J. Stieltjes. They appear in a wide variety of fields: numerical analysis (quadrature rules), probability theory, representation theory (of Lie groups, quantum groups, and related objects), enumerative combinatorics, algebraic combinatorics, mathematical physics (the theory of random matrices, integrable systems, etc.), and number theory. Some of the mathematicians who have worked on orthogonal polynomials include Gábor Szegő, Sergei Bernstein, Naum Akhiezer, Arthur Erdélyi, Yakov Geronimus, Wolfgang Hahn, Theodore Seio Chihara, Mourad Ismail, Waleed Al-Salam, Richard Askey, and Rehuel Lobatto.

Given any non-decreasing function $\alpha$ on the real numbers, we can define the Lebesgue–Stieltjes integral $\int f(x)\,d\alpha(x)$ of a function $f$. If this integral is finite for all polynomials $f$, we can define an inner product on pairs of polynomials $f$ and $g$ by

$$\langle f, g \rangle = \int f(x) g(x)\,d\alpha(x).$$

This operation is a positive-semidefinite inner product on the vector space of all polynomials, and is positive definite if the function $\alpha$ has an infinite number of points of growth. It induces a notion of orthogonality in the usual way, namely that two polynomials are orthogonal if their inner product is zero. Then the sequence $(P_n)_{n=0}^{\infty}$ of orthogonal polynomials is defined by the relations

$$\deg P_n = n, \qquad \langle P_m, P_n \rangle = 0 \quad \text{for } m \neq n.$$

In other words, the sequence is obtained from the sequence of monomials $1, x, x^2, \dots$ by the Gram–Schmidt process with respect to this inner product. Usually the sequence is required to be orthonormal, namely $\langle P_n, P_n \rangle = 1$; however, other normalisations are sometimes used.

Sometimes we have $d\alpha(x) = W(x)\,dx$, where $W : [x_1, x_2] \to \mathbb{R}$ is a non-negative function with support on some interval $[x_1, x_2]$ in the real line (where $x_1 = -\infty$ and $x_2 = \infty$ are allowed). Such a $W$ is called a weight function.[1] Then the inner product is given by

$$\langle f, g \rangle = \int_{x_1}^{x_2} f(x) g(x) W(x)\,dx.$$

However, there are many examples of orthogonal polynomials where the measure $d\alpha(x)$ has points with non-zero measure where the function $\alpha$ is discontinuous, so it cannot be given by a weight function $W$ as above.

The most commonly used orthogonal polynomials are orthogonal for a measure with support in a real interval.

Discrete orthogonal polynomials are orthogonal with respect to some discrete measure. Sometimes the measure has finite support, in which case the family of orthogonal polynomials is finite, rather than an infinite sequence.
The Racah polynomials are examples of discrete orthogonal polynomials, and include as special cases the Hahn polynomials and dual Hahn polynomials, which in turn include as special cases the Meixner polynomials, Krawtchouk polynomials, and Charlier polynomials.

Meixner classified all the orthogonal Sheffer sequences: there are only Hermite, Laguerre, Charlier, Meixner, and Meixner–Pollaczek. In some sense Krawtchouk should be on this list too, but they are a finite sequence. These six families correspond to the NEF-QVFs and are martingale polynomials for certain Lévy processes.

Sieved orthogonal polynomials, such as the sieved ultraspherical polynomials, sieved Jacobi polynomials, and sieved Pollaczek polynomials, have modified recurrence relations.

One can also consider orthogonal polynomials for some curve in the complex plane. The most important case (other than real intervals) is when the curve is the unit circle, giving orthogonal polynomials on the unit circle, such as the Rogers–Szegő polynomials.

There are some families of orthogonal polynomials that are orthogonal on plane regions such as triangles or disks. They can sometimes be written in terms of Jacobi polynomials. For example, Zernike polynomials are orthogonal on the unit disk.

The advantage of orthogonality between different orders of Hermite polynomials is applied to the generalized frequency division multiplexing (GFDM) structure: more than one symbol can be carried in each grid of the time-frequency lattice.[2]

Orthogonal polynomials of one variable defined by a non-negative measure on the real line have the following properties.

The orthogonal polynomials $P_n$ can be expressed in terms of the moments $m_k = \int x^k \, d\alpha(x)$ as follows:

$$P_n(x) = c_n \det \begin{bmatrix} m_0 & m_1 & m_2 & \cdots & m_n \\ m_1 & m_2 & m_3 & \cdots & m_{n+1} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ m_{n-1} & m_n & m_{n+1} & \cdots & m_{2n-1} \\ 1 & x & x^2 & \cdots & x^n \end{bmatrix},$$

where the constants $c_n$ are arbitrary (they depend on the normalization of $P_n$). This comes directly from applying the Gram–Schmidt process to the monomials, imposing each polynomial to be orthogonal with respect to the previous ones. For example, orthogonality with $P_0$ prescribes that $P_1$ must have the form

$$P_1(x) = c_1 \left( x - \frac{\langle P_0, x \rangle P_0}{\langle P_0, P_0 \rangle} \right) = c_1 (x - m_1),$$

which can be seen to be consistent with the previously given expression with the determinant.

The polynomials $P_n$ satisfy a recurrence relation of the form

$$P_n(x) = (A_n x + B_n) P_{n-1}(x) + C_n P_{n-2}(x),$$

where $A_n$ is not 0. The converse is also true; see Favard's theorem.

If the measure $d\alpha$ is supported on an interval $[a, b]$, all the zeros of $P_n$ lie in $[a, b]$. Moreover, the zeros have the following interlacing property: if $m < n$, there is a zero of $P_n$ between any two zeros of $P_m$. Electrostatic interpretations of the zeros can be given.[citation needed]

From the 1980s, with the work of X. G. Viennot, J. Labelle, Y.-N. Yeh, D. Foata, and others, combinatorial interpretations were found for all the classical orthogonal polynomials.[3]

The Macdonald polynomials are orthogonal polynomials in several variables, depending on the choice of an affine root system. They include many other families of multivariable orthogonal polynomials as special cases, including the Jack polynomials, the Hall–Littlewood polynomials, the Heckman–Opdam polynomials, and the Koornwinder polynomials. The Askey–Wilson polynomials are the special case of Macdonald polynomials for a certain non-reduced root system of rank 1.

Multiple orthogonal polynomials are polynomials in one variable that are orthogonal with respect to a finite family of measures.

Sobolev orthogonal polynomials are orthogonal with respect to a Sobolev inner product, i.e. an inner product with derivatives.
Including derivatives has big consequences for the polynomials; in general they no longer share some of the nice features of the classical orthogonal polynomials.

Orthogonal polynomials with matrices have either coefficients that are matrices or a matrix indeterminate: in one popular case the coefficients $\{a_i\}$ are matrices, in the other the indeterminate $x$ is a matrix.

Quantum polynomials or q-polynomials are the q-analogs of orthogonal polynomials.
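A minimal numerical sketch of the Gram–Schmidt construction described above, for the weight function W(x) = 1 on [−1, 1] (an illustrative choice that yields monic multiples of the Legendre polynomials):

```python
# Sketch: Gram-Schmidt on the monomials 1, x, x^2, ... for the inner
# product <f, g> = integral of f(x)*g(x) over [-1, 1], yielding (monic
# multiples of) the Legendre polynomials.
import numpy as np
from numpy.polynomial import Polynomial

def inner(f: Polynomial, g: Polynomial, a: float = -1.0, b: float = 1.0) -> float:
    """<f, g> for weight W(x) = 1 on [a, b], via exact integration."""
    h = (f * g).integ()
    return h(b) - h(a)

def orthogonal_basis(n: int) -> list:
    basis = []
    for k in range(n + 1):
        p = Polynomial([0] * k + [1])          # the monomial x^k
        for q in basis:                        # subtract projections
            p -= (inner(p, q) / inner(q, q)) * q
        basis.append(p)
    return basis

ps = orthogonal_basis(3)
for p in ps:
    print(np.round(p.coef, 4))                 # 1, x, x^2 - 1/3, x^3 - (3/5)x
# Orthogonality check: all off-diagonal inner products vanish.
print(all(abs(inner(ps[i], ps[j])) < 1e-12
          for i in range(4) for j in range(i)))
```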
https://en.wikipedia.org/wiki/Orthogonal_polynomials
In physics, a Langevin equation (named after Paul Langevin) is a stochastic differential equation describing how a system evolves when subjected to a combination of deterministic and fluctuating ("random") forces. The dependent variables in a Langevin equation typically are collective (macroscopic) variables changing only slowly in comparison to the other (microscopic) variables of the system. The fast (microscopic) variables are responsible for the stochastic nature of the Langevin equation. One application is to Brownian motion, which models the fluctuating motion of a small particle in a fluid.

The original Langevin equation[1][2] describes Brownian motion, the apparently random movement of a particle in a fluid due to collisions with the molecules of the fluid,

$$m \frac{d\mathbf{v}}{dt} = -\lambda \mathbf{v} + \boldsymbol{\eta}(t).$$

Here, $\mathbf{v}$ is the velocity of the particle, $\lambda$ is its damping coefficient, and $m$ is its mass. The force acting on the particle is written as a sum of a viscous force proportional to the particle's velocity (Stokes' law), and a noise term $\boldsymbol{\eta}(t)$ representing the effect of the collisions with the molecules of the fluid. The force $\boldsymbol{\eta}(t)$ has a Gaussian probability distribution with correlation function

$$\langle \eta_i(t) \eta_j(t') \rangle = 2 \lambda k_{\text{B}} T \, \delta_{i,j} \, \delta(t - t'),$$

where $k_{\text{B}}$ is the Boltzmann constant, $T$ is the temperature and $\eta_i(t)$ is the $i$-th component of the vector $\boldsymbol{\eta}(t)$. The $\delta$-function form of the time correlation means that the force at a time $t$ is uncorrelated with the force at any other time. This is an approximation: the actual random force has a nonzero correlation time corresponding to the collision time of the molecules. However, the Langevin equation is used to describe the motion of a "macroscopic" particle at a much longer time scale, and in this limit the $\delta$-correlation and the Langevin equation become virtually exact. Another common feature of the Langevin equation is the occurrence of the damping coefficient $\lambda$ in the correlation function of the random force, which in an equilibrium system is an expression of the Einstein relation.

A strictly $\delta$-correlated fluctuating force $\boldsymbol{\eta}(t)$ is not a function in the usual mathematical sense and even the derivative $d\mathbf{v}/dt$ is not defined in this limit. This problem disappears when the Langevin equation is written in integral form

$$m \mathbf{v} = \int^{t} \left( -\lambda \mathbf{v} + \boldsymbol{\eta}(t) \right) dt.$$

Therefore, the differential form is only an abbreviation for its time integral.
The general mathematical term for equations of this type is "stochastic differential equation".[3]

Another mathematical ambiguity occurs for Langevin equations with multiplicative noise, which refers to noise terms that are multiplied by a non-constant function of the dependent variables, e.g. $|\mathbf{v}(t)| \boldsymbol{\eta}(t)$. If a multiplicative noise is intrinsic to the system, its definition is ambiguous, as it is equally valid to interpret it according to the Stratonovich or Itô scheme (see Itô calculus). Nevertheless, physical observables are independent of the interpretation, provided the latter is applied consistently when manipulating the equation. This is necessary because the symbolic rules of calculus differ depending on the interpretation scheme. If the noise is external to the system, the appropriate interpretation is the Stratonovich one.[4][5]

There is a formal derivation of a generic Langevin equation from classical mechanics.[6][7] This generic equation plays a central role in the theory of critical dynamics[8] and other areas of nonequilibrium statistical mechanics. The equation for Brownian motion above is a special case.

An essential step in the derivation is the division of the degrees of freedom into the categories slow and fast. For example, local thermodynamic equilibrium in a liquid is reached within a few collision times, but it takes much longer for densities of conserved quantities like mass and energy to relax to equilibrium. Thus, densities of conserved quantities, and in particular their long-wavelength components, are slow-variable candidates. This division can be expressed formally with the Zwanzig projection operator.[9] Nevertheless, the derivation is not completely rigorous from a mathematical physics perspective because it relies on assumptions that lack rigorous proof, and instead are justified only as plausible approximations of physical systems.

Let $A = \{A_i\}$ denote the slow variables. The generic Langevin equation then reads

$$\frac{dA_i}{dt} = k_{\text{B}} T \sum_j [A_i, A_j] \frac{d\mathcal{H}}{dA_j} - \sum_j \lambda_{i,j}(A) \frac{d\mathcal{H}}{dA_j} + \sum_j \frac{d\lambda_{i,j}(A)}{dA_j} + \eta_i(t).$$

The fluctuating force $\eta_i(t)$ obeys a Gaussian probability distribution with correlation function

$$\langle \eta_i(t) \eta_j(t') \rangle = 2 \lambda_{i,j}(A) \, \delta(t - t').$$

This implies the Onsager reciprocity relation $\lambda_{i,j} = \lambda_{j,i}$ for the damping coefficients $\lambda$. The dependence $d\lambda_{i,j}/dA_j$ of $\lambda$ on $A$ is negligible in most cases. The symbol $\mathcal{H} = -\ln(p_0)$ denotes the Hamiltonian of the system, where $p_0(A)$ is the equilibrium probability distribution of the variables $A$.
Finally, $[A_i, A_j]$ is the projection of the Poisson bracket of the slow variables $A_i$ and $A_j$ onto the space of slow variables.

In the Brownian motion case one would have $\mathcal{H} = \mathbf{p}^2 / (2 m k_{\text{B}} T)$, $A = \{\mathbf{p}\}$ or $A = \{\mathbf{x}, \mathbf{p}\}$ and $[x_i, p_j] = \delta_{i,j}$. The equation of motion $d\mathbf{x}/dt = \mathbf{p}/m$ for $\mathbf{x}$ is exact: there is no fluctuating force $\eta_x$ and no damping coefficient $\lambda_{x,p}$.

There is a close analogy between the paradigmatic Brownian particle discussed above and Johnson noise, the electric voltage generated by thermal fluctuations in a resistor.[10] The diagram at the right shows an electric circuit consisting of a resistance R and a capacitance C. The slow variable is the voltage U between the ends of the resistor. The Hamiltonian reads $\mathcal{H} = E / k_{\text{B}} T = C U^2 / (2 k_{\text{B}} T)$, and the Langevin equation becomes

$$\frac{dU}{dt} = -\frac{U}{RC} + \eta(t), \qquad \langle \eta(t) \eta(t') \rangle = \frac{2 k_{\text{B}} T}{R C^2} \delta(t - t').$$

This equation may be used to determine the correlation function

$$\langle U(t) U(t') \rangle = \frac{k_{\text{B}} T}{C} \exp\left( -\frac{|t - t'|}{RC} \right) \approx 2 R k_{\text{B}} T \, \delta(t - t'),$$

which becomes white noise (Johnson noise) when the capacitance C becomes negligibly small.

The dynamics of the order parameter $\varphi$ of a second-order phase transition slows down near the critical point and can be described with a Langevin equation.[8] The simplest case is the universality class "model A" with a non-conserved scalar order parameter, realized for instance in axial ferromagnets,

$$\begin{aligned} \frac{\partial}{\partial t} \varphi(\mathbf{x}, t) &= -\lambda \frac{\delta \mathcal{H}}{\delta \varphi} + \eta(\mathbf{x}, t), \\ \mathcal{H} &= \int d^d x \left[ \frac{1}{2} r_0 \varphi^2 + u \varphi^4 + \frac{1}{2} (\nabla \varphi)^2 \right], \\ \langle \eta(\mathbf{x}, t) \, \eta(\mathbf{x}', t') \rangle &= 2 \lambda \, \delta(\mathbf{x} - \mathbf{x}') \, \delta(t - t'). \end{aligned}$$

Other universality classes (the nomenclature is "model A", ..., "model J") contain a diffusing order parameter, order parameters with several components, other critical variables and/or contributions from Poisson brackets.[8]

$$m \frac{dv}{dt} = -\lambda v + \eta(t) - kx$$

A particle in a fluid is described by a Langevin equation with a potential energy function, a damping force, and thermal fluctuations given by the fluctuation-dissipation theorem. If the potential is quadratic, then the constant-energy curves are ellipses, as shown in the figure.
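A minimal Euler–Maruyama sketch of this harmonic-potential Langevin equation is given below; the parameter values are arbitrary illustrative choices, and the noise amplitude follows the fluctuation-dissipation relation quoted earlier.

```python
# Sketch: Euler-Maruyama integration of m dv/dt = -lam*v - k*x + eta(t),
# with <eta(t) eta(t')> = 2*lam*kBT*delta(t-t') (fluctuation-dissipation).
import numpy as np

rng = np.random.default_rng(1)
m, lam, k, kBT = 1.0, 0.5, 1.0, 1.0      # illustrative parameters
dt, n_steps = 1e-3, 200_000

x, v = 1.0, 0.0
xs = np.empty(n_steps)
vs = np.empty(n_steps)
for i in range(n_steps):
    # Discretized white noise: variance 2*lam*kBT/dt per step.
    eta = np.sqrt(2.0 * lam * kBT / dt) * rng.normal()
    v += (dt / m) * (-lam * v - k * x + eta)
    x += dt * v
    xs[i], vs[i] = x, v

# Late-time moments should approach equipartition: <v^2> = kBT/m, <x^2> = kBT/k.
print(np.mean(vs[n_steps // 2:] ** 2), np.mean(xs[n_steps // 2:] ** 2))
```

The late-time averages printed at the end should approach the equipartition values kBT/m and kBT/k, which is the equilibrium behavior described next.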
If there is dissipation but no thermal noise, a particle continually loses energy to the environment, and its time-dependent phase portrait (velocity vs. position) corresponds to an inward spiral toward 0 velocity. By contrast, thermal fluctuations continually add energy to the particle and prevent it from reaching exactly 0 velocity. Rather, the initial ensemble of stochastic oscillators approaches a steady state in which the velocity and position are distributed according to the Maxwell–Boltzmann distribution. In the plot below (figure 2), the long-time velocity distribution (blue) and position distribution (orange) in a harmonic potential ($U = \tfrac{1}{2} k x^2$) are plotted with the Boltzmann probabilities for velocity (green) and position (red). In particular, the late-time behavior depicts thermal equilibrium.

Consider a free particle of mass $m$ with equation of motion described by

$$m \frac{d\mathbf{v}}{dt} = -\frac{\mathbf{v}}{\mu} + \boldsymbol{\eta}(t),$$

where $\mathbf{v} = d\mathbf{r}/dt$ is the particle velocity, $\mu$ is the particle mobility, and $\boldsymbol{\eta}(t) = m \mathbf{a}(t)$ is a rapidly fluctuating force whose time-average vanishes over a characteristic timescale $t_c$ of particle collisions, i.e. $\overline{\boldsymbol{\eta}(t)} = 0$. The general solution to the equation of motion is

$$\mathbf{v}(t) = \mathbf{v}(0) e^{-t/\tau} + \int_0^t \mathbf{a}(t') e^{-(t - t')/\tau} \, dt',$$

where $\tau = m\mu$ is the correlation time of the noise term. It can also be shown that the autocorrelation function of the particle velocity $\mathbf{v}$ is given by[11]

$$\begin{aligned} R_{vv}(t_1, t_2) &\equiv \langle \mathbf{v}(t_1) \cdot \mathbf{v}(t_2) \rangle \\ &= v^2(0) e^{-(t_1 + t_2)/\tau} + \int_0^{t_1} \int_0^{t_2} R_{aa}(t_1', t_2') e^{-(t_1 + t_2 - t_1' - t_2')/\tau} \, dt_1' \, dt_2' \\ &\simeq v^2(0) e^{-|t_2 - t_1|/\tau} + \left[ \frac{3 k_{\text{B}} T}{m} - v^2(0) \right] \left[ e^{-|t_2 - t_1|/\tau} - e^{-(t_1 + t_2)/\tau} \right], \end{aligned}$$

where we have used the property that the variables $\mathbf{a}(t_1')$ and $\mathbf{a}(t_2')$ become uncorrelated for time separations $t_2' - t_1' \gg t_c$. Besides, the value of $\lim_{t \to \infty} \langle v^2(t) \rangle = \lim_{t \to \infty} R_{vv}(t, t)$ is set to be equal to $3 k_{\text{B}} T / m$ such that it obeys the equipartition theorem. If the system is initially at thermal equilibrium already, with $v^2(0) = 3 k_{\text{B}} T / m$, then $\langle v^2(t) \rangle = 3 k_{\text{B}} T / m$ for all $t$, meaning that the system remains at equilibrium at all times.

The velocity $\mathbf{v}(t)$ of the Brownian particle can be integrated to yield its trajectory $\mathbf{r}(t)$.
If it is initially located at the origin with probability 1, then the result is

$$\mathbf{r}(t) = \mathbf{v}(0) \tau \left(1 - e^{-t/\tau}\right) + \tau \int_0^t \mathbf{a}(t') \left[1 - e^{-(t - t')/\tau}\right] dt'.$$

Hence, the average displacement $\langle \mathbf{r}(t) \rangle = \mathbf{v}(0) \tau \left(1 - e^{-t/\tau}\right)$ asymptotes to $\mathbf{v}(0) \tau$ as the system relaxes. The mean squared displacement can be determined similarly:

$$\langle r^2(t) \rangle = v^2(0) \tau^2 \left(1 - e^{-t/\tau}\right)^2 - \frac{3 k_{\text{B}} T}{m} \tau^2 \left(1 - e^{-t/\tau}\right) \left(3 - e^{-t/\tau}\right) + \frac{6 k_{\text{B}} T}{m} \tau t.$$

This expression implies that $\langle r^2(t \ll \tau) \rangle \simeq v^2(0) t^2$, indicating that the motion of Brownian particles at timescales much shorter than the relaxation time $\tau$ of the system is (approximately) time-reversal invariant. On the other hand, $\langle r^2(t \gg \tau) \rangle \simeq 6 k_{\text{B}} T \tau t / m = 6 \mu k_{\text{B}} T t = 6 D t$, which indicates an irreversible, dissipative process.

If the external potential is conservative and the noise term derives from a reservoir in thermal equilibrium, then the long-time solution to the Langevin equation must reduce to the Boltzmann distribution, which is the probability distribution function for particles in thermal equilibrium. In the special case of overdamped dynamics, the inertia of the particle is negligible in comparison to the damping force, and the trajectory $x(t)$ is described by the overdamped Langevin equation

$$\lambda \frac{dx}{dt} = -\frac{\partial V(x)}{\partial x} + \eta(t) \equiv -\frac{\partial V(x)}{\partial x} + \sqrt{2 \lambda k_{\text{B}} T} \frac{dB_t}{dt},$$

where $\lambda$ is the damping constant. The term $\eta(t)$ is white noise, characterized by $\langle \eta(t) \eta(t') \rangle = 2 k_{\text{B}} T \lambda \, \delta(t - t')$ (formally, the Wiener process). One way to solve this equation is to introduce a test function $f$ and calculate its average.
The average of $f(x(t))$ should be time-independent for finite $x(t)$, leading to

$$\frac{d}{dt} \langle f(x(t)) \rangle = 0.$$

Itô's lemma for the Itô drift-diffusion process $dX_t = \mu_t \, dt + \sigma_t \, dB_t$ says that the differential of a twice-differentiable function $f(t, x)$ is given by

$$df = \left( \frac{\partial f}{\partial t} + \mu_t \frac{\partial f}{\partial x} + \frac{\sigma_t^2}{2} \frac{\partial^2 f}{\partial x^2} \right) dt + \sigma_t \frac{\partial f}{\partial x} \, dB_t.$$

Applying this to the calculation of $\langle f(x(t)) \rangle$ gives

$$\left\langle -f'(x) \frac{\partial V}{\partial x} + k_{\text{B}} T f''(x) \right\rangle = 0.$$

This average can be written using the probability density function $p(x)$:

$$\int \left( -f'(x) \frac{\partial V}{\partial x} p(x) + k_{\text{B}} T f''(x) p(x) \right) dx = \int \left( -f'(x) \frac{\partial V}{\partial x} p(x) - k_{\text{B}} T f'(x) p'(x) \right) dx = 0,$$

where the second term was integrated by parts (hence the negative sign). Since this is true for arbitrary functions $f$, it follows that

$$\frac{\partial V}{\partial x} p(x) + k_{\text{B}} T p'(x) = 0,$$

thus recovering the Boltzmann distribution

$$p(x) \propto \exp\left( -\frac{V(x)}{k_{\text{B}} T} \right).$$

In some situations, one is primarily interested in the noise-averaged behavior of the Langevin equation, as opposed to the solution for particular realizations of the noise. This section describes techniques for obtaining this averaged behavior that are distinct from, but also equivalent to, the stochastic calculus inherent in the Langevin equation.

A Fokker–Planck equation is a deterministic equation for the time-dependent probability density $P(A, t)$ of stochastic variables $A$. The Fokker–Planck equation corresponding to the generic Langevin equation described in this article is the following:[12]

$$\frac{\partial P(A, t)}{\partial t} = \sum_{i,j} \frac{\partial}{\partial A_i} \left( -k_{\text{B}} T [A_i, A_j] \frac{\partial \mathcal{H}}{\partial A_j} + \lambda_{i,j} \frac{\partial \mathcal{H}}{\partial A_j} + \lambda_{i,j} \frac{\partial}{\partial A_j} \right) P(A, t).$$

The equilibrium distribution $P(A) = p_0(A) = \text{const} \times \exp(-\mathcal{H})$ is a stationary solution.
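That the Boltzmann distribution is the stationary solution can also be checked numerically; the sketch below integrates the overdamped Langevin equation for a double-well potential (an illustrative choice) and compares the histogram of visited positions with exp(−V/kBT).

```python
# Sketch: overdamped Langevin dynamics lam*dx/dt = -V'(x) + eta(t)
# should sample the Boltzmann distribution p(x) ~ exp(-V(x)/kBT).
import numpy as np

rng = np.random.default_rng(2)
lam, kBT = 1.0, 0.5                      # illustrative parameters
dt, n_steps = 1e-3, 500_000

V = lambda x: (x**2 - 1.0) ** 2          # double-well potential (illustrative)
dV = lambda x: 4.0 * x * (x**2 - 1.0)

x = 0.0
samples = np.empty(n_steps)
for i in range(n_steps):
    noise = np.sqrt(2.0 * kBT * dt / lam) * rng.normal()
    x += -dV(x) * dt / lam + noise
    samples[i] = x

# Compare the empirical histogram to the normalized Boltzmann weight.
hist, edges = np.histogram(samples, bins=50, range=(-2, 2), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
boltz = np.exp(-V(centers) / kBT)
boltz /= boltz.sum() * (centers[1] - centers[0])
print(np.max(np.abs(hist - boltz)))      # small for long runs
```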
The Fokker–Planck equation for an underdamped Brownian particle is called the Klein–Kramers equation.[13][14] If the Langevin equations are written as

$$\begin{aligned} \dot{\mathbf{r}} &= \frac{\mathbf{p}}{m}, \\ \dot{\mathbf{p}} &= -\xi \mathbf{p} - \nabla V(\mathbf{r}) + \sqrt{2 m \xi k_{\text{B}} T} \, \boldsymbol{\eta}(t), \qquad \langle \boldsymbol{\eta}^{\mathrm{T}}(t) \boldsymbol{\eta}(t') \rangle = \mathbf{I} \, \delta(t - t'), \end{aligned}$$

where $\mathbf{p}$ is the momentum, then the corresponding Fokker–Planck equation is

$$\frac{\partial f}{\partial t} + \frac{1}{m} \mathbf{p} \cdot \nabla_{\mathbf{r}} f = \xi \nabla_{\mathbf{p}} \cdot (\mathbf{p} f) + \nabla_{\mathbf{p}} \cdot (\nabla V(\mathbf{r}) f) + m \xi k_{\text{B}} T \, \nabla_{\mathbf{p}}^2 f.$$

Here $\nabla_{\mathbf{r}}$ and $\nabla_{\mathbf{p}}$ are the gradient operators with respect to $\mathbf{r}$ and $\mathbf{p}$, and $\nabla_{\mathbf{p}}^2$ is the Laplacian with respect to $\mathbf{p}$.

In $d$-dimensional free space, corresponding to $V(\mathbf{r}) = \text{constant}$ on $\mathbb{R}^d$, this equation can be solved using Fourier transforms. If the particle is initialized at $t = 0$ with position $\mathbf{r}'$ and momentum $\mathbf{p}'$, corresponding to the initial condition $f(\mathbf{r}, \mathbf{p}, 0) = \delta(\mathbf{r} - \mathbf{r}') \delta(\mathbf{p} - \mathbf{p}')$, then the solution is[14][15]

$$\begin{aligned} f(\mathbf{r}, \mathbf{p}, t) = {}& \frac{1}{\left( 2\pi \sigma_X \sigma_P \sqrt{1 - \beta^2} \right)^d} \times \\ & \exp\left[ -\frac{1}{2(1 - \beta^2)} \left( \frac{|\mathbf{r} - \boldsymbol{\mu}_X|^2}{\sigma_X^2} + \frac{|\mathbf{p} - \boldsymbol{\mu}_P|^2}{\sigma_P^2} - \frac{2\beta (\mathbf{r} - \boldsymbol{\mu}_X) \cdot (\mathbf{p} - \boldsymbol{\mu}_P)}{\sigma_X \sigma_P} \right) \right], \end{aligned}$$

where

$$\begin{aligned} \sigma_X^2 &= \frac{k_{\text{B}} T}{m \xi^2} \left[ 1 + 2 \xi t - \left(2 - e^{-\xi t}\right)^2 \right], \qquad \sigma_P^2 = m k_{\text{B}} T \left(1 - e^{-2 \xi t}\right), \\ \beta &= \frac{k_{\text{B}} T}{\xi \sigma_X \sigma_P} \left(1 - e^{-\xi t}\right)^2, \\ \boldsymbol{\mu}_X &= \mathbf{r}' + (m \xi)^{-1} \left(1 - e^{-\xi t}\right) \mathbf{p}', \qquad \boldsymbol{\mu}_P = \mathbf{p}' e^{-\xi t}. \end{aligned}$$

In three spatial dimensions, the mean squared displacement is

$$\langle \mathbf{r}(t)^2 \rangle = \int f(\mathbf{r}, \mathbf{p}, t) \, \mathbf{r}^2 \, d\mathbf{r} \, d\mathbf{p} = \boldsymbol{\mu}_X^2 + 3 \sigma_X^2.$$

A path integral equivalent to a Langevin equation can be obtained from the corresponding Fokker–Planck equation or by transforming the Gaussian probability distribution $P^{(\eta)}(\eta) \, d\eta$ of the fluctuating force $\eta$ to a probability distribution of the slow variables, schematically $P(A) \, dA = P^{(\eta)}(\eta(A)) \det(d\eta / dA) \, dA$.
The functional determinant and associated mathematical subtleties drop out if the Langevin equation is discretized in the natural (causal) way, where $A(t + \Delta t) - A(t)$ depends on $A(t)$ but not on $A(t + \Delta t)$. It turns out to be convenient to introduce auxiliary response variables $\tilde{A}$. The path integral equivalent to the generic Langevin equation then reads[16]

$$\int P(A, \tilde{A}) \, dA \, d\tilde{A} = N \int \exp\left( L(A, \tilde{A}) \right) dA \, d\tilde{A},$$

where $N$ is a normalization factor and

$$L(A, \tilde{A}) = \int \sum_{i,j} \left\{ \tilde{A}_i \lambda_{i,j} \tilde{A}_j - \tilde{A}_i \left\{ \delta_{i,j} \frac{dA_j}{dt} - k_{\text{B}} T [A_i, A_j] \frac{d\mathcal{H}}{dA_j} + \lambda_{i,j} \frac{d\mathcal{H}}{dA_j} - \frac{d\lambda_{i,j}}{dA_j} \right\} \right\} dt.$$

The path integral formulation allows for the use of tools from quantum field theory, such as perturbation and renormalization group methods. This formulation is typically referred to as either the Martin–Siggia–Rose formalism[17] or the Janssen–De Dominicis[16][18] formalism, after its developers. The mathematical formalism for this representation can be developed on abstract Wiener space.
https://en.wikipedia.org/wiki/Langevin_equation
In number theory and combinatorics, the rank of an integer partition is a certain number associated with the partition. In fact at least two different definitions of rank appear in the literature. The first definition, with which most of this article is concerned, is that the rank of a partition is the number obtained by subtracting the number of parts in the partition from the largest part in the partition. The concept was introduced by Freeman Dyson in a paper published in the journal Eureka.[1] It was presented in the context of a study of certain congruence properties of the partition function discovered by the Indian mathematical genius Srinivasa Ramanujan. A different concept, sharing the same name, is used in combinatorics, where the rank is taken to be the size of the Durfee square of the partition.

By a partition of a positive integer $n$ we mean a finite multiset $\lambda = \{\lambda_k, \lambda_{k-1}, \dots, \lambda_1\}$ of positive integers satisfying the following two conditions: $\lambda_k \ge \lambda_{k-1} \ge \cdots \ge \lambda_1$ and $\lambda_k + \lambda_{k-1} + \cdots + \lambda_1 = n$. If $\lambda_k, \dots, \lambda_2, \lambda_1$ are distinct, that is, if $\lambda_k > \lambda_{k-1} > \cdots > \lambda_1$, then the partition $\lambda$ is called a strict partition of $n$. The integers $\lambda_k, \lambda_{k-1}, \dots, \lambda_1$ are the parts of the partition. The number of parts in the partition $\lambda$ is $k$ and the largest part in the partition is $\lambda_k$. The rank of the partition $\lambda$ (whether ordinary or strict) is defined as $\lambda_k - k$.[1]

The ranks of the partitions of $n$ take the values $n-1, n-3, n-4, \dots, 3-n, 1-n$ and no others; that is, every integer from $1-n$ to $n-1$ occurs as a rank except $n-2$ and $2-n$.[1]

[Table: ranks of the partitions of the integer 5]

Let $n$ and $q$ be positive integers and let $m$ be any integer. The notation $N(m, n)$ is used for the number of partitions of $n$ having rank $m$, and $N(m, q, n)$ for the number of partitions of $n$ whose rank is congruent to $m$ modulo $q$.[1]

Srinivasa Ramanujan, in a paper published in 1919, proved the following congruences involving the partition function $p(n)$:[2]

$$p(5n + 4) \equiv 0 \pmod{5}, \qquad p(7n + 5) \equiv 0 \pmod{7}, \qquad p(11n + 6) \equiv 0 \pmod{11}.$$

In commenting on this result, Dyson noted that "... although we can prove that the partitions of 5n + 4 can be divided into five equally numerous subclasses, it is unsatisfactory to receive from the proofs no concrete idea of how the division is to be made. We require a proof which will not appeal to generating functions, ...".[1] Dyson introduced the idea of rank of a partition to accomplish the task he set for himself. Using this new idea, he conjectured that the partitions of $5n + 4$ fall into five equally numerous classes when sorted by rank modulo 5, and that the partitions of $7n + 5$ fall into seven equally numerous classes when sorted by rank modulo 7. These conjectures were proved by Atkin and Swinnerton-Dyer in 1954.[3]

The following tables show how the partitions of the integers 4 (5 × n + 4 with n = 0) and 9 (5 × n + 4 with n = 1) get divided into five equally numerous subclasses.

[Table: partitions of the integer 4]

[Table: partitions of the integer 9]

In combinatorics, the phrase rank of a partition is sometimes used to describe a different concept: the rank of a partition $\lambda$ is the largest integer $i$ such that $\lambda$ has at least $i$ parts, each of which is no smaller than $i$.[7] Equivalently, this is the length of the main diagonal in the Young diagram or Ferrers diagram for $\lambda$, or the side length of the Durfee square of $\lambda$.

[Table: ranks (under this alternate definition) of the partitions of the integer 5]
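A brute-force computational check of Dyson's rank and of the equal division of the partitions of 5n + 4 into five classes by rank modulo 5 (the helper partitions generator is written just for this illustration):

```python
# Sketch: compute Dyson ranks (largest part minus number of parts) and
# check that partitions of 5n+4 split evenly by rank mod 5.
from collections import Counter

def partitions(n, max_part=None):
    """Yield the partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

for n in (4, 9, 14):                     # numbers of the form 5n + 4
    ranks = Counter((p[0] - len(p)) % 5 for p in partitions(n))
    print(n, dict(sorted(ranks.items())))
# Each residue class 0..4 appears p(n)/5 times, e.g. 5/5 = 1 for n = 4.
```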
https://en.wikipedia.org/wiki/Rank_of_a_partition
In theoretical linguistics and computational linguistics, probabilistic context-free grammars (PCFGs) extend context-free grammars, similar to how hidden Markov models extend regular grammars. Each production is assigned a probability. The probability of a derivation (parse) is the product of the probabilities of the productions used in that derivation. These probabilities can be viewed as parameters of the model, and for large problems it is convenient to learn these parameters via machine learning. A probabilistic grammar's validity is constrained by the context of its training dataset.

PCFGs originated from grammar theory, and have applications in areas as diverse as natural language processing, the study of the structure of RNA molecules, and the design of programming languages. Designing efficient PCFGs has to weigh factors of scalability and generality. Issues such as grammar ambiguity must be resolved. The grammar design affects results accuracy. Grammar parsing algorithms have various time and memory requirements.

Derivation: the process of recursive generation of strings from a grammar.

Parsing: finding a valid derivation using an automaton.

Parse tree: the alignment of the grammar to a sequence.

An example of a parser for PCFG grammars is the pushdown automaton. The algorithm parses grammar nonterminals from left to right in a stack-like manner. This brute-force approach is not very efficient. In RNA secondary structure prediction, variants of the Cocke–Younger–Kasami (CYK) algorithm provide more efficient alternatives to grammar parsing than pushdown automata.[1] Another example of a PCFG parser is the Stanford Statistical Parser, which has been trained using Treebank.[2]

Similar to a CFG, a probabilistic context-free grammar $G$ can be defined by a quintuple $G = (M, T, R, S, P)$, where $M$ is the set of non-terminal symbols, $T$ the set of terminal symbols, $R$ the set of production rules, $S$ the start symbol, and $P$ the set of probabilities on production rules.

PCFG models extend context-free grammars the same way as hidden Markov models extend regular grammars.

The inside-outside algorithm is an analogue of the forward-backward algorithm. It computes the total probability of all derivations that are consistent with a given sequence, based on some PCFG. This is equivalent to the probability of the PCFG generating the sequence, and is intuitively a measure of how consistent the sequence is with the given grammar. The inside-outside algorithm is used in model parametrization to estimate prior frequencies observed from training sequences in the case of RNAs. Dynamic programming variants of the CYK algorithm find the Viterbi parse of an RNA sequence for a PCFG model. This parse is the most likely derivation of the sequence by the given PCFG.

Context-free grammars are represented as a set of rules inspired by attempts to model natural languages.[3][4][5] The rules are absolute and have a typical syntax representation known as Backus–Naur form. The production rules consist of terminal symbols $\{a, b\}$ and the non-terminal symbol $S$, and a blank $\epsilon$ may also be used as an end point. In the production rules of CFG and PCFG, the left side has only one nonterminal whereas the right side can be any string of terminals or nonterminals. In PCFG, nulls are excluded.[1] An example of a grammar:

$$S \to aS, \quad S \to bS, \quad S \to \epsilon$$

This grammar can be shortened using the '|' ('or') character into:

$$S \to aS \mid bS \mid \epsilon$$

Terminals in a grammar are words, and through the grammar rules a non-terminal symbol is transformed into a string of either terminals and/or non-terminals. The above grammar is read as "beginning from a non-terminal $S$ the emission can generate either $a$ or $b$ or $\epsilon$".
A derivation, for example of the string $ab$, is $S \Rightarrow aS \Rightarrow abS \Rightarrow ab$, rewriting $S$ repeatedly until the blank $\epsilon$ terminates the string.

Ambiguous grammar may result in ambiguous parsing if applied to homographs, since the same word sequence can have more than one interpretation. Pun sentences such as the newspaper headline "Iraqi Head Seeks Arms" are an example of ambiguous parses.

One strategy of dealing with ambiguous parses (originating with grammarians as early as Pāṇini) is to add yet more rules, or to prioritize them so that one rule takes precedence over others. This, however, has the drawback of proliferating the rules, often to the point where they become difficult to manage. Another difficulty is overgeneration, where unlicensed structures are also generated.

Probabilistic grammars circumvent these problems by ranking various productions on frequency weights, resulting in a "most likely" (winner-take-all) interpretation. As usage patterns are altered in diachronic shifts, these probabilistic rules can be re-learned, thus updating the grammar.

Assigning probability to production rules makes a PCFG. These probabilities are informed by observing distributions on a training set of similar composition to the language to be modeled. On most samples of broad language, probabilistic grammars where probabilities are estimated from data typically outperform hand-crafted grammars. CFGs, when contrasted with PCFGs, are not applicable to RNA structure prediction because, while they incorporate the sequence-structure relationship, they lack the scoring metrics that reveal a sequence's structural potential.[6]

A weighted context-free grammar (WCFG) is a more general category of context-free grammar, where each production has a numeric weight associated with it. The weight of a specific parse tree in a WCFG is the product[7] (or sum[8]) of all rule weights in the tree. Each rule weight is included as often as the rule is used in the tree. A special case of WCFGs are PCFGs, where the weights are (logarithms of[9][10]) probabilities. An extended version of the CYK algorithm can be used to find the "lightest" (least-weight) derivation of a string given some WCFG. When the tree weight is the product of the rule weights, WCFGs and PCFGs can express the same set of probability distributions.[7]

Since the 1990s, PCFGs have been applied to model RNA structures.[11][12][13][14][15] Energy minimization[16][17] and PCFGs provide ways of predicting RNA secondary structure with comparable performance.[11][12][1] However, structure prediction by PCFGs is scored probabilistically rather than by minimum free energy calculation. PCFG model parameters are directly derived from frequencies of different features observed in databases of RNA structures,[6] rather than by experimental determination as is the case with energy minimization methods.[18][19]

The types of structure that can be modeled by a PCFG include long-range interactions, pairwise structure and other nested structures; however, pseudoknots cannot be modeled.[11][12][1] PCFGs extend CFGs by assigning probabilities to each production rule. A maximum-probability parse tree from the grammar implies a maximum-probability structure. Since RNAs preserve their structures over their primary sequence, RNA structure prediction can be guided by combining evolutionary information from comparative sequence analysis with biophysical knowledge about structure plausibility based on such probabilities. Also, search results for structural homologs using PCFG rules are scored according to PCFG derivation probabilities.
Therefore, building a grammar to model the behavior of base pairs and single-stranded regions starts with exploring features of a structural multiple sequence alignment of related RNAs.[1]

The above grammar generates a string in an outside-in fashion, that is, the base pair on the furthest extremes of the terminal is derived first. So a string such as $aabaabaa$ is derived by first generating the distal $a$'s on both sides before moving inwards.

A PCFG model's extendibility allows constraining structure prediction by incorporating expectations about different features of an RNA. Such an expectation may reflect, for example, the propensity for assuming a certain structure by an RNA.[6] However, incorporation of too much information may increase PCFG space and memory complexity, and it is desirable that a PCFG-based model be as simple as possible.[6][20]

Every possible string $x$ a grammar generates is assigned a probability weight $P(x \mid \theta)$ given the PCFG model $\theta$. It follows that the sum of the probabilities of all possible grammar productions is $\sum_x P(x \mid \theta) = 1$. The scores for each paired and unpaired residue explain the likelihood for secondary structure formations. Production rules also allow scoring loop lengths as well as the order of base pair stacking; hence it is possible to explore the range of all possible generations, including suboptimal structures, from the grammar, and to accept or reject structures based on score thresholds.[1][6]

RNA secondary structure implementations based on PCFG approaches exist in several different forms. For example, Pfold is used in secondary structure prediction from a group of related RNA sequences,[20] covariance models are used in searching databases for homologous sequences and in RNA annotation and classification,[11][24] and RNApromo, CMFinder and TEISER are used in finding stable structural motifs in RNAs.[25][26][27]

PCFG design impacts secondary structure prediction accuracy. Any useful structure-prediction probabilistic model based on a PCFG has to maintain simplicity without much compromise to prediction accuracy. Too complex a model of excellent performance on a single sequence may not scale,[1] so a grammar-based model must satisfy several such requirements at once.

The production of multiple parse trees per grammar denotes grammar ambiguity. This may be useful in revealing all possible base-pair structures for a grammar. However, an optimal structure is one where there is one and only one correspondence between the parse tree and the secondary structure.

Two types of ambiguity can be distinguished: parse tree ambiguity and structural ambiguity. Structural ambiguity does not affect thermodynamic approaches, as the optimal structure selection is always on the basis of the lowest free-energy scores.[6] Parse tree ambiguity concerns the existence of multiple parse trees per sequence. Such an ambiguity can reveal all possible base-paired structures for the sequence by generating all possible parse trees and then finding the optimal one.[28][29][30] In the case of structural ambiguity, multiple parse trees describe the same secondary structure. This obscures the CYK algorithm's decision on finding an optimal structure, as the correspondence between the parse tree and the structure is not unique.[31] Grammar ambiguity can be checked for by the conditional-inside algorithm.[1][6]

A probabilistic context-free grammar consists of terminal and nonterminal variables.
Each feature to be modeled has a production rule that is assigned a probability estimated from a training set of RNA structures. Production rules are recursively applied until only terminal residues are left.

A starting non-terminal $S$ produces loops. The rest of the grammar proceeds with a parameter $L$ that decides whether a loop is the start of a stem or a single-stranded region, and a parameter $F$ that produces paired bases. The formalism of this simple PCFG associates a probability with each such rule.

The application of PCFGs in predicting structures is a multi-step process. In addition, the PCFG itself can be incorporated into probabilistic models that consider RNA evolutionary history or search for homologous sequences in databases. In an evolutionary history context, inclusion of prior distributions of RNA structures of a structural alignment in the production rules of the PCFG facilitates good prediction accuracy.[21]

Several algorithms dealing with aspects of PCFG-based probabilistic models in RNA structure prediction exist, for instance the inside-outside algorithm and the CYK algorithm. The inside-outside algorithm is a recursive dynamic programming scoring algorithm that can follow expectation-maximization paradigms. It computes the total probability of all derivations that are consistent with a given sequence, based on some PCFG. The inside part scores the subtrees from a parse tree and therefore subsequence probabilities given a PCFG. The outside part scores the probability of the complete parse tree for a full sequence.[32][33] CYK modifies the inside-outside scoring. Note that the term 'CYK algorithm' describes the CYK variant of the inside algorithm that finds an optimal parse tree for a sequence using a PCFG; it extends the actual CYK algorithm used in non-probabilistic CFGs.[1]

The inside algorithm calculates $\alpha(i, j, v)$, the probability, for all $i, j, v$, of a parse subtree rooted at $W_v$ for the subsequence $x_i, \dots, x_j$. The outside algorithm calculates $\beta(i, j, v)$, the probability of a complete parse tree for sequence $x$ from the root, excluding the calculation of $x_i, \dots, x_j$. The variables $\alpha$ and $\beta$ refine the estimation of the probability parameters of a PCFG. It is possible to re-estimate the PCFG parameters by finding the expected number of times a state is used in a derivation, through summing all the products of $\alpha$ and $\beta$ divided by the probability of the sequence $x$ given the model, $P(x \mid \theta)$. It is also possible to find the expected number of times a production rule is used by an expectation-maximization that utilizes the values of $\alpha$ and $\beta$.[32][33] The CYK algorithm calculates $\gamma(i, j, v)$ to find the most probable parse tree $\hat{\pi}$ and yields $\log P(x, \hat{\pi} \mid \theta)$.[1]

Memory and time complexity for general PCFG algorithms in RNA structure prediction are $O(L^2 M)$ and $O(L^3 M^3)$ respectively. Restricting a PCFG may alter this requirement, as is the case with database search methods.

Covariance models (CMs) are a special type of PCFG with applications in database searches for homologs, annotation and RNA classification.
Through CMs it is possible to build PCFG-based RNA profiles where related RNAs can be represented by a consensus secondary structure.[11][12] The RNA analysis package Infernal uses such profiles in inference of RNA alignments.[34] The Rfam database also uses CMs in classifying RNAs into families based on their structure and sequence information.[24]

CMs are designed from a consensus RNA structure. A CM allows indels of unlimited length in the alignment. Terminals constitute states in the CM, and the transition probabilities between the states are 1 if no indels are considered.[1] Grammars in a CM distinguish pairwise-emitting, leftwise-emitting and rightwise-emitting states ($P$, $L$ and $R$) alongside non-emitting start, end and bifurcation states. The model has 6 possible state types, and each state's grammar includes different types of secondary structure probabilities of the non-terminals. The states are connected by transitions. Ideally, current node states connect to all insert states and subsequent node states connect to non-insert states. In order to allow insertion of more than one base, insert states connect to themselves.[1]

In order to score a CM model, the inside-outside algorithms are used. CMs use a slightly different implementation of CYK. Log-odds emission scores for the optimum parse tree, $\log \hat{e}$, are calculated out of the emitting states $P, L, R$. Since these scores are a function of sequence length, a more discriminative measure to recover an optimum parse-tree probability score, $\log P(x, \hat{\pi} \mid \theta)$, is reached by limiting the maximum length of the sequence to be aligned and calculating the log-odds relative to a null model. The computation time of this step is linear in the database size, and the algorithm has a memory complexity of $O(M_a D + M_b D^2)$.[1]

The KH-99 algorithm by Knudsen and Hein lays the basis of the Pfold approach to predicting RNA secondary structure.[20] In this approach, the parameterization requires evolutionary history information derived from an alignment tree in addition to probabilities of columns and mutations. The grammar probabilities are observed from a training dataset.

In a structural alignment, the probabilities of the unpaired-base columns and the paired-base columns are independent of other columns. By counting bases in single-base positions and paired positions, one obtains the frequencies of bases in loops and stems. For base pair $X$ and $Y$, an occurrence of $XY$ is also counted as an occurrence of $YX$. Identical base pairs such as $XX$ are counted twice.

By pairing sequences in all possible ways, overall mutation rates are estimated. In order to recover plausible mutations, a sequence identity threshold should be used so that the comparison is between similar sequences. This approach uses an 85% identity threshold between pairing sequences. First, single-base position differences (except for gapped columns) between sequence pairs are counted, such that if the same position in two sequences had different bases $X$, $Y$, the count of the difference is incremented for each sequence.
For unpaired bases a 4 × 4 mutation rate matrix is used that satisfies the condition that the mutation flow from X to Y is reversible:[35] For basepairs, a 16 × 16 rate distribution matrix is similarly generated.[36][37] The PCFG is used to predict the prior probability distribution of the structure, whereas posterior probabilities are estimated by the inside-outside algorithm, and the most likely structure is found by the CYK algorithm.[20]

After calculating the column prior probabilities, the alignment probability is estimated by summing over all possible secondary structures. Any column C in a secondary structure σ for a sequence D of length l, such that D = (C_1, C_2, ..., C_l), can be scored with respect to the alignment tree T and the mutational model M. The prior distribution given by the PCFG is P(σ|M). The phylogenetic tree T can be calculated from the model by maximum likelihood estimation. Note that gaps are treated as unknown bases, and the summation can be done through dynamic programming.[38]

Each structure in the grammar is assigned production probabilities devised from the structures of the training dataset. These prior probabilities give weight to prediction accuracy.[21][32][33] The number of times each rule is used depends on the observations from the training dataset for that particular grammar feature. These probabilities are written in parentheses in the grammar formalism, and the alternatives for each rule total 100%.[20] For instance:

Given the prior alignment frequencies of the data, the most likely structure from the ensemble predicted by the grammar can then be computed by maximizing P(σ|D, T, M) through the CYK algorithm. The structure with the highest predicted number of correct predictions is reported as the consensus structure.[20]

PCFG-based approaches are desired to be scalable and general enough; compromising speed for accuracy needs to be as minimal as possible. Pfold addresses the limitations of the KH-99 algorithm with respect to scalability, gaps, speed and accuracy.[20]

Whereas PCFGs have proved powerful tools for predicting RNA secondary structure, usage in the field of protein sequence analysis has been limited. Indeed, the size of the amino acid alphabet and the variety of interactions seen in proteins make grammar inference much more challenging.[39] As a consequence, most applications of formal language theory to protein analysis have been mainly restricted to the production of grammars of lower expressive power to model simple functional patterns based on local interactions.[40][41] Since protein structures commonly display higher-order dependencies including nested and crossing relationships, they clearly exceed the capabilities of any CFG.[39] Still, development of PCFGs allows expressing some of those dependencies and provides the ability to model a wider range of protein patterns.
https://en.wikipedia.org/wiki/Probabilistic_parsing
Standardization (American English) or standardisation (British English) is the process of implementing and developing technical standards based on the consensus of different parties that include firms, users, interest groups, standards organizations and governments.[1] Standardization can help maximize compatibility, interoperability, safety, repeatability, efficiency, and quality. It can also facilitate a normalization of formerly custom processes. In social sciences, including economics,[2] the idea of standardization is close to the solution for a coordination problem, a situation in which all parties can realize mutual gains, but only by making mutually consistent decisions. Divergent national standards impose costs on consumers and can be a form of non-tariff trade barrier.[3]

Standard weights and measures were developed by the Indus Valley civilization.[4] The centralized weight and measure system served the commercial interest of Indus merchants, as smaller weight measures were used to measure luxury goods while larger weights were employed for buying bulkier items, such as food grains, etc.[5] Weights existed in multiples of a standard weight and in categories.[5] Technical standardisation enabled gauging devices to be effectively used in angular measurement and measurement for construction.[6] Uniform units of length were used in the planning of towns such as Lothal, Surkotada, Kalibangan, Dolavira, Harappa, and Mohenjo-daro.[4] The weights and measures of the Indus civilization also reached Persia and Central Asia, where they were further modified.[7] Shigeo Iwata describes the excavated weights unearthed from the Indus civilization:

A total of 558 weights were excavated from Mohenjodaro, Harappa, and Chanhu-daro, not including defective weights. They did not find statistically significant differences between weights that were excavated from five different layers, each measuring about 1.5 m in depth. This was evidence that strong control existed for at least a 500-year period. The 13.7-g weight seems to be one of the units used in the Indus valley. The notation was based on the binary and decimal systems. 83% of the weights which were excavated from the above three cities were cubic, and 68% were made of chert.[4]

The implementation of standards in industry and commerce became highly important with the onset of the Industrial Revolution and the need for high-precision machine tools and interchangeable parts. Henry Maudslay developed the first industrially practical screw-cutting lathe in 1800. This allowed for the standardization of screw thread sizes for the first time and paved the way for the practical application of interchangeability (an idea that was already taking hold) to nuts and bolts.[8]

Before this, screw threads were usually made by chipping and filing (that is, with skilled freehand use of chisels and files). Nuts were rare; metal screws, when made at all, were usually for use in wood. Metal bolts passing through wood framing to a metal fastening on the other side were usually fastened in non-threaded ways (such as clinching or upsetting against a washer). Maudslay standardized the screw threads used in his workshop and produced sets of taps and dies that would make nuts and bolts consistently to those standards, so that any bolt of the appropriate size would fit any nut of the same size. This was a major advance in workshop technology.[9]

Maudslay's work, as well as the contributions of other engineers, accomplished a modest amount of industry standardization; some companies' in-house standards spread a bit within their industries.
Joseph Whitworth's screw thread measurements were adopted as the first (unofficial) national standard by companies around the country in 1841. It came to be known as the British Standard Whitworth, and was widely adopted in other countries.[10][11] This new standard specified a 55° thread angle, a thread depth of 0.640327p and a radius of 0.137329p, where p is the pitch. The thread pitch increased with diameter in steps specified on a chart. An example of the use of the Whitworth thread is the Royal Navy's Crimean War gunboats. These were the first instance of "mass-production" techniques being applied to marine engineering.[8]

With the adoption of BSW by British railway lines, many of which had previously used their own standard both for threads and for bolt head and nut profiles, and with improving manufacturing techniques, it came to dominate British manufacturing.

American Unified Coarse was originally based on almost the same imperial fractions. The Unified thread angle is 60° and has flattened crests (Whitworth crests are rounded). Thread pitch is the same in both systems except that the thread pitch for the 1⁄2 in (inch) bolt is 12 threads per inch (tpi) in BSW versus 13 tpi in UNC.

By the end of the 19th century, differences in standards between companies were making trade increasingly difficult and strained. For instance, an iron and steel dealer recorded his displeasure in The Times: "Architects and engineers generally specify such unnecessarily diverse types of sectional material or given work that anything like economical and continuous manufacture becomes impossible. In this country no two professional men are agreed upon the size and weight of a girder to employ for given work."

The Engineering Standards Committee was established in London in 1901 as the world's first national standards body.[12][13] It subsequently extended its standardization work and became the British Engineering Standards Association in 1918, adopting the name British Standards Institution in 1931 after receiving its Royal Charter in 1929. The national standards were adopted universally throughout the country, and enabled the markets to act more rationally and efficiently, with an increased level of cooperation.

After the First World War, similar national bodies were established in other countries. The Deutsches Institut für Normung was set up in Germany in 1917, followed by its counterparts, the American National Standard Institute and the French Commission Permanente de Standardisation, both in 1918.[8]

At a regional level (e.g. Europe, the Americas, Africa, etc.) or at a subregional level (e.g. Mercosur, the Andean Community, South East Asia, South East Africa, etc.), several regional standardization organizations exist (see also Standards Organization). The three regional standards organizations in Europe – the European Standardization Organizations (ESOs), recognised by the EU Regulation on Standardization (Regulation (EU) 1025/2012)[14] – are CEN, CENELEC and ETSI. CEN develops standards for numerous kinds of products, materials, services and processes.
Some sectors covered by CEN include transport equipment and services, chemicals, construction, consumer products, defence and security, energy, food and feed, health and safety, healthcare, the digital sector, machinery and services.[15] The European Committee for Electrotechnical Standardization (CENELEC) is the European standardization organization developing standards in the electrotechnical area, corresponding to the International Electrotechnical Commission (IEC) in Europe.[16]

The first modern international organization (intergovernmental organization), the International Telegraph Union (now the International Telecommunication Union), was created in 1865[17] to set international standards in order to connect national telegraph networks, as a merger of two predecessor organizations (the Bern and Paris treaties) that had similar objectives, but in more limited territories.[18][19] With the advent of radiocommunication soon after its creation, the work of the ITU quickly expanded from the standardization of telegraph communications to developing standards for telecommunications in general.

By the mid to late 19th century, efforts were being made to standardize electrical measurement. Lord Kelvin was an important figure in this process, introducing accurate methods and apparatus for measuring electricity. In 1857, he introduced a series of effective instruments, including the quadrant electrometer, which cover the entire field of electrostatic measurement. He invented the current balance, also known as the Kelvin balance or Ampere balance (sic), for the precise specification of the ampere, the standard unit of electric current.[20]

R. E. B. Crompton became concerned by the large range of different standards and systems used by electrical engineering companies and scientists in the early 20th century. Many companies had entered the market in the 1890s and all chose their own settings for voltage, frequency, current and even the symbols used on circuit diagrams. Adjacent buildings would have totally incompatible electrical systems simply because they had been fitted out by different companies. Crompton could see the lack of efficiency in this system and began to consider proposals for an international standard for electric engineering.[21]

In 1904, Crompton represented Britain at the International Electrical Congress, held in connection with the Louisiana Purchase Exposition in Saint Louis, as part of a delegation by the Institute of Electrical Engineers. He presented a paper on standardisation, which was so well received that he was asked to look into the formation of a commission to oversee the process.[22] By 1906 his work was complete and he drew up a permanent constitution for the International Electrotechnical Commission.[23] The body held its first meeting that year in London, with representatives from 14 countries. In honour of his contribution to electrical standardisation, Lord Kelvin was elected as the body's first President.[24]

The International Federation of the National Standardizing Associations (ISA) was founded in 1926 with a broader remit to enhance international cooperation for all technical standards and specifications. The body was suspended in 1942 during World War II. After the war, ISA was approached by the recently formed United Nations Standards Coordinating Committee (UNSCC) with a proposal to form a new global standards body.
In October 1946, ISA and UNSCC delegates from 25 countries met in London and agreed to join forces to create the new International Organization for Standardization (ISO); the new organization officially began operations in February 1947.[25]

In general, each country or economy has a single recognized national standards body (NSB). Examples include ABNT, AENOR (now called UNE, the Spanish Association for Standardization), AFNOR, ANSI, BSI, DGN, DIN, IRAM, JISC, KATS, SABS, SAC, SCC and SIS. An NSB is likely the sole member from that economy in ISO. NSBs may be either public or private sector organizations, or combinations of the two. For example, the three NSBs of Canada, Mexico and the United States are respectively the Standards Council of Canada (SCC), the General Bureau of Standards (Dirección General de Normas, DGN), and the American National Standards Institute (ANSI). SCC is a Canadian Crown corporation, DGN is a governmental agency within the Mexican Ministry of Economy, and ANSI is a 501(c)(3) non-profit organization with members from both the private and public sectors. The determinants of whether an NSB for a particular economy is a public or private sector body may include the historical and traditional roles that the private sector fills in public affairs in that economy or the development stage of that economy.

Standards can be:

The existence of a published standard does not necessarily imply that it is useful or correct. Just because an item is stamped with a standard number does not, by itself, indicate that the item is fit for any particular use. The people who use the item or service (engineers, trade unions, etc.) or specify it (building codes, government, industry, etc.) have the responsibility to consider the available standards, specify the correct one, enforce compliance, and use the item correctly: validation and verification.

To avoid the proliferation of industry standards, also referred to as private standards, regulators in the United States are instructed by their government offices to adopt "voluntary consensus standards" before relying upon "industry standards" or developing "government standards".[26] Regulatory authorities can reference voluntary consensus standards to translate internationally accepted criteria into public policy.[27][28]

In the context of information exchange, standardization refers to the process of developing standards for specific business processes using specific formal languages. These standards are usually developed in voluntary consensus standards bodies such as the United Nations Center for Trade Facilitation and Electronic Business (UN/CEFACT), the World Wide Web Consortium (W3C), the Telecommunications Industry Association (TIA), and the Organization for the Advancement of Structured Information Standards (OASIS).

There are many specifications that govern the operation and interaction of devices and software on the Internet, which do not use the term "standard" in their names. The W3C, for example, publishes "Recommendations", and the IETF publishes "Requests for Comments" (RFCs). Nevertheless, these publications are often referred to as "standards", because they are the products of regular standardization processes.
Standardized product certifications, such as of organic food, buildings, or possibly sustainable seafood, as well as standardized product safety evaluation and dis/approval procedures (e.g. regulation of chemicals, cosmetics and food safety), can protect the environment.[29][30][31] This effect may depend on associated modified consumer choices, strategic product support/obstruction, requirements and bans, as well as their accordance with a scientific basis, the robustness and applicability of that scientific basis, whether adoption of the certifications is voluntary, and the socioeconomic context (systems of governance and the economy), with most certifications possibly having been largely ineffective so far.[32][additional citation(s) needed]

Moreover, standardized scientific frameworks can enable evaluation of levels of environmental protection, such as of marine protected areas, and serve as potentially evolving guides for improving, planning and monitoring the quality, scope and extent of that protection.[33]

Moreover, technical standards could decrease electronic waste[34][35][36] and reduce resource needs, such as by requiring (or enabling) products to be interoperable, compatible (with other products, infrastructures, environments, etc.), durable, energy-efficient, modular,[37] upgradeable/repairable[38] and recyclable, and to conform to versatile, optimal standards and protocols. Such standardization is not limited to the domain of electronic devices like smartphones and phone chargers but could also be applied to, for example, the energy infrastructure. Policy-makers could develop policies "fostering standard design and interfaces, and promoting the re-use of modules and components across plants to develop more sustainable energy infrastructure".[39] Computers and the Internet are some of the tools that could be used to increase practicability and reduce suboptimal results, detrimental standards and bureaucracy, which is often associated with traditional processes and results of standardization.[40] Taxes and subsidies, and funding of research and development, could be used complementarily.[41] Standardized measurement is used in monitoring, reporting and verification frameworks of environmental impacts, usually of companies, for example to prevent underreporting of greenhouse gas emissions by firms.[42]

In routine product testing and product analysis, results can be reported using official or informal standards. This can be done to increase consumer protection, or to ensure the safety, healthiness, efficiency, performance or sustainability of products. It can be carried out by the manufacturer, an independent laboratory, a government agency, a magazine or others, on a voluntary or commissioned/mandated basis.[43][44][additional citation(s) needed]

Estimating the environmental impacts of food products in a standardized way – as has been done with a dataset of >57,000 food products in supermarkets – could, for example, be used to inform consumers or policy.[45][46] Such estimates may be useful for approaches using personal carbon allowances (or similar quotas) or for targeted alteration of (ultimate overall) costs.

Public information symbols (e.g. hazard symbols), especially when related to safety, are often standardized, sometimes on the international level.[47]

Standardization is also used to ensure safe design and operation of laboratories and similar potentially dangerous workplaces, e.g.
to ensure biosafety levels.[48] There is research into microbiology safety standards used in clinical and research laboratories.[49]

In the context of defense, standardization has been defined by NATO as "the development and implementation of concepts, doctrines, procedures and designs to achieve and maintain the required levels of compatibility, interchangeability or commonality in the operational, procedural, material, technical and administrative fields to attain interoperability".[50]

In some cases, standards are being used in the design and operation of workplaces and products that can impact consumers' health. Some such standards seek to ensure occupational safety and health and ergonomics. For example, chairs[47][51][52][53] (see e.g. active sitting and steps of research) could potentially be designed and chosen using standards that may or may not be based on adequate scientific data. Standards could reduce the variety of products and lead to convergence on fewer broad designs – which can often be efficiently mass-produced via common shared automated procedures and instruments – or formulations deemed to be the most healthy, most efficient, or the best compromise between healthiness and other factors.

Standardization is sometimes, or could also be, used to ensure, increase or enable consumer health protection beyond the workplace and ergonomics, such as standards in food, food production, hygiene products, tap water, cosmetics, drugs/medicine,[54] drink and dietary supplements,[55][56] especially in cases where there is robust scientific data suggesting detrimental impacts on health (e.g. of ingredients), even though the ingredients are substitutable and not necessarily of consumer interest.[additional citation(s) needed]

In the context of assessment, standardization may define how a measuring instrument or procedure is administered in the same way to every subject or patient.[57]:399[58]:71 For example, an educational psychologist may adopt a structured interview to systematically interview the people in concern. By delivering the same procedures, all subjects are evaluated using the same criteria, minimising any confounding variables that would reduce validity.[58]:72 Other examples include the mental status examination and personality tests.

In the context of social criticism and social science, standardization often means the process of establishing standards of various kinds and improving efficiency to handle people, their interactions, cases, and so forth. Examples include formalization of judicial procedure in court, and establishing uniform criteria for diagnosing mental disease. Standardization in this sense is often discussed along with (or synonymously to) such large-scale social changes as modernization, bureaucratization, homogenization, and centralization of society.

In the context of customer service, standardization refers to the process of developing an international standard that enables organizations to focus on customer service, while at the same time providing recognition of success[clarification needed] through a third-party organization, such as the British Standards Institution. An international standard has been developed by The International Customer Service Institute.

In the context of supply chain management and materials management, standardization covers the process of specification and use of any item the company must buy in or make, allowable substitutions, and build or buy decisions. The process of standardization can itself be standardized.
There are at least four levels of standardization: compatibility, interchangeability, commonality and reference. These standardization processes create compatibility, similarity, measurement, and symbol standards. There are typically four different techniques for standardization. Types of standardization process:

Standardization has a variety of benefits and drawbacks for firms and consumers participating in the market, and for technology and innovation.

The primary effect of standardization on firms is that the basis of competition is shifted from integrated systems to individual components within the system. Prior to standardization a company's product must span the entire system because individual components from different competitors are incompatible, but after standardization each company can focus on providing an individual component of the system.[60] When the shift toward competition based on individual components takes place, firms selling tightly integrated systems must quickly shift to a modular approach, supplying other companies with subsystems or components.[61]

Standardization has a variety of benefits for consumers, but one of the greatest is enhanced network effects. Standards increase compatibility and interoperability between products, allowing information to be shared within a larger network and attracting more consumers to use the new technology, further enhancing network effects.[62] Other benefits of standardization to consumers are reduced uncertainty, because consumers can be more certain that they are not choosing the wrong product, and reduced lock-in, because the standard makes it more likely that there will be competing products in the space.[63] Consumers may also get the benefit of being able to mix and match components of a system to align with their specific preferences.[64] Once these initial benefits of standardization are realized, further benefits that accrue to consumers as a result of using the standard are driven mostly by the quality of the technologies underlying that standard.[65]

Probably the greatest downside of standardization for consumers is lack of variety. There is no guarantee that the chosen standard will meet all consumers' needs or even that the standard is the best available option.[64] Another downside is that if a standard is agreed upon before products are available in the market, then consumers are deprived of the penetration pricing that often results when rivals are competing to rapidly increase market share in an attempt to increase the likelihood that their product will become the standard.[64] It is also possible that a consumer will choose a product based upon a standard that fails to become dominant.[66] In this case, the consumer will have spent resources on a product that is ultimately less useful as the result of the standardization process.

Much like the effect on consumers, the effect of standardization on technology and innovation is mixed.[67] Meanwhile, various links between research and standardization have been identified,[68] including as a platform of knowledge transfer,[69] and translated into policy measures (e.g. WIPANO).
Increased adoption of a new technology as a result of standardization is important because rival and incompatible approaches competing in the marketplace can slow or even kill the growth of the technology (a state known as market fragmentation).[70] The shift to a modularized architecture as a result of standardization brings increased flexibility, rapid introduction of new products, and the ability to more closely meet individual customers' needs.[71]

The negative effects of standardization on technology have to do with its tendency to restrict new technology and innovation. Standards shift competition from features to price because the features are defined by the standard. The degree to which this is true depends on the specificity of the standard.[72] Standardization in an area also rules out alternative technologies as options while encouraging others.[73]
https://en.wikipedia.org/wiki/Standardization
In statistical analysis of binary classification and information retrieval systems, the F-score or F-measure is a measure of predictive performance. It is calculated from the precision and recall of the test, where the precision is the number of true positive results divided by the number of all samples predicted to be positive, including those not identified correctly, and the recall is the number of true positive results divided by the number of all samples that should have been identified as positive. Precision is also known as positive predictive value, and recall is also known as sensitivity in diagnostic binary classification.

The F1 score is the harmonic mean of the precision and recall. It thus symmetrically represents both precision and recall in one metric. The more generic F_β score applies additional weights, valuing one of precision or recall more than the other.

The highest possible value of an F-score is 1.0, indicating perfect precision and recall, and the lowest possible value is 0, if the precision or the recall is zero.

The name F-measure is believed to be named after a different F function in Van Rijsbergen's book, when introduced to the Fourth Message Understanding Conference (MUC-4, 1992).[1]

The traditional F-measure or balanced F-score (F1 score) is the harmonic mean of precision and recall:[2]

F1 = 2 · precision · recall / (precision + recall)

With precision = TP / (TP + FP) and recall = TP / (TP + FN), it follows that the numerator of F1 is the sum of their numerators and the denominator of F1 is the sum of their denominators.

A more general F score, F_β, that uses a positive real factor β, where β is chosen such that recall is considered β times as important as precision, is:

F_β = (1 + β²) · precision · recall / (β² · precision + recall)

In terms of Type I and type II errors this becomes:

F_β = (1 + β²) · TP / ((1 + β²) · TP + β² · FN + FP)

Two commonly used values for β are 2, which weighs recall higher than precision, and 0.5, which weighs recall lower than precision.

The F-measure was derived so that F_β "measures the effectiveness of retrieval with respect to a user who attaches β times as much importance to recall as precision".[3] It is based on Van Rijsbergen's effectiveness measure

E = 1 − (α/P + (1 − α)/R)⁻¹.

Their relationship is F_β = 1 − E, where α = 1/(1 + β²).

This is related to the field of binary classification where recall is often termed "sensitivity".

The precision-recall curve, and thus the F_β score, explicitly depends on the ratio r of positive to negative test cases.[12] This means that comparison of the F-score across different problems with differing class ratios is problematic. One way to address this issue (see e.g., Siblini et al., 2020[13]) is to use a standard class ratio r₀ when making such comparisons.

The F-score is often used in the field of information retrieval for measuring search, document classification, and query classification performance.[14] It is particularly relevant in applications which are primarily concerned with the positive class and where the positive class is rare relative to the negative class. Earlier works focused primarily on the F1 score, but with the proliferation of large-scale search engines, performance goals changed to place more emphasis on either precision or recall,[15] and so F_β is seen in wide application.
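The definitions above translate directly into a few lines of code. The following hedged Python sketch computes F_β from raw confusion counts; the counts used in the example are hypothetical.

```python
def f_beta(tp: int, fp: int, fn: int, beta: float = 1.0) -> float:
    """F_beta from the precision/recall definitions given in the text."""
    precision = tp / (tp + fp)   # TP / (TP + FP)
    recall = tp / (tp + fn)      # TP / (TP + FN)
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Hypothetical counts: 8 true positives, 2 false positives, 4 false negatives.
print(f_beta(8, 2, 4))             # F1 ≈ 0.727, equal to 2*TP / (2*TP + FP + FN)
print(f_beta(8, 2, 4, beta=2.0))   # F2 weighs recall higher than precision
```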
The F-score is also used in machine learning.[16] However, the F-measures do not take true negatives into account; hence measures such as the Matthews correlation coefficient, Informedness or Cohen's kappa may be preferred to assess the performance of a binary classifier.[17]

The F-score has been widely used in the natural language processing literature,[18] such as in the evaluation of named entity recognition and word segmentation.

The F1 score is the Dice coefficient of the set of retrieved items and the set of relevant items.[19]

David Hand and others criticize the widespread use of the F1 score since it gives equal importance to precision and recall. In practice, different types of mis-classifications incur different costs. In other words, the relative importance of precision and recall is an aspect of the problem.[22]

According to Davide Chicco and Giuseppe Jurman, the F1 score is less truthful and informative than the Matthews correlation coefficient (MCC) in binary evaluation classification.[23]

David M. W. Powers has pointed out that F1 ignores the true negatives and thus is misleading for unbalanced classes, while kappa and correlation measures are symmetric and assess both directions of predictability – the classifier predicting the true class, and the true class predicting the classifier prediction. He proposes separate multiclass measures, Informedness and Markedness, for the two directions, noting that their geometric mean is correlation.[24]

Another source of critique of F1 is its lack of symmetry: it may change its value when the dataset labeling is changed – the "positive" samples are named "negative" and vice versa. This criticism is met by the P4 metric definition, which is sometimes indicated as a symmetrical extension of F1.[25]

Finally, Ferrer[26] and Dyrland et al.[27] argue that the expected cost (or its counterpart, the expected utility) is the only principled metric for evaluation of classification decisions, having various advantages over the F-score and the MCC. Both works show that the F-score can result in wrong conclusions about the absolute and relative quality of systems.

While the F-measure is the harmonic mean of recall and precision, the Fowlkes–Mallows index is their geometric mean.[28]

The F-score is also used for evaluating classification problems with more than two classes (multiclass classification). A common method is to average the F-score over each class, aiming at a balanced measurement of performance.[29]

Macro F1 is a macro-averaged F1 score aiming at a balanced performance measurement. To calculate macro F1, two different averaging formulas have been used: the F1 score of the (arithmetic) class-wise precision and recall means, or the arithmetic mean of class-wise F1 scores, where the latter exhibits more desirable properties.[30]

Micro F1 is the harmonic mean of micro precision and micro recall. In single-label multi-class classification, micro precision equals micro recall, thus micro F1 is equal to both. However, contrary to a common misconception, micro F1 does not generally equal accuracy, because accuracy takes true negatives into account while micro F1 does not.[31]
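To make the two macro-averaging formulas concrete, here is a hedged Python sketch computing both variants from hypothetical per-class confusion counts; neither the counts nor the class names come from the sources cited above.

```python
def prf(tp, fp, fn):
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    return p, r, 2 * p * r / (p + r)

# Hypothetical per-class (tp, fp, fn) counts.
classes = {"A": (8, 2, 1), "B": (3, 5, 4)}
ps, rs, f1s = zip(*(prf(*c) for c in classes.values()))

# Variant 1: F1 score of the class-wise precision and recall means.
p_bar, r_bar = sum(ps) / len(ps), sum(rs) / len(rs)
macro_f1_v1 = 2 * p_bar * r_bar / (p_bar + r_bar)

# Variant 2: arithmetic mean of the class-wise F1 scores
# (the variant reported to have more desirable properties).
macro_f1_v2 = sum(f1s) / len(f1s)

print(macro_f1_v1, macro_f1_v2)  # the two variants generally differ
```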
https://en.wikipedia.org/wiki/F-score
A Swadesh list (/ˈswɑːdɛʃ/) is a compilation of tentatively universal concepts for the purposes of lexicostatistics. That is, a Swadesh list is a list of forms and concepts which all languages, without exception, have terms for, such as star, hand, water, kill, sleep, and so forth. The number of such terms is small – a few hundred at most, or possibly less than a hundred. The inclusion or exclusion of many terms is subject to debate among linguists; thus, there are several different lists, and some authors may refer to "Swadesh lists". The Swadesh list is named after linguist Morris Swadesh.

Translations of a Swadesh list into a set of languages allow researchers to quantify the interrelatedness of those languages. Swadesh lists are used in lexicostatistics (the quantitative assessment of the genealogical relatedness of languages) and glottochronology (the dating of language divergence). For instance, the terms on a Swadesh list can be compared between two languages (since both languages will have them) to see if they are related and how closely, thus giving useful information that can be further applied to comparison of the languages. (Actual lexicostatistics is quite complicated, and usually sets of languages are compared.)

Morris Swadesh created several versions of his list. He started[1] with a list of 215 meanings (falsely introduced as a list of 225 meanings in the paper due to a spelling error[2]), which he reduced to 165 words for the Salish-Spokane-Kalispel language. In 1952, he published a list of 215 meanings,[3] of which he suggested the removal of 16 for being unclear or not universal, with one added to arrive at 200 words. In 1955,[4] he wrote, "The only solution appears to be a drastic weeding out of the list, in the realization that quality is at least as important as quantity. Even the new list has defects, but they are relatively mild and few in number." After minor corrections, the final 100-word list was published posthumously in 1971[5] and 1972.

Other versions of lexicostatistical test lists were published e.g. by Robert Lees (1953), John A. Rea (1958:145f), Dell Hymes (1960:6), E. Cross (1964, with 241 concepts), W. J. Samarin (1967:220f), D. Wilson (1969, with 57 meanings), Lionel Bender (1969), R. L. Oswald (1971), Winfred P. Lehmann (1984:35f), D. Ringe (1992, passim, different versions), Sergei Starostin (1984, passim, different versions), William S-Y. Wang (1994), M. Lohr (2000, 128 meanings in 18 languages), B. Kessler (2002), and many others.

The Concepticon,[6] a project hosted at the Cross-Linguistic Linked Data (CLLD) project, collects various concept lists (including classical Swadesh lists) across different linguistic areas and times, currently listing 240 different concept lists.[7]

Frequently used and widely available on the internet is the version by Isidore Dyen (1992; 200 meanings in 95 language variants). Since 2010, a team around Michael Dunn has tried to update and enhance that list.[8]

In origin, the words in the Swadesh lists were chosen for their universal, culturally independent availability in as many languages as possible, regardless of their stability (how prone the word is to changing, as all words do over time to a greater or lesser extent, which can include borrowing from another language). However, stability may be important.
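As a minimal illustration of the comparison described above, the following hedged Python sketch computes the share of Swadesh-list concepts for which two languages have cognate terms; the cognacy judgments are linguistic decisions supplied as input, and the five concepts and their labels here are invented for the example.

```python
# concept -> whether the two languages' words for it are judged cognate
# (hypothetical judgments for an invented pair of languages).
cognate = {"water": True, "hand": True, "star": False,
           "sleep": True, "kill": False}

shared = sum(cognate.values()) / len(cognate)
print(f"{shared:.0%} shared cognates")  # the raw figure lexicostatistics starts from
```

Glottochronology would then feed such a percentage into a retention-rate formula to estimate the time since the two languages diverged; as the text notes, real studies compare whole sets of languages and must first establish cognacy via the sound laws.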
The stability of terms on a Swadesh list under language change, and the potential use of this fact for purposes of glottochronology (the study of how languages develop and branch apart over time), have been analyzed by numerous authors, including Marisa Lohr 1999, 2000.[9]

The Swadesh list was put together by Morris Swadesh on the basis of his intuition. Similar more recent lists, such as the Dolgopolsky list (1964) or the Leipzig–Jakarta list (2009), are based on systematic data from many different languages, but they are not yet as widely known nor as widely used as the Swadesh list.

Lexicostatistical test lists are used in lexicostatistics to define subgroupings of languages, and in glottochronology to "provide dates for branching points in the tree".[10] The task of defining (and counting the number of) cognate words in the list is far from trivial, and often is subject to dispute, because cognates do not necessarily look similar, and recognition of cognates presupposes knowledge of the sound laws of the respective languages.

Swadesh's final list, published in 1971,[5] contains 100 terms. Explanations of the terms can be found in Swadesh 1952[3] or, where noted by a dagger (†), in Swadesh 1955. Note that only the original sequence clarifies the correct meaning, which is lost in an alphabetical ordering, e.g. in the case of "27. bark" (originally without the specification here added).

Note: "Claw" was only added in 1955, but was again replaced by many well-known specialists with (finger)nail, because expressions for "claw" are not available in many old, extinct, or lesser-known languages.

The 110-item Global Lexicostatistical Database list uses the original 100-item Swadesh list, in addition to 10 other words from the Swadesh–Yakhontov list.[11]

The most used list nowadays is the Swadesh 207-word list, adapted from Swadesh 1952.[3] In Wiktionary ("Swadesh lists by language"), PanLex[12][13] and in Palisto's "Swadesh Word List of Indo-European languages",[14] hundreds of Swadesh lists in this form can be found.

The Swadesh–Yakhontov list is a 35-word subset of the Swadesh list posited as especially stable by Russian linguist Sergei Yakhontov around the 1960s, although the list was only officially published in 1991.[15] It has been used in lexicostatistics by linguists such as Sergei Starostin. With their Swadesh numbers, they are:[16]

Holman et al. (2008) found that in identifying the relationships between Chinese dialects the Swadesh–Yakhontov list was less accurate than the original Swadesh-100 list. Further, they found that a different (40-word) list (also known as the ASJP list) was just as accurate as the Swadesh-100 list. However, they calculated the relative stability of the words by comparing retentions between languages in established language families. They found no statistically significant difference in the correlations in the families of the Old versus the New World. The ranked Swadesh-100 list, with Swadesh numbers and relative stability, is as follows (Holman et al., Appendix; asterisked words appear on the 40-word list):

In studying the sign languages of Vietnam and Thailand, linguist James Woodward noted that the traditional Swadesh list applied to spoken languages was unsuited for sign languages. The Swadesh list results in overestimation of the relationships between sign languages, due to indexical signs such as pronouns and parts of the body. The modified list is as follows, in mostly alphabetical order:[17]
https://en.wikipedia.org/wiki/Swadesh_list
Polysemy (/pəˈlɪsɪmi/ or /ˈpɒlɪˌsiːmi/;[1][2] from Ancient Greek πολύ- (polý-) 'many' and σῆμα (sêma) 'sign') is the capacity for a sign (e.g. a symbol, morpheme, word, or phrase) to have multiple related meanings. For example, a word can have several word senses.[3] Polysemy is distinct from monosemy, where a word has a single meaning.[3]

Polysemy is distinct from homonymy – or homophony – which is an accidental similarity between two or more words (such as bear the animal, and the verb bear); whereas homonymy is a mere linguistic coincidence, polysemy is not. In discerning whether a given set of meanings represents polysemy or homonymy, it is often necessary to look at the history of the word to see whether the two meanings are historically related. Dictionary writers often list polysemes (words or phrases with different, but related, senses) in the same entry (that is, under the same headword) and enter homonyms as separate headwords (usually with a numbering convention such as ¹bear and ²bear).

A polyseme is a word or phrase with different, but related, senses. Since the test for polysemy is the vague concept of relatedness, judgments of polysemy can be difficult to make. Because applying pre-existing words to new situations is a natural process of language change, looking at words' etymology is helpful in determining polysemy but not the only solution; as words become lost in etymology, what once was a useful distinction of meaning may no longer be so. Some seemingly unrelated words share a common historical origin, however, so etymology is not an infallible test for polysemy, and dictionary writers also often defer to speakers' intuitions to judge polysemy in cases where it contradicts etymology.[4] English has many polysemous words. For example, the verb "to get" can mean "procure" (I'll get the drinks), "become" (she got scared), "understand" (I get it), etc.

In linear or vertical polysemy, one sense of a word is a subset of the other. These are examples of hyponymy and hypernymy, and are sometimes called autohyponyms.[5] For example, 'dog' can be used for 'male dog'. Alan Cruse identifies four types of linear polysemy:[6]

In non-linear polysemy, the original sense of a word is used figuratively to provide a different way of looking at the new subject. Alan Cruse identifies three types of non-linear polysemy:[6]

There are several tests for polysemy, but one of them is zeugma: if one word seems to exhibit zeugma when applied in different contexts, it is probable that the contexts bring out different polysemes of the same word. If the two senses of the same word do not seem to fit, yet seem related, then it is probable that they are polysemous. This test again depends on speakers' judgments about relatedness, which means that it is not infallible, but merely a helpful conceptual aid.

The difference between homonyms and polysemes is subtle. Lexicographers define polysemes within a single dictionary lemma, while homonyms are treated in separate entries, numbering different meanings (or lemmata). Semantic shift can separate a polysemous word into separate homonyms. For example, check as in "bank check" (or cheque), check in chess, and check meaning "verification" are considered homonyms, while they originated as a single word derived from chess in the 14th century.
Psycholinguistic experiments have shown that homonyms and polysemes are represented differently within people's mental lexicon: while the different meanings of homonyms (which are semantically unrelated) tend to interfere or compete with each other during comprehension, this does not usually occur for the polysemes that have semantically related meanings.[4][7][8][9] Results for this contention, however, have been mixed.[10][11][12][13]

For Dick Hebdige,[14] polysemy means that "each text is seen to generate a potentially infinite range of meanings", making, according to Richard Middleton,[15] "any homology, out of the most heterogeneous materials, possible. The idea of signifying practice – texts not as communicating or expressing a pre-existing meaning but as 'positioning subjects' within a process of semiosis – changes the whole basis of creating social meaning".

Charles Fillmore and Beryl Atkins' definition stipulates three elements: (i) the various senses of a polysemous word have a central origin, (ii) the links between these senses form a network, and (iii) understanding the 'inner' one contributes to understanding of the 'outer' one.[16]

One group of polysemes are those in which a word meaning an activity, perhaps derived from a verb, acquires the meanings of those engaged in the activity, or perhaps the results of the activity, or the time or place in which the activity occurs or has occurred. Sometimes only one of those meanings is intended, depending on context, and sometimes multiple meanings are intended at the same time. Other types are derivations from one of the other meanings that leads to a verb or activity.

This example shows the specific polysemy where the same word is used at different levels of a taxonomy.

According to the Oxford English Dictionary, the three most polysemous words in English are run, put, and set, in that order.[18][19]

A notion related to polysemy is colexification – namely, the case when several meanings are expressed by the same word.[20] The main difference between the two notions is one of perspective: polysemy is usually taken in a semasiological way, going from a form to its meanings, whereas colexification is onomasiological, starting from individual meanings and observing how they are colexified (or its opposite, dislexified) in languages.

A lexical conception of polysemy was developed by B. T. S. Atkins, in the form of lexical implication rules.[21] These are rules that describe how words, in one lexical context, can then be used, in a different form, in a related context. A crude example of such a rule is the pastoral idea of "verbizing one's nouns": that certain nouns, used in certain contexts, can be converted into a verb, conveying a related meaning.[22]

Another clarification of polysemy is the idea of predicate transfer[23] – the reassignment of a property to an object that would not otherwise inherently have that property. Thus, the expression "I am parked out back" conveys the meaning of "parked" from "car" to the property of "I possess a car". This avoids incorrect polysemous interpretations of "parked": that "people can be parked", or that "I am pretending to be a car", or that "I am something that can be parked". This is supported by the morphology: "We are parked out back" does not mean that there are multiple cars; rather, that there are multiple passengers (having the property of being in possession of a car).
https://en.wikipedia.org/wiki/Polysemy
A Smurf attack is a distributed denial-of-service attack in which large numbers of Internet Control Message Protocol (ICMP) packets with the intended victim's spoofed source IP are broadcast to a computer network using an IP broadcast address.[1] Most devices on a network will, by default, respond to this by sending a reply to the source IP address. If the number of machines on the network that receive and respond to these packets is very large, the victim's computer will be flooded with traffic. This can slow down the victim's computer to the point where it becomes impossible to work on.

The original tool for creating a Smurf attack was written by Dan Moschuk (alias TFreak) in 1997.[2][3]

In the late 1990s, many IP networks would participate in Smurf attacks if prompted (that is, they would respond to ICMP requests sent to broadcast addresses). The name comes from the idea of very small, but numerous attackers overwhelming a much larger opponent (see Smurfs). Today, administrators can make a network immune to such abuse; therefore, very few networks remain vulnerable to Smurf attacks.[4]

A Smurf amplifier is a computer network that lends itself to being used in a Smurf attack. Smurf amplifiers act to worsen the severity of a Smurf attack because they are configured in such a way that they generate a large number of ICMP replies to the victim at the spoofed source IP address. In DDoS, amplification is the degree of bandwidth enhancement that the original attack traffic undergoes (with the help of Smurf amplifiers) during its transmission towards the victim computer. An amplification factor of 100, for example, means that an attacker could manage to create 100 Mb/s of traffic using just 1 Mb/s of its own bandwidth.[5]

Under the assumption that no countermeasures are taken to dampen the effect of a Smurf attack, this is what happens in the target network with n active hosts (that will respond to ICMP echo requests). The ICMP echo request packets have a spoofed source address (the Smurfs' target) and a destination address (the patsy; the apparent source of the attack). Both addresses can take two forms: unicast and broadcast.

The dual unicast form is comparable with a regular ping: an ICMP echo request is sent to the patsy (a single host), which sends a single ICMP echo reply (a Smurf) back to the target (the single host in the source address). This type of attack has an amplification factor of 1, which means: just a single Smurf per ping.

When the target is a unicast address and the destination is the broadcast address of the target's network, then all hosts in the network will receive an echo request. In return they will each reply to the target, so the target is swamped with n Smurfs. Amplification factor = n. If n is small, a host may be hindered but not crippled. If n is large, a host may come to a halt.

If the target is the broadcast address and the patsy a unicast address, each host in the network will receive a single Smurf per ping, so an amplification factor of 1 per host, but a factor of n for the network. Generally, a network would be able to cope with this form of the attack, if n is not too great.

When both the source and destination address in the original packet are set to the broadcast address of the target network, things start to get out of hand quickly. All hosts receive an echo request, but all replies to that are broadcast again to all hosts. Each host will receive an initial ping, broadcast the reply, and get a reply from all n−1 hosts.
This gives an amplification factor of n for a single host, but an amplification factor of n² for the network.

ICMP echo requests are typically sent once a second. The reply should contain the contents of the request; a few bytes, normally. A single (double broadcast) ping to a network with 100 hosts causes the network to process 10,000 packets. If the payload of the ping is increased to 15,000 bytes (or 10 full packets in Ethernet), then that ping will cause the network to have to process 100,000 large packets per second. Send more packets per second, and any network would collapse under the load. This will render any host in the network unreachable for as long as the attack lasts.

A Smurf attack can overwhelm servers and networks. The bandwidth of the communication network can be exhausted, resulting in the communication network becoming paralyzed.[6]

The fix is two-fold: configure individual hosts and routers to ignore ICMP echo requests sent to broadcast addresses, and configure routers not to forward packets directed to broadcast addresses. It is also important for ISPs to implement ingress filtering, which rejects the attacking packets on the basis of the forged source address.[8]

An example of configuring a router so it will not forward packets to broadcast addresses, for a Cisco router, is:

Router(config-if)# no ip directed-broadcast

(This example does not protect a network from becoming the target of a Smurf attack; it merely prevents the network from participating in a Smurf attack.)

A Fraggle attack (named for the creatures in the puppet TV series Fraggle Rock) is a variation of a Smurf attack where an attacker sends a large amount of UDP traffic to ports 7 (Echo) and 19 (CHARGEN). It works similarly to the Smurf attack in that many computers on the network will respond to this traffic by sending traffic back to the spoofed source IP of the victim, flooding it with traffic.[10]

Fraggle.c, the source code of the attack, was also released by TFreak.[11]
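A back-of-the-envelope check of the amplification arithmetic above, as a hedged Python sketch; the 1500-byte frame size is an assumption implied by "10 full packets in Ethernet".

```python
n = 100                   # hosts that answer ICMP echo requests
per_ping = n * n          # double-broadcast ping: every host's reply is re-broadcast
print(per_ping)           # 10,000 packets for a single small ping

frames = 15000 // 1500    # a 15,000-byte payload spans 10 full Ethernet frames
print(per_ping * frames)  # 100,000 large packets per second at one ping per second
```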
https://en.wikipedia.org/wiki/Smurf_attack
A secure cryptoprocessor is a dedicated computer-on-a-chip or microprocessor for carrying out cryptographic operations, embedded in a packaging with multiple physical security measures, which give it a degree of tamper resistance. Unlike cryptographic processors that output decrypted data onto a bus in a secure environment, a secure cryptoprocessor does not output decrypted data or decrypted program instructions in an environment where security cannot always be maintained.

The purpose of a secure cryptoprocessor is to act as the keystone of a security subsystem, eliminating the need to protect the rest of the subsystem with physical security measures.[1]

A hardware security module (HSM) contains one or more secure cryptoprocessor chips.[2][3][4] These devices are high-grade secure cryptoprocessors used with enterprise servers. A hardware security module can have multiple levels of physical security, with a single-chip cryptoprocessor as its most secure component. The cryptoprocessor does not reveal keys or executable instructions on a bus, except in encrypted form, and zeros keys by attempts at probing or scanning. The crypto chip(s) may also be potted in the hardware security module with other processors and memory chips that store and process encrypted data. Any attempt to remove the potting will cause the keys in the crypto chip to be zeroed. A hardware security module may also be part of a computer (for example an ATM) that operates inside a locked safe to deter theft, substitution, and tampering.

Modern smartcards are probably the most widely deployed form of secure cryptoprocessor, although more complex and versatile secure cryptoprocessors are widely deployed in systems such as automated teller machines, TV set-top boxes, military applications, and high-security portable communication equipment.[citation needed] Some secure cryptoprocessors can even run general-purpose operating systems such as Linux inside their security boundary.

Cryptoprocessors input program instructions in encrypted form and decrypt the instructions to plain instructions, which are then executed within the same cryptoprocessor chip where the decrypted instructions are inaccessibly stored. By never revealing the decrypted program instructions, the cryptoprocessor prevents tampering of programs by technicians who may have legitimate access to the sub-system data bus. This is known as bus encryption. Data processed by a cryptoprocessor is also frequently encrypted.
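The following is a conceptual sketch of bus encryption in Python, not any vendor's interface: the class name, methods, and the choice of AES-GCM are all illustrative assumptions. The point it models is that the key material stays inside the boundary and program instructions cross the external bus only as ciphertext.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

class CryptoBoundary:
    """Toy model of a security boundary; a real device enforces this in hardware."""

    def __init__(self):
        # The key is created inside the boundary and is never exported.
        self._key = AESGCM.generate_key(bit_length=256)

    def provision(self, instructions: bytes) -> bytes:
        """Encrypt a program for storage outside the boundary."""
        nonce = os.urandom(12)
        return nonce + AESGCM(self._key).encrypt(nonce, instructions, None)

    def execute(self, blob: bytes) -> None:
        """Decrypt and 'run' a program; the plaintext never leaves this method."""
        nonce, ciphertext = blob[:12], blob[12:]
        instructions = AESGCM(self._key).decrypt(nonce, ciphertext, None)
        # ... interpret `instructions` internally; only results, never the
        # decrypted program, would be exposed on the external bus.

chip = CryptoBoundary()
encrypted_program = chip.provision(b"opcode-stream")  # all the bus ever sees
chip.execute(encrypted_program)
```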
The Trusted Platform Module (TPM) is an implementation of a secure cryptoprocessor that brings the notion of trusted computing to ordinary PCs by enabling a secure environment.[citation needed] Present TPM implementations focus on providing a tamper-proof boot environment, and persistent and volatile storage encryption.

Security chips for embedded systems are also available that provide the same level of physical protection for keys and other secret material as a smartcard processor or TPM, but in a smaller, less complex and less expensive package.[citation needed] They are often referred to as cryptographic authentication devices and are used to authenticate peripherals, accessories and/or consumables. Like TPMs, they are usually turnkey integrated circuits intended to be embedded in a system, usually soldered to a PC board.

Security measures used in secure cryptoprocessors:

Secure cryptoprocessors, while useful, are not invulnerable to attack, particularly for well-equipped and determined opponents (e.g. a government intelligence agency) who are willing to expend enough resources on the project.[5][6]

One attack on a secure cryptoprocessor targeted the IBM 4758.[7] A team at the University of Cambridge reported the successful extraction of secret information from an IBM 4758, using a combination of mathematics and special-purpose codebreaking hardware. However, this attack was not practical in real-world systems because it required the attacker to have full access to all API functions of the device. Normal and recommended practices use the integral access control system to split authority so that no one person could mount the attack.[citation needed]

While the vulnerability they exploited was a flaw in the software loaded on the 4758, and not the architecture of the 4758 itself, their attack serves as a reminder that a security system is only as secure as its weakest link: the strong link of the 4758 hardware was rendered useless by flaws in the design and specification of the software loaded on it.

Smartcards are significantly more vulnerable, as they are more open to physical attack. Additionally, hardware backdoors can undermine security in smartcards and other cryptoprocessors unless investment is made in anti-backdoor design methods.[8]

In the case of full disk encryption applications, especially when implemented without a boot PIN, a cryptoprocessor would not be secure against a cold boot attack[9] if data remanence could be exploited to dump memory contents after the operating system has retrieved the cryptographic keys from its TPM.

However, if all of the sensitive data is stored only in cryptoprocessor memory and not in external storage, and the cryptoprocessor is designed to be unable to reveal keys or decrypted or unencrypted data on chip bonding pads or solder bumps, then such protected data would be accessible only by probing the cryptoprocessor chip after removing any packaging and metal shielding layers from the cryptoprocessor chip. This would require both physical possession of the device as well as skills and equipment beyond those of most technical personnel.

Other attack methods involve carefully analyzing the timing of various operations that might vary depending on the secret value, or mapping the current consumption versus time to identify differences in the way that '0' bits are handled internally versus '1' bits. Or the attacker may apply temperature extremes, excessively high or low clock frequencies, or a supply voltage that exceeds the specifications in order to induce a fault. The internal design of the cryptoprocessor can be tailored to prevent these attacks.

Some secure cryptoprocessors contain dual processor cores and generate inaccessible encryption keys when needed so that even if the circuitry is reverse engineered, it will not reveal any keys that are necessary to securely decrypt software booted from encrypted flash memory or communicated between cores.[10]

The first single-chip cryptoprocessor design was for copy protection of personal computer software (see US Patent 4,168,396, Sept 18, 1979) and was inspired by Bill Gates's Open Letter to Hobbyists.

The hardware security module (HSM), a type of secure cryptoprocessor,[3][4] was invented by Egyptian-American engineer Mohamed M. Atalla,[11] in 1972.[12]
Atalla[11] in 1972.[12] He invented a high security module dubbed the "Atalla Box", which encrypted PIN and ATM messages, and protected offline devices with an un-guessable PIN-generating key.[13] In 1972, he filed a patent for the device.[14] He founded Atalla Corporation (now Utimaco Atalla) that year,[12] and commercialized the "Atalla Box" the following year,[13] officially as the Identikey system.[15] It was a card reader and customer identification system, consisting of a card reader console, two customer PIN pads, an intelligent controller and a built-in electronic interface package.[15] It allowed the customer to type in a secret code, which was transformed by the device, using a microprocessor, into another code for the teller.[16] During a transaction, the customer's account number was read by the card reader.[15] It was a success, and led to the wide use of high security modules.[13] Fearful that Atalla would dominate the market, banks and credit card companies began working on an international standard in the 1970s.[13] The IBM 3624, launched in the late 1970s, adopted a similar PIN verification process to the earlier Atalla system.[17] Atalla was an early competitor to IBM in the banking security market.[14][18] At the National Association of Mutual Savings Banks (NAMSB) conference in January 1976, Atalla unveiled an upgrade to its Identikey system, called the Interchange Identikey. It added the capability of processing online transactions and dealing with network security. Designed with the focus of taking bank transactions online, the Identikey system was extended to shared-facility operations. It was consistent and compatible with various switching networks, and was capable of resetting itself electronically to any one of 64,000 irreversible nonlinear algorithms as directed by card data information. The Interchange Identikey device was released in March 1976.[16] Later, in 1979, Atalla introduced the first network security processor (NSP).[19] Atalla's HSM products protected 250 million card transactions every day as of 2013,[12] and secured the majority of the world's ATM transactions as of 2014.[11]
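Returning to the timing-analysis attacks described above: the standard countermeasure is to make secret-dependent operations take the same time regardless of the data. The following is a minimal sketch in C of that general idea, not any vendor's actual firmware; the function names are invented for illustration.

#include <stddef.h>
#include <stdint.h>

/* Leaky: returns at the first mismatching byte, so the running time reveals
 * how long a guessed PIN/key prefix matches the secret. */
int naive_equal(const uint8_t *a, const uint8_t *b, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (a[i] != b[i]) return 0;   /* early exit leaks timing */
    return 1;
}

/* Constant-time: always touches every byte; the accumulated difference is
 * inspected only once at the end, so timing is independent of the secret. */
int ct_equal(const uint8_t *a, const uint8_t *b, size_t n)
{
    uint8_t diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= a[i] ^ b[i];          /* OR together all bit differences */
    return diff == 0;
}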
https://en.wikipedia.org/wiki/Secure_cryptoprocessor
Convex analysis is the branch of mathematics devoted to the study of properties of convex functions and convex sets, often with applications in convex minimization, a subdomain of optimization theory. A subset $C \subseteq X$ of some vector space $X$ is convex if, for all $x, y \in C$ and all real $0 \le r \le 1$, the point $r x + (1 - r) y$ also belongs to $C$. Throughout, $f : X \to [-\infty, \infty]$ will be a map valued in the extended real numbers $[-\infty, \infty] = \mathbb{R} \cup \{\pm\infty\}$ with a domain $\operatorname{domain} f = X$ that is a convex subset of some vector space. The map $f : X \to [-\infty, \infty]$ is a convex function if $$f(r x + (1 - r) y) \le r f(x) + (1 - r) f(y) \qquad \text{(Convexity } \le)$$ holds for any real $0 < r < 1$ and any $x, y \in X$ with $x \neq y$. If this remains true of $f$ when the defining inequality (Convexity $\le$) is replaced by the strict inequality $$f(r x + (1 - r) y) < r f(x) + (1 - r) f(y) \qquad \text{(Convexity } <)$$ then $f$ is called strictly convex.[1] Convex functions are related to convex sets. Specifically, the function $f$ is convex if and only if its epigraph is a convex set.[2] The epigraphs of extended real-valued functions play a role in convex analysis that is analogous to the role played by graphs of real-valued functions in real analysis. Specifically, the epigraph of an extended real-valued function provides geometric intuition that can be used to help formulate or prove conjectures. The domain of a function $f : X \to [-\infty, \infty]$ is denoted by $\operatorname{domain} f$, while its effective domain is the set[2] $$\operatorname{dom} f = \{x \in X : f(x) < \infty\}.$$ The function $f : X \to [-\infty, \infty]$ is called proper if $\operatorname{dom} f \neq \varnothing$ and $f(x) > -\infty$ for all $x \in \operatorname{domain} f$.[2] Alternatively, this means that there exists some $x$ in the domain of $f$ at which $f(x) \in \mathbb{R}$ and $f$ is also never equal to $-\infty$. In words, a function is proper if its domain is not empty, it never takes on the value $-\infty$, and it also is not identically equal to $+\infty$. If $f : \mathbb{R}^n \to [-\infty, \infty]$ is a proper convex function then there exist some vector $b \in \mathbb{R}^n$ and some $r \in \mathbb{R}$ such that $$f(x) \ge x \cdot b - r \quad \text{for every } x,$$ where $x \cdot b$ denotes the dot product of these vectors.
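As a quick instance of the convexity inequality above (a standard textbook fact, not taken from this article): every norm $\|\cdot\|$ on a vector space is a convex function, since for $0 < r < 1$,
$$\|r x + (1 - r) y\| \le \|r x\| + \|(1 - r) y\| = r \|x\| + (1 - r) \|y\|$$
by the triangle inequality and absolute homogeneity. A norm is never strictly convex as a function, however: taking $y = 2x$ with $x \neq 0$ makes both sides equal to $(2 - r)\|x\|$, so the inequality can hold with equality for $x \neq y$.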
The convex conjugate of an extended real-valued function $f : X \to [-\infty, \infty]$ (not necessarily convex) is the function $f^* : X^* \to [-\infty, \infty]$ from the (continuous) dual space $X^*$ of $X$, defined by[3] $$f^*(x^*) = \sup_{x \in X} \{\langle x^*, x \rangle - f(x)\},$$ where the brackets $\langle \cdot, \cdot \rangle$ denote the canonical duality $\langle x^*, z \rangle := x^*(z)$. The biconjugate of $f$ is the map $f^{**} = (f^*)^* : X \to [-\infty, \infty]$ defined by $f^{**}(x) := \sup_{z^* \in X^*} \{\langle x, z^* \rangle - f^*(z^*)\}$ for every $x \in X$. If $\operatorname{Func}(X; Y)$ denotes the set of $Y$-valued functions on $X$, then the map $\operatorname{Func}(X; [-\infty, \infty]) \to \operatorname{Func}(X^*; [-\infty, \infty])$ defined by $f \mapsto f^*$ is called the Legendre–Fenchel transform. If $f : X \to [-\infty, \infty]$ and $x \in X$ then the subdifferential set is $$\partial f(x) := \{x^* \in X^* : f(y) \ge f(x) + \langle x^*, y - x \rangle \text{ for all } y \in X\}.$$ For example, in the important special case where $f = \|\cdot\|$ is a norm on $X$, it can be shown[proof 1] that if $0 \neq x \in X$ then this definition reduces down to $$\partial f(x) = \{x^* \in X^* : \langle x^*, x \rangle = \|x\| \text{ and } \|x^*\| = 1\},$$ where $\|x^*\|$ denotes the dual norm. For any $x \in X$ and $x^* \in X^*$, $f(x) + f^*(x^*) \ge \langle x^*, x \rangle$, which is called the Fenchel–Young inequality. This inequality is an equality (i.e. $f(x) + f^*(x^*) = \langle x^*, x \rangle$) if and only if $x^* \in \partial f(x)$. It is in this way that the subdifferential set $\partial f(x)$ is directly related to the convex conjugate $f^*(x^*)$. The biconjugate of a function $f : X \to [-\infty, \infty]$ is the conjugate of the conjugate, typically written as $f^{**} : X \to [-\infty, \infty]$. The biconjugate is useful for showing when strong or weak duality hold (via the perturbation function). For any $x \in X$, the inequality $f^{**}(x) \le f(x)$ follows from the Fenchel–Young inequality. For proper functions, $f = f^{**}$ if and only if $f$ is convex and lower semi-continuous, by the Fenchel–Moreau theorem.[3][4] A convex minimization (primal) problem is one of the form: find $\inf_{x \in M} f(x)$ for a convex function $f$ and a convex subset $M \subseteq X$. In optimization theory, the duality principle states that optimization problems may be viewed from either of two perspectives, the primal problem or the dual problem. In general, given two dual pairs of separated locally convex spaces $(X, X^*)$ and $(Y, Y^*)$, and given the function $f : X \to [-\infty, \infty]$, we can define the primal problem as finding $x$ such that $$f(x) = \inf_{x' \in X} f(x').$$ If there are constraint conditions, these can be built into the function $f$ by letting $f = f + I_{\mathrm{constraints}}$, where $I$ is the indicator function.
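A standard worked example of the conjugate (textbook material, not from this article): for $f(x) = \tfrac{1}{2} x^2$ on $X = \mathbb{R}$,
$$f^*(x^*) = \sup_{x \in \mathbb{R}} \{x^* x - \tfrac{1}{2} x^2\} = \tfrac{1}{2} (x^*)^2,$$
the supremum being attained at $x = x^*$. Since $f$ is proper, convex, and lower semi-continuous, the Fenchel–Moreau theorem gives $f^{**} = f$, which can also be checked directly. The Fenchel–Young inequality here reads $\tfrac{1}{2} x^2 + \tfrac{1}{2}(x^*)^2 \ge x^* x$, equivalent to $(x - x^*)^2 \ge 0$, with equality exactly when $x^* = x$, in agreement with $\partial f(x) = \{x\}$.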
Then let $F : X \times Y \to [-\infty, \infty]$ be a perturbation function such that $F(x, 0) = f(x)$.[5] The dual problem with respect to the chosen perturbation function is given by $$\sup_{y^* \in Y^*} -F^*(0, y^*),$$ where $F^*$ is the convex conjugate in both variables of $F$. The duality gap is the difference of the right and left hand sides of the inequality[6][5][7] $$\sup_{y^* \in Y^*} -F^*(0, y^*) \le \inf_{x \in X} F(x, 0).$$ This principle is the same as weak duality. If the two sides are equal to each other, then the problem is said to satisfy strong duality. There are many conditions for strong duality to hold, such as Slater's condition for convex problems. For a convex minimization problem with inequality constraints, $$\min_x f(x) \quad \text{subject to } g_i(x) \le 0,\ i = 1, \ldots, m,$$ the Lagrangian dual problem is $$\sup_u \inf_x L(x, u) \quad \text{subject to } u_i \ge 0,\ i = 1, \ldots, m,$$ where the objective function $L(x, u)$ is the Lagrange dual function defined as follows: $$L(x, u) = f(x) + \sum_{j=1}^{m} u_j g_j(x).$$
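A concrete instance of this construction (a standard example, not taken from the article): minimize $f(x) = x^2$ over $x \in \mathbb{R}$ subject to $g(x) = 1 - x \le 0$. Then
$$L(x, u) = x^2 + u(1 - x), \qquad \inf_{x \in \mathbb{R}} L(x, u) = u - \frac{u^2}{4},$$
the infimum being attained at $x = u/2$. The dual problem $\sup_{u \ge 0} \left(u - \tfrac{u^2}{4}\right)$ has value $1$ at $u = 2$, matching the primal optimum $f(1) = 1$: strong duality holds here, as the problem is convex with a strictly feasible point (Slater's condition).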
https://en.wikipedia.org/wiki/Convex_analysis
The Style Generative Adversarial Network, or StyleGAN for short, is an extension to the GAN architecture introduced by Nvidia researchers in December 2018,[1] and made source-available in February 2019.[2][3] StyleGAN depends on Nvidia's CUDA software, GPUs, and Google's TensorFlow,[4] or Meta AI's PyTorch, which supersedes TensorFlow as the official implementation library in later StyleGAN versions.[5] The second version of StyleGAN, called StyleGAN2, was published on February 5, 2020. It removes some of the characteristic artifacts and improves the image quality.[6][7] Nvidia introduced StyleGAN3, described as an "alias-free" version, on June 23, 2021, and made its source available on October 12, 2021.[8] A direct predecessor of the StyleGAN series is the Progressive GAN, published in 2017.[9] In December 2018, Nvidia researchers distributed a preprint with accompanying software introducing StyleGAN, a GAN for producing an unlimited number of (often convincing) portraits of fake human faces. StyleGAN was able to run on Nvidia's commodity GPU processors. In February 2019, Uber engineer Phillip Wang used the software to create the website This Person Does Not Exist, which displayed a new face on each web page reload.[10][11] Wang himself has expressed amazement, given that humans are evolved to specifically understand human faces, that StyleGAN can nevertheless competitively "pick apart all the relevant features (of human faces) and recompose them in a way that's coherent."[12] In September 2019, a website called Generated Photos published 100,000 images as a collection of stock photos.[13] The collection was made using a private dataset shot in a controlled environment with similar light and angles.[14] Similarly, two faculty at the University of Washington's Information School used StyleGAN to create Which Face is Real?, which challenged visitors to differentiate between a fake and a real face shown side by side.[11] The faculty stated the intention was to "educate the public" about the existence of this technology so they could be wary of it, "just like eventually most people were made aware that you can Photoshop an image".[15] The second version of StyleGAN, called StyleGAN2, was published on February 5, 2020. It removes some of the characteristic artifacts and improves the image quality.[6][7] In 2021, a third version was released, improving consistency between fine and coarse details in the generator. Dubbed "alias-free", this version was implemented with PyTorch.[16] In December 2019, Facebook took down a network of accounts with false identities, and mentioned that some of them had used profile pictures created with machine learning techniques.[17] Progressive GAN[9] is a method for stably training a GAN for large-scale image generation, by growing the GAN generator from small to large scale in a pyramidal fashion. Like SinGAN, it decomposes the generator as $G = G_1 \circ G_2 \circ \cdots \circ G_N$, and the discriminator as $D = D_N \circ D_{N-1} \circ \cdots \circ D_1$. During training, at first only $G_N, D_N$ are used in a GAN game to generate 4x4 images. Then $G_{N-1}, D_{N-1}$ are added to reach the second stage of the GAN game, to generate 8x8 images, and so on, until a GAN game generating 1024x1024 images is reached. To avoid discontinuity between stages of the GAN game, each new layer is "blended in" (Figure 2 of the paper[9]).
For example, when the second stage of the GAN game begins, the output of the newly added 8x8 layers is blended with the upsampled output of the trained 4x4 layers, with a mixing weight that gradually shifts toward the new layers. StyleGAN is designed as a combination of Progressive GAN with neural style transfer.[18] The key architectural choice of StyleGAN-1 is a progressive growth mechanism, similar to Progressive GAN. Each generated image starts as a constant[note 1] $4 \times 4 \times 512$ array, which is repeatedly passed through style blocks. Each style block applies a "style latent vector" via an affine transform ("adaptive instance normalization"), similar to how neural style transfer uses the Gramian matrix. It then adds noise and normalizes (subtracts the mean, then divides by the standard deviation); a sketch of this normalization step is given at the end of this section. At training time, usually only one style latent vector is used per generated image, but sometimes two ("mixing regularization") in order to encourage each style block to independently perform its stylization without expecting help from other style blocks (since they might receive an entirely different style latent vector). After training, multiple style latent vectors can be fed into each style block. Those fed to the lower layers control the large-scale styles, and those fed to the higher layers control the fine-detail styles. Style-mixing between two images $x, x'$ can be performed as well. First, run a gradient descent to find $z, z'$ such that $G(z) \approx x$ and $G(z') \approx x'$. This is called "projecting an image back to style latent space". Then, $z$ can be fed to the lower style blocks, and $z'$ to the higher style blocks, to generate a composite image that has the large-scale style of $x$ and the fine-detail style of $x'$. Multiple images can also be composed this way. StyleGAN2 improves upon StyleGAN in two ways. One, it applies the style latent vector to transform the convolution layer's weights instead, thus solving the "blob" problem.[19] The "blob" problem, roughly speaking, arises because using the style latent vector to normalize the generated image destroys useful information. Consequently, the generator learned to create a "distraction" in the form of a large blob, which absorbs most of the effect of normalization (somewhat similar to using flares to distract a heat-seeking missile). Two, it uses residual connections, which help it avoid the phenomenon where certain features get stuck at fixed intervals of pixels. For example, the seam between two teeth may be stuck at pixels divisible by 32, because the generator learned to generate teeth during stage N-5, and consequently could only generate primitive teeth at that stage, before scaling up 5 times (hence intervals of 32). This was updated by StyleGAN2-ADA ("ADA" stands for "adaptive"),[20] which uses invertible data augmentation. It also tunes the amount of data augmentation applied by starting at zero and gradually increasing it until an "overfitting heuristic" reaches a target level, hence the name "adaptive". StyleGAN3[21] improves upon StyleGAN2 by solving the "texture sticking" problem, which can be seen in the official videos.[22] The authors analyzed the problem using the Nyquist–Shannon sampling theorem, and argued that the layers in the generator learned to exploit the high-frequency signal in the pixels they operate upon. To solve this, they proposed imposing strict lowpass filters between each of the generator's layers, so that the generator is forced to operate on the pixels in a way faithful to the continuous signals they represent, rather than operating on them as merely discrete signals.
They further imposed rotational and translational invariance by using more signal filters. The resulting StyleGAN-3 is able to generate images that rotate and translate smoothly, and without texture sticking.
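Returning to the adaptive instance normalization step in the StyleGAN-1 style block described above, the following is a minimal C sketch of that operation, written for illustration only; the function and parameter names are invented here, and real implementations run on the GPU over 4-D tensors.

#include <math.h>
#include <stddef.h>

/* Adaptive instance normalization: each channel of a feature map is
 * normalized to zero mean and unit variance, then scaled and shifted by
 * per-channel style coefficients derived from the style latent vector
 * via a learned affine transform (the affine step is assumed done). */
void adain(float *feat, size_t channels, size_t n_px,
           const float *style_scale, const float *style_shift)
{
    for (size_t c = 0; c < channels; c++) {
        float *x = feat + c * n_px;          /* one channel, n_px pixels */
        float mean = 0.0f, var = 0.0f;
        for (size_t i = 0; i < n_px; i++) mean += x[i];
        mean /= (float)n_px;
        for (size_t i = 0; i < n_px; i++) {
            float d = x[i] - mean;
            var += d * d;
        }
        var /= (float)n_px;
        float inv_std = 1.0f / sqrtf(var + 1e-8f);  /* epsilon for stability */
        /* normalize, then apply the per-channel style scale and shift */
        for (size_t i = 0; i < n_px; i++)
            x[i] = style_scale[c] * (x[i] - mean) * inv_std + style_shift[c];
    }
}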
https://en.wikipedia.org/wiki/StyleGAN
Richard J. Boys (6 April 1960 – 5 March 2019) was a statistician best known for his contributions to Bayesian inference, hidden Markov models and stochastic systems.[1] Richard attended Newcastle University, where he obtained a BSc in mathematics in 1981. He went on to do a master's and a doctorate at the University of Sheffield, completing the latter in 1985.[1] In 1986, Boys published his first paper, "Screening in a Normal Model", which was co-written with Ian Dunsmore and appeared in Series B of the RSS's journal. He was known for collaborating in his papers.[2] In the same year, he started a lectureship at Newcastle University and would stay at Newcastle for his whole career. In 1996, he became a senior lecturer, and in 2005, he became Professor of Applied Statistics.[1] Around the end of the 1990s, Richard started to steer towards statistics in biology and was particularly interested in Markov models for segmenting DNA sequences. This led him to research biological and computational stochastic systems, which widened out to stochastic systems in general, where most of his contributions lay.[1] His most cited paper, "Bayesian inference for a stochastic kinetic model", was featured in the scientific journal Statistics and Computing in 2008. The paper outlined how exact Bayesian inference may be possible for the parameters of a general range of biochemical network models, which helped create a new field of research in computational biology.[1][3] Richard embarked on a long-standing collaboration with mathematicians and archaeologists, together with another statistician and colleague, Andrew Golightly. They researched inference for population dynamics during the Neolithic period, which led to publications in archaeology, physics and statistics.[1] Richard was fond of visiting Australia. He first visited the country in 2003 to attend a bioinformatics conference in Brisbane. He was also an Associate Investigator for the ARC Centre of Excellence for Mathematical and Statistical Frontiers.[4] He held a Deputy Head position from 2004 to 2009. He was also on the Newcastle University Senate for a term. By the time of his death, he was Head of Pure Mathematics and Statistics.[1]
https://en.wikipedia.org/wiki/Richard_James_Boys
In mathematics, the Krylov–Bogolyubov theorem (also known as the existence of invariant measures theorem) may refer to either of two related fundamental theorems within the theory of dynamical systems. The theorems guarantee the existence of invariant measures for certain "nice" maps defined on "nice" spaces, and were named after the Russian-Ukrainian mathematicians and theoretical physicists Nikolay Krylov and Nikolay Bogolyubov, who proved them.[1] Theorem (Krylov–Bogolyubov). Let $X$ be a compact, metrizable topological space and $F : X \to X$ a continuous map. Then $F$ admits an invariant Borel probability measure. That is, if $\operatorname{Borel}(X)$ denotes the Borel $\sigma$-algebra generated by the collection $T$ of open subsets of $X$, then there exists a probability measure $\mu : \operatorname{Borel}(X) \to [0, 1]$ such that for any subset $A \in \operatorname{Borel}(X)$, $$\mu(F^{-1}(A)) = \mu(A).$$ In terms of the push forward, this states that $$F_*(\mu) = \mu.$$ Let $X$ be a Polish space and let $P_t$, $t \ge 0$, be the transition probabilities for a time-homogeneous Markov semigroup on $X$, i.e. $$\Pr[X_t \in A \mid X_0 = x] = P_t(x, A).$$ Theorem (Krylov–Bogolyubov). If there exists a point $x \in X$ for which the family of probability measures $\{P_t(x, \cdot) \mid t > 0\}$ is uniformly tight and the semigroup $(P_t)$ satisfies the Feller property, then there exists at least one invariant measure for $(P_t)$, i.e. a probability measure $\mu$ on $X$ such that $$\mu(A) = \int_X P_t(x, A)\, \mathrm{d}\mu(x) \quad \text{for all } A \in \operatorname{Borel}(X),\ t > 0.$$ This article incorporates material from Krylov-Bogolubov theorem on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
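To illustrate the first theorem (a standard example, not taken from this article): take $X = \mathbb{R}/\mathbb{Z}$, the circle, which is compact and metrizable, and let $F(x) = x + \alpha \bmod 1$ be rotation by a fixed angle $\alpha$. The theorem guarantees an invariant Borel probability measure, and indeed Lebesgue measure $\mu$ on the circle satisfies $\mu(F^{-1}(A)) = \mu(A)$ for every Borel set $A$, since $F^{-1}(A) = A - \alpha$ is just a translate of $A$. For irrational $\alpha$ this is in fact the unique invariant probability measure (unique ergodicity), while for rational $\alpha$ many invariant measures exist.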
https://en.wikipedia.org/wiki/Krylov%E2%80%93Bogolyubov_theorem
OpenCL (Open Computing Language) is a framework for writing programs that execute across heterogeneous platforms consisting of central processing units (CPUs), graphics processing units (GPUs), digital signal processors (DSPs), field-programmable gate arrays (FPGAs) and other processors or hardware accelerators. OpenCL specifies a programming language (based on C99) for programming these devices and application programming interfaces (APIs) to control the platform and execute programs on the compute devices. OpenCL provides a standard interface for parallel computing using task- and data-based parallelism. OpenCL is an open standard maintained by the Khronos Group, a non-profit, open standards organisation. Conformant implementations (those that have passed the Conformance Test Suite) are available from a range of companies including AMD, Arm, Cadence, Google, Imagination, Intel, Nvidia, Qualcomm, Samsung, SPI and Verisilicon.[8][9] OpenCL views a computing system as consisting of a number of compute devices, which might be central processing units (CPUs) or "accelerators" such as graphics processing units (GPUs), attached to a host processor (a CPU). It defines a C-like language for writing programs. Functions executed on an OpenCL device are called "kernels".[10]: 17 A single compute device typically consists of several compute units, which in turn comprise multiple processing elements (PEs). A single kernel execution can run on all or many of the PEs in parallel. How a compute device is subdivided into compute units and PEs is up to the vendor; a compute unit can be thought of as a "core", but the notion of core is hard to define across all the types of devices supported by OpenCL (or even within the category of "CPUs"),[11]: 49–50 and the number of compute units may not correspond to the number of cores claimed in vendors' marketing literature (which may actually be counting SIMD lanes).[12] In addition to its C-like programming language, OpenCL defines an application programming interface (API) that allows programs running on the host to launch kernels on the compute devices and manage device memory, which is (at least conceptually) separate from host memory. Programs in the OpenCL language are intended to be compiled at run-time, so that OpenCL-using applications are portable between implementations for various host devices.[13] The OpenCL standard defines host APIs for C and C++; third-party APIs exist for other programming languages and platforms such as Python,[14] Java, Perl,[15] D[16] and .NET.[11]: 15 An implementation of the OpenCL standard consists of a library that implements the API for C and C++, and an OpenCL C compiler for the compute devices targeted. In order to open the OpenCL programming model to other languages or to protect the kernel source from inspection, the Standard Portable Intermediate Representation (SPIR)[17] can be used as a target-independent way to ship kernels between a front-end compiler and the OpenCL back-end. More recently the Khronos Group has ratified SYCL,[18] a higher-level programming model for OpenCL as a single-source eDSL based on pure C++17, to improve programming productivity.
People interested in C++ kernels but not in the SYCL single-source programming style can use C++ features in compute kernel sources written in the "C++ for OpenCL" language.[19] OpenCL defines a four-level memory hierarchy for the compute device: global, read-only (constant), local, and private memory.[13] Not every device needs to implement each level of this hierarchy in hardware. Consistency between the various levels in the hierarchy is relaxed, and only enforced by explicit synchronization constructs, notably barriers. Devices may or may not share memory with the host CPU.[13] The host API provides handles on device memory buffers and functions to transfer data back and forth between host and devices. The programming language that is used to write compute kernels is called the kernel language. OpenCL adopts C/C++-based languages to specify the kernel computations performed on the device, with some restrictions and additions to facilitate efficient mapping to the heterogeneous hardware resources of accelerators. Traditionally OpenCL C was used to program the accelerators in the OpenCL standard; later, the C++ for OpenCL kernel language was developed, which inherited all functionality from OpenCL C but allowed the use of C++ features in kernel sources. OpenCL C[20] is a C99-based language dialect adapted to fit the device model in OpenCL. Memory buffers reside in specific levels of the memory hierarchy, and pointers are annotated with the region qualifiers __global, __local, __constant, and __private, reflecting this. Instead of a device program having a main function, OpenCL C functions are marked __kernel to signal that they are entry points into the program to be called from the host program. Function pointers, bit fields and variable-length arrays are omitted, and recursion is forbidden.[21] The C standard library is replaced by a custom set of standard functions, geared toward math programming. OpenCL C is extended to facilitate use of parallelism with vector types and operations, synchronization, and functions to work with work-items and work-groups.[21] In particular, besides scalar types such as float and double, which behave similarly to the corresponding types in C, OpenCL provides fixed-length vector types such as float4 (a 4-vector of single-precision floats); such vector types are available in lengths two, three, four, eight and sixteen for various base types.[20]: § 6.1.2 Vectorized operations on these types are intended to map onto SIMD instruction sets, e.g., SSE or VMX, when running OpenCL programs on CPUs.[13] Other specialized types include 2-d and 3-d image types.[20]: 10–11 The following describes a matrix–vector multiplication algorithm in OpenCL C (a minimal sketch of such a kernel is given just after this section). The kernel function matvec computes, in each invocation, the dot product of a single row of a matrix $A$ and a vector $x$: $$y_i = a_{i,:} \cdot x = \sum_j a_{i,j} x_j.$$ To extend this into a full matrix–vector multiplication, the OpenCL runtime maps the kernel over the rows of the matrix. On the host side, the clEnqueueNDRangeKernel function does this; it takes as arguments the kernel to execute, its arguments, and a number of work-items, corresponding to the number of rows in the matrix $A$. A second example loads a fast Fourier transform (FFT) implementation and executes it.[22] The code asks the OpenCL library for the first available graphics card, creates memory buffers for reading and writing (from the perspective of the graphics card), JIT-compiles the FFT kernel and then finally runs the kernel asynchronously. The result from the transform is not read in this example.
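A minimal OpenCL C version of the matvec kernel described above might look as follows. This is a sketch: the row-major layout of A (with ncols columns) and the argument order are assumptions of the example, while get_global_id and the address-space qualifiers are standard OpenCL C.

__kernel void matvec(__global const float *A, __global const float *x,
                     uint ncols, __global float *y)
{
    size_t i = get_global_id(0);            /* global id, used as the row index */
    __global const float *a = &A[i * ncols];/* pointer to the i'th row of A */
    float sum = 0.0f;                       /* accumulator for the dot product */
    for (size_t j = 0; j < ncols; j++)
        sum += a[j] * x[j];
    y[i] = sum;
}

On the host side, this kernel would be launched with one work-item per matrix row, e.g. via clEnqueueNDRangeKernel with a global work size equal to the number of rows.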
The actual calculation is contained in the file "fft1D_1024_kernel_src.cl" (based on "Fitting FFT onto the G80 Architecture").[23] A full, open source implementation of an OpenCL FFT can be found on Apple's website.[24] In 2020, Khronos announced[25] the transition to the community-driven C++ for OpenCL programming language,[26] which provides features from C++17 in combination with the traditional OpenCL C features. This language makes it possible to leverage a rich variety of language features from standard C++ while preserving backward compatibility with OpenCL C. This opens up a smooth transition path to C++ functionality for OpenCL kernel code developers, as they can continue using a familiar programming flow and even tools, as well as leverage existing extensions and libraries available for OpenCL C. The language semantics are described in the documentation published in the releases of the OpenCL-Docs[27] repository hosted by the Khronos Group, but the language is currently not ratified by the Khronos Group. The C++ for OpenCL language is not documented in a stand-alone document; it is based on the specifications of C++ and OpenCL C. The open source Clang compiler has supported C++ for OpenCL since release 9.[28] C++ for OpenCL was originally developed as a Clang compiler extension and appeared in release 9.[29] As it was tightly coupled with OpenCL C and did not contain any Clang-specific functionality, its documentation has been re-hosted to the OpenCL-Docs repository[27] from the Khronos Group, along with the sources of other specifications and reference cards. The first official release of this document, describing C++ for OpenCL version 1.0, was published in December 2020.[30] C++ for OpenCL 1.0 contains features from C++17 and is backward compatible with OpenCL C 2.0. In December 2021, a new provisional C++ for OpenCL version 2021 was released, which is fully compatible with the OpenCL 3.0 standard.[31] A work-in-progress draft of the latest C++ for OpenCL documentation can be found on the Khronos website.[32] C++ for OpenCL supports most of the features (syntactically and semantically) of OpenCL C, except for nested parallelism and blocks.[33] However, there are minor differences in some supported features, mainly related to differences in semantics between C++ and C. For example, C++ is stricter with implicit type conversions, and it does not support the restrict type qualifier.[33] The following C++ features are not supported by C++ for OpenCL: virtual functions, the dynamic_cast operator, non-placement new/delete operators, exceptions, pointers to member functions, references to functions, and the C++ standard libraries.[33] C++ for OpenCL extends the concept of separate memory regions (address spaces) from OpenCL C to C++ features: functional casts, templates, class members, references, lambda functions, and operators. Most C++ features are not available for the kernel functions themselves, e.g. overloading or templating, or arbitrary class layout in parameter types.[33] A short kernel using complex-number arithmetic, sketched just after this section, illustrates how such functionality can be implemented in C++ for OpenCL with convenient use of C++ features. C++ for OpenCL can be used for the same applications or libraries, and in the same way, as the OpenCL C language. Due to the rich variety of C++ language features, applications written in C++ for OpenCL can express complex functionality more conveniently than applications written in OpenCL C; in particular, the generic programming paradigm from C++ is very attractive to library developers.
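The following is a hypothetical sketch of the kind of complex-number kernel the text refers to; the Complex type and the kernel name cmul are invented here for illustration and are not from any official example. It relies on operator overloading, a C++ feature unavailable in OpenCL C.

struct Complex {
    float re, im;
    Complex operator*(const Complex &o) const {   /* complex multiplication */
        return {re * o.re - im * o.im, re * o.im + im * o.re};
    }
};

__kernel void cmul(__global const Complex *a, __global const Complex *b,
                   __global Complex *out)
{
    size_t i = get_global_id(0);
    Complex x = a[i], y = b[i];   /* copy operands to private memory */
    out[i] = x * y;               /* overloaded operator doing the arithmetic */
}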
C++ for OpenCL sources can be compiled by OpenCL drivers that support the cl_ext_cxx_for_opencl extension.[34] Arm announced support for this extension in December 2020.[35] However, due to the increasing complexity of the algorithms accelerated on OpenCL devices, it is expected that more applications will compile C++ for OpenCL kernels offline, using stand-alone compilers such as Clang,[36] into an executable binary format or a portable binary format, e.g. SPIR-V.[37] Such an executable can be loaded during OpenCL application execution using a dedicated OpenCL API.[38] Binaries compiled from sources in C++ for OpenCL 1.0 can be executed on OpenCL 2.0 conformant devices. Depending on the language features used in such kernel sources, they can also be executed on devices supporting earlier OpenCL versions or OpenCL 3.0. Aside from OpenCL drivers, kernels written in C++ for OpenCL can be compiled for execution on Vulkan devices using the clspv[39] compiler and the clvk[40] runtime layer, in just the same way as OpenCL C kernels. C++ for OpenCL is an open language developed by the community of contributors listed in its documentation.[32] New contributions to the language semantic definition or open source tooling support are accepted from anyone interested, provided they are aligned with the main design philosophy and are reviewed and approved by the experienced contributors.[19] OpenCL was initially developed by Apple Inc., which holds trademark rights, and refined into an initial proposal in collaboration with technical teams at AMD, IBM, Qualcomm, Intel, and Nvidia. Apple submitted this initial proposal to the Khronos Group. On June 16, 2008, the Khronos Compute Working Group was formed[41] with representatives from CPU, GPU, embedded-processor, and software companies. This group worked for five months to finish the technical details of the specification for OpenCL 1.0 by November 18, 2008.[42] This technical specification was reviewed by the Khronos members and approved for public release on December 8, 2008.[43] OpenCL 1.0 was released with Mac OS X Snow Leopard on August 28, 2009. According to an Apple press release:[44] Snow Leopard further extends support for modern hardware with Open Computing Language (OpenCL), which lets any application tap into the vast gigaflops of GPU computing power previously available only to graphics applications. OpenCL is based on the C programming language and has been proposed as an open standard.
AMD decided to support OpenCL instead of the now deprecated Close to Metal in its Stream framework.[45][46] RapidMind announced their adoption of OpenCL underneath their development platform to support GPUs from multiple vendors with one interface.[47] On December 9, 2008, Nvidia announced its intention to add full support for the OpenCL 1.0 specification to its GPU Computing Toolkit.[48] On October 30, 2009, IBM released its first OpenCL implementation as a part of the XL compilers.[49] Accelerations of calculations by up to a factor of 1000 over a conventional CPU are possible with OpenCL on graphics cards.[citation needed] Some important features of later OpenCL versions, such as double- or half-precision operations, are optional in 1.0.[50] OpenCL 1.1 was ratified by the Khronos Group on June 14, 2010,[51] and adds significant functionality for enhanced parallel programming flexibility, functionality, and performance. On November 15, 2011, the Khronos Group announced the OpenCL 1.2 specification,[52] which added significant functionality over the previous versions in terms of performance and features for parallel programming. On November 18, 2013, the Khronos Group announced the ratification and public release of the finalized OpenCL 2.0 specification.[54] The ratification and release of the OpenCL 2.1 provisional specification was announced on March 3, 2015, at the Game Developer Conference in San Francisco; it was released on November 16, 2015.[55] It introduced the OpenCL C++ kernel language, based on a subset of C++14, while maintaining support for the preexisting OpenCL C kernel language. Vulkan and OpenCL 2.1 share SPIR-V as an intermediate representation, allowing high-level language front-ends to share a common compilation target. AMD, ARM, Intel, HPC, and YetiWare have declared support for OpenCL 2.1.[56][57] OpenCL 2.2 brings the OpenCL C++ kernel language into the core specification for significantly enhanced parallel programming productivity.[58][59][60] It was released on May 16, 2017.[61] A maintenance update was released in May 2018 with bug fixes.[62] The OpenCL 3.0 specification was released on September 30, 2020, after being in preview since April 2020. OpenCL 1.2 functionality has become a mandatory baseline, while all OpenCL 2.x and OpenCL 3.0 features were made optional. The specification retains the OpenCL C language and deprecates the OpenCL C++ kernel language, replacing it with the C++ for OpenCL language,[19] based on a Clang/LLVM compiler which implements a subset of C++17 and SPIR-V intermediate code.[63][64][65] Version 3.0.7 of C++ for OpenCL, with some Khronos OpenCL extensions, was presented at IWOCL 21.[66] The current revision is 3.0.11, with some new extensions and corrections. NVIDIA, working closely with the Khronos OpenCL Working Group, improved Vulkan interop with semaphores and memory sharing.[67] The most recent minor update is 3.0.14, with bug fixes and a new extension for multiple devices.[68] When releasing OpenCL 2.2, the Khronos Group announced that OpenCL would converge where possible with Vulkan to enable OpenCL software deployment flexibility over both APIs.[69][70] This has now been demonstrated by Adobe's Premiere Rush, which uses the clspv[39] open source compiler to compile significant amounts of OpenCL C kernel code to run on a Vulkan runtime for deployment on Android.[71] OpenCL has a forward-looking roadmap independent of Vulkan, with 'OpenCL Next' under development and targeting release in 2020.
OpenCL Next may integrate extensions such as Vulkan / OpenCL interop, scratch-pad memory management, extended subgroups, SPIR-V 1.4 ingestion and SPIR-V extended debug info. OpenCL is also considering a Vulkan-like loader and layers, and a "flexible profile" for deployment flexibility on multiple accelerator types.[72] OpenCL consists of a set of headers and a shared object that is loaded at runtime. An installable client driver (ICD) must be installed on the platform for every class of vendor that the runtime needs to support. That is, for example, in order to support Nvidia devices on a Linux platform, the Nvidia ICD would need to be installed such that the OpenCL runtime (the ICD loader) would be able to locate the ICD for the vendor and redirect the calls appropriately. The standard OpenCL header is used by the consumer application; calls to each function are then proxied by the OpenCL runtime to the appropriate driver using the ICD. Each vendor must implement each OpenCL call in their driver.[73] The Apple,[74] Nvidia,[75] ROCm, RapidMind[76] and Gallium3D[77] implementations of OpenCL are all based on the LLVM compiler technology and use the Clang compiler as their frontend. As of 2016, OpenCL runs on graphics processing units (GPUs), CPUs with SIMD instructions, FPGAs, Movidius Myriad 2, Adapteva Epiphany and DSPs. To be officially conformant, an implementation must pass the Khronos Conformance Test Suite (CTS), with results being submitted to the Khronos Adopters Program.[174] The Khronos CTS code for all OpenCL versions has been available in open source since 2017.[175] The Khronos Group maintains an extended list of OpenCL-conformant products.[4] All standard-conformant implementations can be queried using one of the clinfo tools (there are multiple tools with the same name and similar feature set).[186][187][188] A key feature of OpenCL is portability, via its abstracted memory and execution model: the programmer is not able to directly use hardware-specific technologies such as inline Parallel Thread Execution (PTX) for Nvidia GPUs unless they are willing to give up direct portability on other platforms. It is possible to run any OpenCL kernel on any conformant implementation. However, performance of the kernel is not necessarily portable across platforms. Existing implementations have been shown to be competitive when kernel code is properly tuned, though, and auto-tuning has been suggested as a solution to the performance portability problem,[194] yielding "acceptable levels of performance" in experimental linear algebra kernels.[195] Portability of an entire application containing multiple kernels with differing behaviors was also studied, and showed that portability only required limited tradeoffs.[196] A study at Delft University from 2011, which compared CUDA programs and their straightforward translation into OpenCL C, found CUDA to outperform OpenCL by at most 30% on the Nvidia implementation. The researchers noted that their comparison could be made fairer by applying manual optimizations to the OpenCL programs, in which case there was "no reason for OpenCL to obtain worse performance than CUDA".
The performance differences could mostly be attributed to differences in the programming model (especially the memory model) and to NVIDIA's compiler optimizations for CUDA compared to those for OpenCL.[194] Another study, at D-Wave Systems Inc., found that "The OpenCL kernel's performance is between about 13% and 63% slower, and the end-to-end time is between about 16% and 67% slower" than CUDA's performance.[197] The fact that OpenCL allows workloads to be shared by CPU and GPU, executing the same programs, means that programmers can exploit both by dividing work among the devices.[198] This leads to the problem of deciding how to partition the work, because the relative speeds of operations differ among the devices. Machine learning has been suggested to solve this problem: Grewe and O'Boyle describe a system of support-vector machines, trained on compile-time features of the program, that can decide the device partitioning problem statically, without actually running the programs to measure their performance.[199] A comparison of current graphics cards from the AMD RDNA 2 and Nvidia RTX series showed inconclusive results in OpenCL tests. Possible performance increases from the use of Nvidia CUDA or OptiX were not tested.[200]
https://en.wikipedia.org/wiki/OpenCL
Kismet is a network detector, packet sniffer, and intrusion detection system for 802.11 wireless LANs. Kismet will work with any wireless card which supports raw monitoring mode, and can sniff 802.11a, 802.11b, 802.11g, and 802.11n traffic. The program runs under Linux, FreeBSD, NetBSD, OpenBSD, and macOS. The client can also run on Microsoft Windows, although, aside from external drones (see below), only one wireless card model is supported as a packet source. Distributed under the GNU General Public License,[2] Kismet is free software. Kismet differs from other wireless network detectors in that it works passively. Namely, without sending any loggable packets, it is able to detect the presence of both wireless access points and wireless clients, and to associate them with each other. It is also the most widely used and up-to-date open source wireless monitoring tool.[citation needed] Kismet also includes basic wireless IDS features, such as detecting active wireless sniffing programs including NetStumbler, as well as a number of wireless network attacks. Kismet features the ability to log all sniffed packets and save them in a tcpdump/Wireshark- or Airsnort-compatible file format. Kismet can also capture "Per-Packet Information" headers. Kismet also features the ability to detect default or "not configured" networks, probe requests, and to determine what level of wireless encryption is used on a given access point. In order to find as many networks as possible, Kismet supports channel hopping. This means that it constantly changes from channel to channel non-sequentially, in a user-defined sequence with a default value that leaves big holes between channels (for example, 1-6-11-2-7-12-3-8-13-4-9-14-5-10). The advantage of this method is that it will capture more packets, because adjacent channels overlap. Kismet also supports logging of the geographical coordinates of the network if input from a GPS receiver is additionally available. Kismet has three separate parts. A drone can be used to collect packets and then pass them on to a server for interpretation. A server can either be used in conjunction with a drone or on its own, interpreting packet data, extrapolating wireless information, and organizing it. The client communicates with the server and displays the information the server collects. With the update of Kismet to -ng, Kismet now supports a wide variety of scanning plugins, including DECT, Bluetooth, and others. Kismet is used in a number of commercial and open source projects. It is distributed with Kali Linux.[3] It is used for wireless reconnaissance,[4] and can be used with other packages for an inexpensive wireless intrusion detection system.[5] It has been used in a number of peer-reviewed studies, such as "Detecting Rogue Access Points using Kismet".[6]
https://en.wikipedia.org/wiki/Kismet_(software)
Cognitive biases are systematic patterns of deviation from norm and/or rationality in judgment.[1][2] They are often studied in psychology, sociology and behavioral economics.[1] Although the reality of most of these biases is confirmed by reproducible research,[3][4] there are often controversies about how to classify these biases or how to explain them.[5] Several theoretical causes are known for some cognitive biases, which provides a classification of biases by their common generative mechanism (such as noisy information-processing[6]). Gerd Gigerenzer has criticized the framing of cognitive biases as errors in judgment, and favors interpreting them as arising from rational deviations from logical thought.[7] Explanations include information-processing rules (i.e., mental shortcuts), called heuristics, that the brain uses to produce decisions or judgments. Biases have a variety of forms and appear as cognitive ("cold") bias, such as mental noise,[6] or motivational ("hot") bias, such as when beliefs are distorted by wishful thinking. Both effects can be present at the same time.[8][9] There are also controversies over some of these biases as to whether they count as useless or irrational, or whether they result in useful attitudes or behavior. For example, when getting to know others, people tend to ask leading questions which seem biased towards confirming their assumptions about the person. However, this kind of confirmation bias has also been argued to be an example of social skill; a way to establish a connection with the other person.[10] Although this research overwhelmingly involves human subjects, some studies have found bias in non-human animals as well. For example, loss aversion has been shown in monkeys, and hyperbolic discounting has been observed in rats, pigeons, and monkeys.[11] These biases affect belief formation, reasoning processes, business and economic decisions, and human behavior in general. The anchoring bias, or focalism, is the tendency to rely too heavily ("anchor") on one trait or piece of information when making decisions (usually the first piece of information acquired on that subject), and it takes several related forms.[12][13] Apophenia is the tendency to perceive meaningful connections between unrelated things, and likewise has several subtypes.[18] The availability heuristic (also known as the availability bias) is the tendency to overestimate the likelihood of events with greater "availability" in memory, which can be influenced by how recent the memories are or how unusual or emotionally charged they may be.[22] Cognitive dissonance is the perception of contradictory information and the mental toll it takes. Confirmation bias is the tendency to search for, interpret, focus on and remember information in a way that confirms one's preconceptions; multiple other cognitive biases involve or are types of confirmation bias.[35] Egocentric bias is the tendency to rely too heavily on one's own perspective and/or to have a different perception of oneself relative to others, and it appears in several forms.[38] Extension neglect occurs where the quantity of the sample size is not sufficiently taken into consideration when assessing the outcome, relevance or judgement. False priors are initial beliefs and knowledge which interfere with the unbiased evaluation of factual evidence and lead to incorrect conclusions.
Several biases are based on false priors. The framing effect is the tendency to draw different conclusions from the same information, depending on how that information is presented; it takes a number of forms, and several related effects derive from prospect theory. Other grouped biases include association fallacies, attribution biases, biases involving conformity, and ingroup bias, the tendency for people to give preferential treatment to others they perceive to be members of their own groups. In psychology and cognitive science, a memory bias is a cognitive bias that either enhances or impairs the recall of a memory (either the chances that the memory will be recalled at all, or the amount of time it takes for it to be recalled, or both), or that alters the content of a reported memory. There are many types of memory bias, among them the misattributions.
https://en.wikipedia.org/wiki/List_of_memory_biases
This is a list of English determiners. All cardinal numerals are also included.[1]: 385 Any genitive noun phrase, such as the cat's, the cats', or Geoff's, likewise functions as a determiner.
https://en.wikipedia.org/wiki/List_of_English_determiners
Blockmodeling is a set of methods, or a coherent framework, used for analyzing social structure and for setting up procedures for partitioning (clustering) a social network's units (nodes, vertices, actors), based on specific patterns which form a distinctive structure through interconnectivity.[1][2] It is primarily used in statistics, machine learning and network science. As an empirical procedure, blockmodeling assumes that all the units in a specific network can be grouped together to the extent to which they are equivalent. The equivalence can be structural, regular or generalized.[3] Using blockmodeling, a network can be analyzed via newly created blockmodels, which transform a large and complex network into a smaller and more comprehensible one. At the same time, blockmodeling is used to operationalize social roles. While some contend that blockmodeling is simply a set of clustering methods, Bonacich and McConaghy state that "it is a theoretically grounded and algebraic approach to the analysis of the structure of relations". Blockmodeling's unique ability lies in the fact that it considers the structure not just as a set of direct relations, but also takes into account all other possible compound relations that are based on the direct ones.[4] The principles of blockmodeling were first introduced by Francois Lorrain and Harrison C. White in 1971.[2] Blockmodeling is considered "an important set of network analytic tools", as it deals with the delineation of role structures (the well-defined places in social structures, also known as positions) and with discerning the fundamental structure of social networks.[5]: 2, 3 According to Batagelj, the primary "goal of blockmodeling is to reduce a large, potentially incoherent network to a smaller comprehensible structure that can be interpreted more readily".[6] Blockmodeling was at first used for analysis in sociometry and psychometrics, but has now spread to other sciences as well.[7] A network as a system is composed of (or defined by) two different sets: one set of units (nodes, vertices, actors) and one set of links between the units. Using both sets, it is possible to create a graph describing the structure of the network.[8] During blockmodeling, the researcher is faced with two problems: how to partition the units (e.g., how to determine the clusters (or classes) that then form the vertices in a blockmodel), and then how to determine the links in the blockmodel (and, at the same time, the values of these links).[9] In the social sciences, the networks are usually social networks, composed of several individuals (units) and selected social relationships among them (links). Real-world networks can be large and complex; blockmodeling is used to simplify them into smaller structures that can be easier to interpret. Specifically, blockmodeling partitions the units into clusters and then determines the ties among the clusters. At the same time, blockmodeling can be used to explain the social roles existing in the network, as it is assumed that the created cluster of units mimics (or is closely associated with) the units' social roles.[8] Blockmodeling can thus be defined as a set of approaches for partitioning units into clusters (also known as positions) and links into blocks, which are further defined by the newly obtained clusters.
A block (also blockmodel) is defined as a submatrix that shows the interconnectivity (links) between nodes present in the same or different clusters.[8] Each of the positions in a cluster is defined by a set of (in)direct ties to and from other social positions.[10] These links (connections) can be directed or undirected; there can be multiple links between the same pair of objects, or the links can have weights on them. If there are no multiple links in a network, it is called a simple network.[11]: 8 A matrix representation of a graph is composed of ordered units, in rows and columns, based on their names. Ordered units with similar patterns of links are partitioned together into the same clusters. The clusters are then arranged so that units from the same clusters are placed next to each other, thus preserving interconnectivity. In the next step, the units (from the same clusters) are transformed into a blockmodel. With this, several blockmodels are usually formed, one being the core cluster and the others being cohesive ones; a core cluster is always connected to cohesive ones, while cohesive ones cannot be linked together. Clustering of nodes is based on equivalence, such as structural and regular equivalence.[8] The primary objective of the matrix form is to visually present the relations between the persons included in the cluster. These ties are coded dichotomously (as present or absent), and the rows in the matrix form indicate the source of the ties, while the columns represent the destination of the ties.[10] Equivalence can follow two basic approaches: the equivalent units have the same connection pattern to the same neighbors, or these units have the same or similar connection pattern to different neighbors. If the units are connected to the rest of the network in identical ways, they are structurally equivalent.[3] Units can also be regularly equivalent, when they are equivalently connected to equivalent others.[2] With blockmodeling, it is necessary to consider the issue of results being affected by measurement errors in the initial stage of acquiring the data.[12] Depending on what kind of network is undergoing blockmodeling, a different approach is necessary. Networks can be one-mode or two-mode: in the former, all units can be connected to any other unit and the units are of the same type, while in the latter the units are connected only to unit(s) of a different type.[5]: 6–10 Regarding relationships between units, networks can be single-relational or multi-relational. Furthermore, networks can be temporal or multilevel, and also binary (only 0 and 1) or signed (allowing negative ties)/valued (other values are possible).
Different approaches to blockmodeling can be grouped into two main classes: deterministic blockmodeling and stochastic blockmodeling approaches. Deterministic blockmodeling is then further divided into direct and indirect blockmodeling approaches.[8] Direct blockmodeling approaches are based on structural equivalence and regular equivalence.[2] Structural equivalence is a state in which units are connected to the rest of the network in identical ways, while regular equivalence occurs when units are equally related to equivalent others (the units do not necessarily share neighbors, but have neighbors that are themselves similar).[3][5]: 24 In indirect blockmodeling approaches, partitioning is dealt with as a traditional cluster-analysis problem: measuring (dis)similarity results in a (dis)similarity matrix.[8][2] According to Brusco and Steinley (2011),[14] blockmodeling can be categorized along a number of dimensions.[15] Blockmodels (sometimes also block models) are structures in which the units are partitioned into clusters and the links between them are grouped into corresponding blocks. Computer programs can partition the social network according to pre-set conditions.[17]: 333 When empirical blocks can be reasonably approximated in terms of ideal blocks, such blockmodels can be reduced to a blockimage, which is a representation of the original network, capturing its underlying 'functional anatomy'.[18] Thus, blockmodels can "permit the data to characterize their own structure", and at the same time do not seek to manifest a preconceived structure imposed by the researcher.[19] Blockmodels can be created indirectly or directly, based on the construction of the criterion function. Indirect construction refers to a function based on a "compatible (dis)similarity measure between pairs of units", while direct construction is "a function measuring the fit of real blocks induced by a given clustering to the corresponding ideal blocks with perfect relations within each cluster and between clusters according to the considered types of connections (equivalence)".[20] Blockmodels can also be specified with regard to intuition, substance, or insight into the nature of the studied network.[5]: 16–24 Blockmodeling is done with specialized computer programs dedicated to the analysis of networks, or to blockmodeling in particular; a small numeric illustration of the underlying block idea follows this section.
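As a toy illustration of how a partition of units induces blocks, the following C sketch computes the density of ties in each block of a small binary network and thresholds it into an image matrix. The adjacency matrix, the partition, and the 0.5 density threshold are all invented for the example; real blockmodeling software is considerably more sophisticated.

#include <stdio.h>

#define N 4        /* units */
#define K 2        /* clusters */

int main(void)
{
    int adj[N][N] = {                 /* 1 = tie from row unit to column unit */
        {0, 1, 1, 1},
        {1, 0, 1, 0},
        {0, 0, 0, 1},
        {0, 0, 1, 0},
    };
    int cluster[N] = {0, 0, 1, 1};    /* partition of the units into clusters */
    int ties[K][K] = {0}, pairs[K][K] = {0};

    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            if (i == j) continue;     /* ignore self-ties */
            ties[cluster[i]][cluster[j]] += adj[i][j];
            pairs[cluster[i]][cluster[j]]++;
        }

    /* image matrix: a block counts as present if its density exceeds 0.5 */
    for (int r = 0; r < K; r++) {
        for (int c = 0; c < K; c++) {
            double density = (double)ties[r][c] / pairs[r][c];
            printf("%d ", density > 0.5 ? 1 : 0);
        }
        printf("\n");
    }
    return 0;
}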
https://en.wikipedia.org/wiki/Blockmodeling
In computing, a crash, or system crash, occurs when a computer program such as a software application or an operating system stops functioning properly and exits. On some operating systems or individual applications, a crash reporting service will report the crash and any details relating to it (or give the user the option to do so), usually to the developer(s) of the application. If the program is a critical part of the operating system, the entire system may crash or hang, often resulting in a kernel panic or fatal system error. Most crashes are the result of a software bug. Typical causes include accessing invalid memory addresses,[a] incorrect address values in the program counter, buffer overflow, overwriting a portion of the affected program code due to an earlier bug, executing invalid machine instructions (an illegal or unauthorized opcode), or triggering an unhandled exception. The original software bug that started this chain of events is typically considered to be the cause of the crash, which is discovered through the process of debugging. The original bug can be far removed from the code that actually triggered the crash. In early personal computers, attempting to write data to hardware addresses outside the system's main memory could cause hardware damage. Some crashes are exploitable and let a malicious program or hacker execute arbitrary code, allowing the replication of viruses or the acquisition of data which would normally be inaccessible. An application typically crashes when it performs an operation that is not allowed by the operating system. The operating system then triggers an exception or signal in the application. Unix applications traditionally responded to the signal by dumping core. Most Windows and Unix GUI applications respond by displaying a dialogue box with the option to attach a debugger if one is installed. Some applications attempt to recover from the error and continue running instead of exiting. An application can also contain code to crash[b] after detecting a severe error. Several typical kinds of error result in application crashes. A "crash to desktop" (CTD) is said to occur when a program (commonly a video game) unexpectedly quits, abruptly taking the user back to the desktop. Usually, the term is applied only to crashes where no error is displayed, hence all the user sees as a result of the crash is the desktop. Many times there is no apparent action that causes a crash to desktop. During normal function, the program may freeze for a shorter period of time, and then close by itself. Also during normal function, the program may become a black screen and repeatedly play the last few seconds of sound (depending on the size of the audio buffer) that was being played before it crashed to desktop. Other times it may appear to be triggered by a certain action, such as loading an area. CTD bugs are considered particularly problematic for users. Since they frequently display no error message, it can be very difficult to track down the source of the problem, especially if the times they occur and the actions taking place right before the crash do not appear to have any pattern or common ground. One way to track down the source of the problem for games is to run them in windowed mode. Certain operating system versions may feature one or more tools to help track down causes of CTD problems.
Some computer programs such asStepManiaand BBC'sBamzookialso crash to desktop if in full-screen, but display the error in a separate window when the user has returned to the desktop. The software running theweb serverbehind a website may crash, rendering it inaccessible entirely or providing only an error message instead of normal content. For example, if a site is using an SQL database (such asMySQL) for a script (such asPHP) and that SQL database server crashes, thenPHPwill display a connection error. An operating system crash commonly occurs when ahardware exceptionoccurs that cannot behandled. Operating system crashes can also occur when internalsanity-checkinglogic within the operating system detects that the operating system has lost its internal self-consistency. Modern multi-tasking operating systems, such asLinux, andmacOS, usually remain unharmed when an application program crashes. Some operating systems, e.g.,z/OS, have facilities forReliability, availability and serviceability(RAS) and the OS can recover from the crash of a critical component, whether due to hardware failure, e.g., uncorrectable ECC error, or to software failure, e.g., a reference to an unassigned page. An Abnormal end or ABEND is an abnormal termination ofsoftware, or a program crash. Errors or crashes on theNovellNetWare network operating system are usually called ABENDs. Communities ofNetWareadministrators sprang up around the Internet, such asabend.org. This usage derives from theABENDmacro on IBMOS/360, ...,z/OSoperating systems. Usually capitalized, but may appear as "abend". Some common ABEND codes are System ABEND 0C7 (data exception) and System ABEND 0CB (division by zero).[1][2][3]Abends can be "soft" (allowing automatic recovery) or "hard" (terminating the activity).[4]The term is jocularly claimed to be derived from the German word "Abend" meaning "evening".[5] Depending on the application, the crash may contain the user's sensitive andprivate information.[6]Moreover, many software bugs which cause crashes are alsoexploitableforarbitrary code executionand other types ofprivilege escalation.[7][8]For example, astack buffer overflowcan overwrite the return address of a subroutine with an invalid value, which will cause, e.g., asegmentation fault, when the subroutine returns. However, if an exploit overwrites the return address with a valid value, the code in that address will be executed. When crashes are collected in the field using acrash reporter, the next step for developers is to be able to reproduce them locally. For this, several techniques exist: STAR uses symbolic execution,[9]EvoCrash performs evolutionary search.[10]
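For crashes that surface as unhandled exceptions rather than fatal signals, a field crash reporter can be as simple as an exception hook that records the details before the process exits. The sketch below is illustrative only (the file name and report fields are invented, not a real reporting service), using Python's documented sys.excepthook mechanism:

```python
import json
import sys
import time
import traceback

def report_crash(exc_type, exc_value, exc_tb):
    """Write a minimal crash report, then fall back to the default handler."""
    report = {
        "time": time.time(),
        "type": exc_type.__name__,
        "message": str(exc_value),
        "traceback": traceback.format_exception(exc_type, exc_value, exc_tb),
    }
    with open("crash_report.json", "w") as fh:   # hypothetical destination
        json.dump(report, fh, indent=2)
    sys.__excepthook__(exc_type, exc_value, exc_tb)

sys.excepthook = report_crash

# Any uncaught exception from here on is captured before the process exits:
1 / 0
```

A report captured this way is the starting point for the reproduction techniques mentioned above.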
https://en.wikipedia.org/wiki/Crash_(computing)
In mathematics, a diffeology on a set generalizes the concept of a smooth atlas of a differentiable manifold, by declaring only what constitutes the "smooth parametrizations" into the set. A diffeological space is a set equipped with a diffeology. Many of the standard tools of differential geometry extend to diffeological spaces, which beyond manifolds include arbitrary quotients of manifolds, arbitrary subsets of manifolds, and spaces of mappings between manifolds.

The differential calculus on $\mathbb{R}^n$, or, more generally, on finite-dimensional vector spaces, is one of the most impactful successes of modern mathematics. Fundamental to its basic definitions and theorems is the linear structure of the underlying space.[1][2] The field of differential geometry establishes and studies the extension of the classical differential calculus to non-linear spaces. This extension is made possible by the definition of a smooth manifold, which is also the starting point for diffeological spaces. A smooth $n$-dimensional manifold is a set $M$ equipped with a maximal smooth atlas, which consists of injective functions, called charts, of the form $\phi : U \to M$, where $U$ is an open subset of $\mathbb{R}^n$, satisfying some mutual-compatibility relations. The charts of a manifold perform two distinct functions, which are often syncretized:[3][4][5] A diffeology generalizes the structure of a smooth manifold by abandoning the first requirement for an atlas, namely that the charts give a local model of the space, while retaining the ability to discuss smooth maps into the space.[6][7][8]

A diffeological space is a set $X$ equipped with a diffeology: a collection of maps $\{p : U \to X \mid U \text{ is an open subset of } \mathbb{R}^n, \text{ and } n \geq 0\}$, whose members are called plots, that satisfies some axioms. The plots are not required to be injective, and can (indeed, must) have as domains the open subsets of arbitrary Euclidean spaces. A smooth manifold can be viewed as a diffeological space which is locally diffeomorphic to $\mathbb{R}^n$. In general, while not giving local models for the space, the axioms of a diffeology still ensure that the plots induce a coherent notion of smooth functions, smooth curves, smooth homotopies, etc. Diffeology is therefore suitable to treat objects more general than manifolds.[6][7][8]

Let $M$ and $N$ be smooth manifolds. A smooth homotopy of maps $M \to N$ is a smooth map $H : \mathbb{R} \times M \to N$. For each $t \in \mathbb{R}$, the map $H_t := H(t, \cdot) : M \to N$ is smooth, and the intuition behind a smooth homotopy is that it is a smooth curve into the space of smooth functions $\mathcal{C}^\infty(M,N)$ connecting, say, $H_0$ and $H_1$. But $\mathcal{C}^\infty(M,N)$ is not a finite-dimensional smooth manifold, so formally we cannot yet speak of smooth curves into it. On the other hand, the collection of maps $\{p : U \to \mathcal{C}^\infty(M,N) \mid \text{the map } U \times M \to N,\ (r,x) \mapsto p(r)(x) \text{ is smooth}\}$ is a diffeology on $\mathcal{C}^\infty(M,N)$. With this structure, the smooth curves (a notion which is now rigorously defined) correspond precisely to the smooth homotopies.[6][7][8]

The concept of diffeology was first introduced by Jean-Marie Souriau in the 1980s under the name espace différentiel.[9][10] Souriau's motivating application for diffeology was to uniformly handle the infinite-dimensional groups arising from his work in geometric quantization. Thus the notion of diffeological group preceded the more general concept of a diffeological space. Souriau's diffeological program was taken up by his students, particularly Paul Donato[11] and Patrick Iglesias-Zemmour,[12] who completed early pioneering work in the field. A structure similar to diffeology was introduced by Kuo-Tsaï Chen (陳國才, Chen Guocai) in the 1970s, in order to formalize certain computations with path integrals. Chen's definition used convex sets instead of open sets for the domains of the plots.[13] The similarity between diffeological and "Chen" structures can be made precise by viewing both as concrete sheaves over the appropriate concrete site.[14]

A diffeology on a set $X$ consists of a collection of maps, called plots or parametrizations, from open subsets of $\mathbb{R}^n$ (for all $n \geq 0$) to $X$ such that the following axioms hold (the standard axioms of the literature): every constant map $U \to X$ is a plot (covering); if every point of $U$ has an open neighbourhood $V \subseteq U$ such that the restriction $p|_V$ is a plot, then $p : U \to X$ is a plot (locality); and if $p : U \to X$ is a plot and $F : V \to U$ is a smooth map between open subsets of Euclidean spaces, then $p \circ F$ is a plot (smooth compatibility). Note that the domains of different plots can be subsets of $\mathbb{R}^n$ for different values of $n$; in particular, any diffeology contains the elements of its underlying set as the plots with $n = 0$. A set together with a diffeology is called a diffeological space. More abstractly, a diffeological space is a concrete sheaf on the site of open subsets of $\mathbb{R}^n$, for all $n \geq 0$, and open covers.[14]

A map between diffeological spaces is called smooth if and only if its composite with any plot of the first space is a plot of the second space. It is called a diffeomorphism if it is smooth, bijective, and its inverse is also smooth. Equipping the open subsets of Euclidean spaces with their standard diffeology (as defined in the next section), the plots $U \to X$ into a diffeological space $X$ are precisely the smooth maps from $U$ to $X$. Diffeological spaces constitute the objects of a category, denoted by $\mathsf{Dflg}$, whose morphisms are smooth maps. The category $\mathsf{Dflg}$ is closed under many categorical operations: for instance, it is Cartesian closed, complete and cocomplete, and more generally it is a quasitopos.[14]

Any diffeological space is a topological space when equipped with the D-topology:[12] the final topology such that all plots are continuous (with respect to the Euclidean topology on $\mathbb{R}^n$). In other words, a subset $U \subset X$ is open if and only if $p^{-1}(U)$ is open for any plot $p$ on $X$. Actually, the D-topology is completely determined by smooth curves, i.e. a subset $U \subset X$ is open if and only if $c^{-1}(U)$ is open for any smooth map $c : \mathbb{R} \to X$.[15] The D-topology is automatically locally path-connected.[16] A smooth map between diffeological spaces is automatically continuous between their D-topologies.[6] Therefore we have the functor $D : \mathsf{Dflg} \to \mathsf{Top}$, from the category of diffeological spaces to the category of topological spaces, which assigns to a diffeological space its D-topology. This functor realizes $\mathsf{Dflg}$ as a concrete category over $\mathsf{Top}$.

A Cartan–de Rham calculus can be developed in the framework of diffeologies, as well as a suitable adaptation of the notions of fiber bundles, homotopy, etc.[6] However, there is not a canonical definition of tangent spaces and tangent bundles for diffeological spaces.[17]

Any set carries at least two diffeologies: the coarse (or indiscrete) diffeology, in which every parametrization is a plot, and the discrete (or fine) diffeology, in which the plots are the locally constant parametrizations. Any topological space can be endowed with the continuous diffeology, whose plots are the continuous maps. The Euclidean space $\mathbb{R}^n$ admits several diffeologies beyond those listed above.

Diffeological spaces generalize manifolds, but they are far from the only mathematical objects to do so. For instance manifolds with corners, orbifolds, and infinite-dimensional Fréchet manifolds are all well-established alternatives. This subsection makes precise the extent to which these spaces are diffeological. We view $\mathsf{Dflg}$ as a concrete category over the category of topological spaces $\mathsf{Top}$ via the D-topology functor $D : \mathsf{Dflg} \to \mathsf{Top}$. If $U : \mathsf{C} \to \mathsf{Top}$ is another concrete category over $\mathsf{Top}$, we say that a functor $E : \mathsf{C} \to \mathsf{Dflg}$ is an embedding (of concrete categories) if it is injective on objects and faithful, and $D \circ E = U$. To specify an embedding, we need only describe it on objects; it is necessarily the identity map on arrows. We will say that a diffeological space $X$ is locally modeled by a collection of diffeological spaces $\mathcal{E}$ if around every point $x \in X$, there is a D-open neighbourhood $U$, a D-open subset $V$ of some $E \in \mathcal{E}$, and a diffeological diffeomorphism $U \to V$.[6][19]

The category of finite-dimensional smooth manifolds (allowing those with connected components of different dimensions) fully embeds into $\mathsf{Dflg}$. The embedding $y$ assigns to a smooth manifold $M$ the canonical diffeology $\{p : U \to M \mid p \text{ is smooth in the usual sense}\}$. In particular, a diffeologically smooth map between manifolds is smooth in the usual sense, and the D-topology of $y(M)$ is the original topology of $M$. The essential image of this embedding consists of those diffeological spaces that are locally modeled by the collection $\{y(\mathbb{R}^n)\}$, and whose D-topology is Hausdorff and second-countable.[6]

The category of finite-dimensional smooth manifolds with boundary (allowing those with connected components of different dimensions) similarly fully embeds into $\mathsf{Dflg}$. The embedding is defined identically to the smooth case, except "smooth in the usual sense" refers to the standard definition of smooth maps between manifolds with boundary. The essential image of this embedding consists of those diffeological spaces that are locally modeled by the collection $\{y(O) \mid O \text{ is a half-space}\}$, and whose D-topology is Hausdorff and second-countable. The same can be done in more generality for manifolds with corners, using the collection $\{y(O) \mid O \text{ is an orthant}\}$.[20]

The category of Fréchet manifolds similarly fully embeds into $\mathsf{Dflg}$. Once again, the embedding is defined identically to the smooth case, except "smooth in the usual sense" refers to the standard definition of smooth maps between Fréchet spaces. The essential image of this embedding consists of those diffeological spaces that are locally modeled by the collection $\{y(E) \mid E \text{ is a Fréchet space}\}$, and whose D-topology is Hausdorff. The embedding restricts to one of the category of Banach manifolds. Historically, the case of Banach manifolds was proved first, by Hain,[21] and the case of Fréchet manifolds was treated later, by Losik.[22][23] The category of manifolds modeled on convenient vector spaces also similarly embeds into $\mathsf{Dflg}$.[24][25]

A (classical) orbifold $X$ is a space that is locally modeled by quotients of the form $\mathbb{R}^n/\Gamma$, where $\Gamma$ is a finite subgroup of linear transformations. On the other hand, each model $\mathbb{R}^n/\Gamma$ is naturally a diffeological space (with the quotient diffeology discussed below), and therefore the orbifold charts generate a diffeology on $X$. This diffeology is uniquely determined by the orbifold structure of $X$. Conversely, a diffeological space that is locally modeled by the collection $\{\mathbb{R}^n/\Gamma\}$ (and with Hausdorff D-topology) carries a classical orbifold structure that induces the original diffeology, wherein the local diffeomorphisms are the orbifold charts. Such a space is called a diffeological orbifold.[26] Whereas diffeological orbifolds automatically have a notion of smooth map between them (namely diffeologically smooth maps in $\mathsf{Dflg}$), the notion of a smooth map between classical orbifolds is not standardized. If orbifolds are viewed as differentiable stacks presented by étale proper Lie groupoids, then there is a functor from the underlying 1-category of orbifolds, and equivalent maps-of-stacks between them, to $\mathsf{Dflg}$. Its essential image consists of diffeological orbifolds, but the functor is neither faithful nor full.[27]

If a set $X$ is given two different diffeologies, their intersection is a diffeology on $X$, called the intersection diffeology, which is finer than both starting diffeologies. The D-topology of the intersection diffeology is finer than the intersection of the D-topologies of the original diffeologies.

If $X$ and $Y$ are diffeological spaces, then the product diffeology on the Cartesian product $X \times Y$ is the diffeology generated by all products of plots of $X$ and of $Y$. Precisely, a map $p : U \to X \times Y$ necessarily has the form $p(u) = (x(u), y(u))$ for maps $x : U \to X$ and $y : U \to Y$. The map $p$ is a plot in the product diffeology if and only if $x$ and $y$ are plots of $X$ and $Y$, respectively. This generalizes to products of arbitrary collections of spaces. The D-topology of $X \times Y$ is the coarsest delta-generated topology containing the product topology of the D-topologies of $X$ and $Y$; it is equal to the product topology when $X$ or $Y$ is locally compact, but may be finer in general.[15]

Given a map $f : X \to Y$ from a set $X$ to a diffeological space $Y$, the pullback diffeology on $X$ consists of those maps $p : U \to X$ such that the composition $f \circ p$ is a plot of $Y$. In other words, the pullback diffeology is the smallest diffeology on $X$ making $f$ smooth. If $X$ is a subset of the diffeological space $Y$, then the subspace diffeology on $X$ is the pullback diffeology induced by the inclusion $X \hookrightarrow Y$. In this case, the D-topology of $X$ is equal to the subspace topology of the D-topology of $Y$ if $X$ is open, but may be finer in general.

Given a map $f : X \to Y$ from a diffeological space $X$ to a set $Y$, the pushforward diffeology on $Y$ is the diffeology generated by the compositions $f \circ p$, for plots $p : U \to X$ of $X$. In other words, the pushforward diffeology is the smallest diffeology on $Y$ making $f$ smooth. If $X$ is a diffeological space and $\sim$ is an equivalence relation on $X$, then the quotient diffeology on the quotient set $X/{\sim}$ is the pushforward diffeology induced by the quotient map $X \to X/{\sim}$. The D-topology on $X/{\sim}$ is the quotient topology of the D-topology of $X$. Note that this topology may be trivial without the diffeology being trivial. Quotients often give rise to non-manifold diffeologies. For example, the set of real numbers $\mathbb{R}$ is a smooth manifold. The quotient $\mathbb{R}/(\mathbb{Z} + \alpha\mathbb{Z})$, for some irrational $\alpha$, called the irrational torus, is a diffeological space diffeomorphic to the quotient of the regular 2-torus $\mathbb{R}^2/\mathbb{Z}^2$ by a line of slope $\alpha$. It has a non-trivial diffeology, although its D-topology is the trivial topology.[28]

The functional diffeology on the set $\mathcal{C}^\infty(X,Y)$ of smooth maps between two diffeological spaces $X$ and $Y$ is the diffeology whose plots are the maps $\phi : U \to \mathcal{C}^\infty(X,Y)$ such that $U \times X \to Y,\ (u,x) \mapsto \phi(u)(x)$ is smooth with respect to the product diffeology of $U \times X$. When $X$ and $Y$ are manifolds, the D-topology of $\mathcal{C}^\infty(X,Y)$ is the smallest locally path-connected topology containing the Whitney $C^\infty$ topology.[15]

Taking the subspace diffeology of a functional diffeology, one can define diffeologies on the space of sections of a fibre bundle, or the space of bisections of a Lie groupoid, etc. If $M$ is a compact smooth manifold, and $F \to M$ is a smooth fiber bundle over $M$, then the space of smooth sections $\Gamma(F)$ of the bundle is frequently equipped with the structure of a Fréchet manifold.[29] Upon embedding this Fréchet manifold into the category of diffeological spaces, the resulting diffeology coincides with the subspace diffeology that $\Gamma(F)$ inherits from the functional diffeology on $\mathcal{C}^\infty(M,F)$.[30]

Analogous to the notions of submersions and immersions between manifolds, there are two special classes of morphisms between diffeological spaces. A subduction is a surjective function $f : X \to Y$ between diffeological spaces such that the diffeology of $Y$ is the pushforward of the diffeology of $X$. Similarly, an induction is an injective function $f : X \to Y$ between diffeological spaces such that the diffeology of $X$ is the pullback of the diffeology of $Y$. Subductions and inductions are automatically smooth. It is instructive to consider the case where $X$ and $Y$ are smooth manifolds, for example the figure-eight immersion
$$f : \left(-\tfrac{\pi}{2}, \tfrac{3\pi}{2}\right) \to \mathbb{R}^2, \quad f(t) := (2\cos(t), \sin(2t)),$$
or the cusp
$$f : \mathbb{R} \to \mathbb{R}^2, \quad f(t) := (t^2, t^3).$$
In the category of diffeological spaces, subductions are precisely the strong epimorphisms, and inductions are precisely the strong monomorphisms.[18] A map that is both a subduction and induction is a diffeomorphism.
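To see how the functional diffeology resolves the motivating homotopy example above, the defining condition can be unwound for curves; a sketch of the standard argument, with $\hat{c}$ denoting the adjoint map:

```latex
% A curve into C^infty(M,N) is smooth for the functional diffeology
% precisely when its adjoint map is smooth:
c : \mathbb{R} \to \mathcal{C}^{\infty}(M,N) \ \text{is smooth}
\quad\Longleftrightarrow\quad
\hat{c} : \mathbb{R} \times M \to N, \qquad \hat{c}(t,x) := c(t)(x), \ \text{is smooth.}
```

Setting $H := \hat{c}$ and $H_t := c(t)$ thus identifies the smooth curves in $\mathcal{C}^\infty(M,N)$ with the smooth homotopies of maps $M \to N$.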
https://en.wikipedia.org/wiki/Diffeology
In computer science, the readers–writers problems are examples of a common computing problem in concurrency.[1] There are at least three variations of the problems, which deal with situations in which many concurrent threads of execution try to access the same shared resource at one time. Some threads may read and some may write, with the constraint that no thread may access the shared resource for either reading or writing while another thread is in the act of writing to it. (In particular, we want to prevent more than one thread modifying the shared resource simultaneously and allow for two or more readers to access the shared resource at the same time.) A readers–writer lock is a data structure that solves one or more of the readers–writers problems. The basic reader–writers problem was first formulated and solved by Courtois et al.[2][3] Suppose we have a shared memory area (critical section) with the basic constraints detailed above. It is possible to protect the shared data behind a mutual exclusion mutex, in which case no two threads can access the data at the same time. However, this solution is sub-optimal, because it is possible that a reader R1 might have the lock, and then another reader R2 requests access. It would be foolish for R2 to wait until R1 was done before starting its own read operation; instead, R2 should be allowed to read the resource alongside R1 because reads don't modify data, so concurrent reads are safe. This is the motivation for the first readers–writers problem, in which the constraint is added that no reader shall be kept waiting if the share is currently opened for reading. This is also called readers-preference, with its solution: In this solution of the readers/writers problem, the first reader must lock the resource (shared file) if such is available. Once the file is locked from writers, it may be used by many subsequent readers without them having to re-lock it. Before entering the critical section, every new reader must go through the entry section. However, there may only be a single reader in the entry section at a time. This is done to avoid race conditions on the readers (in this context, a race condition is a condition in which two or more threads wake up simultaneously and try to enter the critical section; without further constraint, the behavior is nondeterministic; e.g. two readers increment the readcount at the same time, and both try to lock the resource, causing one reader to block). To accomplish this, every reader which enters the <ENTRY Section> will lock the <ENTRY Section> for itself until it is done with it. At this point the readers are not locking the resource. They are only locking the entry section so no other reader can enter it while they are in it. Once the reader is done executing the entry section, it will unlock it by signaling the mutex. Signaling it is equivalent to mutex.V() in the above code. The same holds for the <EXIT Section>: there can be no more than a single reader in the exit section at a time, so every reader must claim and lock the exit section for itself before using it. Once the first reader is in the entry section, it will lock the resource. Doing this will prevent any writers from accessing it. Subsequent readers can just utilize the locked (from writers) resource. The reader to finish last (indicated by the readcount variable) must unlock the resource, thus making it available to writers. In this solution, every writer must claim the resource individually.
This means that a stream of readers can subsequently lock all potential writers out and starve them. This is because, after the first reader locks the resource, no writer can lock it before it is released, and it will only be released by the last reader. Hence, this solution does not satisfy fairness. The first solution is suboptimal, because it is possible that a reader R1 might have the lock, a writer W be waiting for the lock, and then a reader R2 requests access. It would be unfair for R2 to jump in immediately, ahead of W; if that happened often enough, W would starve. Instead, W should start as soon as possible. This is the motivation for the second readers–writers problem, in which the constraint is added that no writer, once added to the queue, shall be kept waiting longer than absolutely necessary. This is also called writers-preference. A solution to the writers-preference scenario is:[2] In this solution, preference is given to the writers. This is accomplished by forcing every reader to lock and release the readtry semaphore individually. The writers, on the other hand, don't need to lock it individually. Only the first writer will lock the readtry, and then all subsequent writers can simply use the resource as it gets freed by the previous writer. The very last writer must release the readtry semaphore, thus opening the gate for readers to try reading. No reader can engage in the entry section if the readtry semaphore has been set by a writer previously. The reader must wait for the last writer to unlock the resource and readtry semaphores. On the other hand, if a particular reader has locked the readtry semaphore, this will indicate to any potential concurrent writer that there is a reader in the entry section. So the writer will wait for the reader to release readtry, and then the writer will immediately lock it for itself and all subsequent writers. However, the writer will not be able to access the resource until the current reader has released the resource, which only occurs after the reader is finished with the resource in the critical section. The resource semaphore can be locked by both the writer and the reader in their entry section. They are only able to do so after first locking the readtry semaphore, which can only be done by one of them at a time. The writer will then take control of the resource as soon as the current reader is done reading, and lock all future readers out. All subsequent readers will hang up at the readtry semaphore, waiting for the writers to be finished with the resource and to open the gate by releasing readtry. The rmutex and wmutex are used in exactly the same way as in the first solution. Their sole purpose is to avoid race conditions on the readers and writers while they are in their entry or exit sections. In fact, the solutions implied by both problem statements can result in starvation: the first one may starve writers in the queue, and the second one may starve readers. Therefore, the third readers–writers problem is sometimes proposed, which adds the constraint that no thread shall be allowed to starve; that is, the operation of obtaining a lock on the shared data will always terminate in a bounded amount of time. A solution with fairness for both readers and writers might be as follows: This solution can satisfy the condition that "no thread shall be allowed to starve" only if semaphores preserve first-in first-out ordering when blocking and releasing threads.
Otherwise, a blocked writer, for example, may remain blocked indefinitely while a cycle of other writers decrement the semaphore before it can. The simplest reader–writer problem uses only two semaphores and doesn't need an array of readers to read the data in the buffer. Note that this solution is simpler than the general case because it is made equivalent to the bounded buffer problem, and therefore only N readers are allowed to enter in parallel, N being the size of the buffer. The initial values of the read and write semaphores are 0 and N, respectively. In the writer, the value of the write semaphore is given to the read semaphore, and in the reader, the value of read is given to write on completion of the loop.
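The readers-preference solution described earlier translates almost line-for-line into code. The following is a minimal sketch using counting semaphores (Python's threading primitives, with names matching the description: resource, rmutex and readcount), not a tuned production lock:

```python
import threading

resource = threading.Semaphore(1)   # guards the shared data; held by a writer
                                    # or by the group of current readers
rmutex = threading.Semaphore(1)     # serializes the readers' entry/exit sections
readcount = 0                       # number of readers currently reading

def writer(write):
    resource.acquire()              # each writer claims the resource individually
    try:
        write()
    finally:
        resource.release()

def reader(read):
    global readcount
    rmutex.acquire()                # <ENTRY section>: one reader at a time
    readcount += 1
    if readcount == 1:
        resource.acquire()          # first reader locks writers out
    rmutex.release()

    read()                          # many readers may run here concurrently

    rmutex.acquire()                # <EXIT section>
    readcount -= 1
    if readcount == 0:
        resource.release()          # last reader lets writers back in
    rmutex.release()
```

As the text explains, nothing here bounds how long a writer waits, which is exactly why the second and third formulations exist.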
https://en.wikipedia.org/wiki/Readers-writers_problem
In machine learning, semantic analysis of a text corpus is the task of building structures that approximate concepts from a large set of documents. It generally does not involve prior semantic understanding of the documents. Semantic analysis strategies include:
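One widely used strategy of this kind is latent semantic analysis. A minimal sketch (toy corpus and dimensions invented for illustration) builds a term-document matrix and truncates its singular value decomposition, so that documents sharing vocabulary land near each other in a low-dimensional "concept" space:

```python
import numpy as np

docs = ["cats and dogs", "dogs chase cats", "stocks and bonds", "bonds yield returns"]
vocab = sorted({w for d in docs for w in d.split()})

# term-document count matrix: rows are words, columns are documents
X = np.array([[d.split().count(w) for d in docs] for w in vocab], dtype=float)

# truncated SVD: keep k latent "concepts"
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
doc_coords = (np.diag(s[:k]) @ Vt[:k]).T    # documents in concept space

# documents about the same topic come out close together
print(vocab)
print(np.round(doc_coords, 2))
```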
https://en.wikipedia.org/wiki/Semantic_analysis_(machine_learning)
Connectionist expert systems are artificial neural network (ANN) based expert systems where the ANN generates inferencing rules, e.g., fuzzy multilayer perceptrons where linguistic and natural forms of input are used. In addition, rough set theory may be used to encode knowledge in the weights more effectively, and genetic algorithms may be used to better optimize the search for solutions. Symbolic reasoning methods may also be incorporated (see hybrid intelligent system). (Also see expert system, neural network, clinical decision support system.)
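As a toy sketch of the fuzzy multilayer-perceptron idea (the membership functions, weights and rule read-out below are all invented for illustration, not a published system), a linguistic input is first fuzzified and then passed through a small network whose output is read as the confidence of an inference rule:

```python
import numpy as np

def fuzzify_temperature(celsius: float) -> np.ndarray:
    """Linguistic input -> membership degrees for (cold, warm, hot)."""
    cold = max(0.0, min(1.0, (15 - celsius) / 10))
    hot = max(0.0, min(1.0, (celsius - 25) / 10))
    warm = max(0.0, 1.0 - cold - hot)
    return np.array([cold, warm, hot])

# one hidden layer; weights would normally be learned, here they are hand-set
W1 = np.array([[1.0, 0.2, -1.0],
               [-1.0, 0.2, 1.0]])
W2 = np.array([0.3, 1.5])

def infer(celsius: float) -> float:
    """Confidence of the hypothetical rule 'turn on the fan'."""
    h = np.tanh(W1 @ fuzzify_temperature(celsius))
    return float(1 / (1 + np.exp(-(W2 @ h))))   # sigmoid confidence in [0, 1]

for t in (5, 20, 35):
    print(t, "->", round(infer(t), 2))
```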
https://en.wikipedia.org/wiki/Connectionist_expert_system
Reaction–diffusion systems are mathematical models that correspond to several physical phenomena. The most common is the change in space and time of the concentration of one or more chemical substances: local chemical reactions in which the substances are transformed into each other, and diffusion which causes the substances to spread out over a surface in space. Reaction–diffusion systems are naturally applied in chemistry. However, the system can also describe dynamical processes of non-chemical nature. Examples are found in biology, geology, physics (neutron diffusion theory) and ecology. Mathematically, reaction–diffusion systems take the form of semi-linear parabolic partial differential equations. They can be represented in the general form
$$\partial_t \boldsymbol{q} = \boldsymbol{D}\,\nabla^2 \boldsymbol{q} + \boldsymbol{R}(\boldsymbol{q}),$$
where $\boldsymbol{q}(x,t)$ represents the unknown vector function, $\boldsymbol{D}$ is a diagonal matrix of diffusion coefficients, and $\boldsymbol{R}$ accounts for all local reactions. The solutions of reaction–diffusion equations display a wide range of behaviours, including the formation of travelling waves and wave-like phenomena as well as other self-organized patterns like stripes, hexagons or more intricate structures like dissipative solitons. Such patterns have been dubbed "Turing patterns".[1] Each function for which a reaction–diffusion differential equation holds represents, in fact, a concentration variable.

The simplest reaction–diffusion equation, in one spatial dimension in plane geometry,
$$\partial_t u = D\,\partial_x^2 u + R(u),$$
is also referred to as the Kolmogorov–Petrovsky–Piskunov equation.[2] If the reaction term vanishes, then the equation represents a pure diffusion process. The corresponding equation is Fick's second law. The choice $R(u) = u(1-u)$ yields Fisher's equation that was originally used to describe the spreading of biological populations,[3] the Newell–Whitehead–Segel equation with $R(u) = u(1-u^2)$ to describe Rayleigh–Bénard convection,[4][5] the more general Zeldovich–Frank-Kamenetskii equation with $R(u) = u(1-u)e^{-\beta(1-u)}$ and $0 < \beta < \infty$ (Zeldovich number) that arises in combustion theory,[6] and its particular degenerate case with $R(u) = u^2 - u^3$ that is sometimes referred to as the Zeldovich equation as well.[7]

The dynamics of one-component systems is subject to certain restrictions, as the evolution equation can also be written in the variational form
$$\partial_t u = -\frac{\delta \mathfrak{L}}{\delta u},$$
and therefore describes a permanent decrease of the "free energy" $\mathfrak{L}$ given by the functional
$$\mathfrak{L} = \int_{-\infty}^{\infty} \left[ \tfrac{D}{2} \left(\partial_x u\right)^2 - V(u) \right] \mathrm{d}x,$$
with a potential $V(u)$ such that $R(u) = \frac{\mathrm{d}V(u)}{\mathrm{d}u}$.

In systems with more than one stationary homogeneous solution, a typical solution is given by travelling fronts connecting the homogeneous states. These solutions move with constant speed without changing their shape and are of the form $u(x,t) = \hat{u}(\xi)$ with $\xi = x - ct$, where $c$ is the speed of the travelling wave. Note that while travelling waves are generically stable structures, all non-monotonous stationary solutions (e.g. localized domains composed of a front–antifront pair) are unstable. For $c = 0$, there is a simple proof for this statement:[8] if $u_0(x)$ is a stationary solution and $u = u_0(x) + \tilde{u}(x,t)$ is an infinitesimally perturbed solution, linear stability analysis yields the equation
$$\partial_t \tilde{u} = D\,\partial_x^2 \tilde{u} + R'(u_0(x))\,\tilde{u}.$$
With the ansatz $\tilde{u} = \psi(x)\exp(-\lambda t)$ we arrive at the eigenvalue problem of Schrödinger type
$$\lambda \psi = -D\,\partial_x^2 \psi - R'(u_0(x))\,\psi,$$
where negative eigenvalues result in the instability of the solution. Due to translational invariance, $\psi = \partial_x u_0(x)$ is a neutral eigenfunction with the eigenvalue $\lambda = 0$, and all other eigenfunctions can be sorted according to an increasing number of nodes, with the magnitude of the corresponding real eigenvalue increasing monotonically with the number of zeros. The eigenfunction $\psi = \partial_x u_0(x)$ should have at least one zero, and for a non-monotonic stationary solution the corresponding eigenvalue $\lambda = 0$ cannot be the lowest one, thereby implying instability.

To determine the velocity $c$ of a moving front, one may go to a moving coordinate system and look at stationary solutions:
$$D\,\partial_\xi^2 \hat{u}(\xi) + c\,\partial_\xi \hat{u}(\xi) + R(\hat{u}(\xi)) = 0.$$
This equation has a nice mechanical analogue as the motion of a mass $D$ with position $\hat{u}$ in the course of the "time" $\xi$ under the force $R$ with the damping coefficient $c$, which allows for a rather illustrative access to the construction of different types of solutions and the determination of $c$.

When going from one to more space dimensions, a number of statements from one-dimensional systems can still be applied. Planar or curved wave fronts are typical structures, and a new effect arises as the local velocity of a curved front becomes dependent on the local radius of curvature (this can be seen by going to polar coordinates). This phenomenon leads to the so-called curvature-driven instability.[9]

Two-component systems allow for a much larger range of possible phenomena than their one-component counterparts. An important idea that was first proposed by Alan Turing is that a state that is stable in the local system can become unstable in the presence of diffusion.[10] A linear stability analysis however shows that, when linearizing the general two-component system about a stationary homogeneous solution, a plane-wave perturbation $\tilde{\boldsymbol{q}}(x,t) = \tilde{\boldsymbol{q}}_0\, e^{ikx + \lambda t}$ will satisfy
$$\lambda\,\tilde{\boldsymbol{q}}_0 = \left(\boldsymbol{R}' - k^2 \boldsymbol{D}\right)\tilde{\boldsymbol{q}}_0.$$
Turing's idea can only be realized in four equivalence classes of systems characterized by the signs of the Jacobian $\boldsymbol{R}'$ of the reaction function. In particular, if a finite wave vector $k$ is supposed to be the most unstable one, the Jacobian must have the signs
$$\begin{pmatrix} + & - \\ + & - \end{pmatrix}, \quad \begin{pmatrix} + & + \\ - & - \end{pmatrix}, \quad \begin{pmatrix} - & + \\ - & + \end{pmatrix}, \quad \begin{pmatrix} - & - \\ + & + \end{pmatrix}.$$
This class of systems is named activator–inhibitor system after its first representative: close to the ground state, one component stimulates the production of both components while the other one inhibits their growth. Its most prominent representative is the FitzHugh–Nagumo equation
$$\partial_t u = d_u^2\,\nabla^2 u + f(u) - \sigma v, \qquad \tau\,\partial_t v = d_v^2\,\nabla^2 v + u - v,$$
with $f(u) = \lambda u - u^3 - \kappa$, which describes how an action potential travels through a nerve.[11][12] Here, $d_u$, $d_v$, $\tau$, $\sigma$ and $\lambda$ are positive constants.

When an activator–inhibitor system undergoes a change of parameters, one may pass from conditions under which a homogeneous ground state is stable to conditions under which it is linearly unstable. The corresponding bifurcation may be either a Hopf bifurcation to a globally oscillating homogeneous state with a dominant wave number $k = 0$ or a Turing bifurcation to a globally patterned state with a dominant finite wave number. The latter in two spatial dimensions typically leads to stripe or hexagonal patterns. For the FitzHugh–Nagumo example, the neutral stability curves marking the boundary of the linearly stable region for the Turing and Hopf bifurcations can be written in closed form. If the bifurcation is subcritical, often localized structures (dissipative solitons) can be observed in the hysteretic region where the pattern coexists with the ground state. Other frequently encountered structures comprise pulse trains (also known as periodic travelling waves), spiral waves and target patterns. These three solution types are also generic features of two- (or more-) component reaction–diffusion equations in which the local dynamics have a stable limit cycle.[13]

For a variety of systems, reaction–diffusion equations with more than two components have been proposed, e.g. the Belousov–Zhabotinsky reaction,[14] for blood clotting,[15] fission waves[16] or planar gas discharge systems.[17] It is known that systems with more components allow for a variety of phenomena not possible in systems with one or two components (e.g. stable running pulses in more than one spatial dimension without global feedback).[18] An introduction and systematic overview of the possible phenomena, in dependence on the properties of the underlying system, is given in the literature.[19]

In recent times, reaction–diffusion systems have attracted much interest as a prototype model for pattern formation.[20] The above-mentioned patterns (fronts, spirals, targets, hexagons, stripes and dissipative solitons) can be found in various types of reaction–diffusion systems in spite of large discrepancies e.g. in the local reaction terms. It has also been argued that reaction–diffusion processes are an essential basis for processes connected to morphogenesis in biology[21][22] and may even be related to animal coats and skin pigmentation.[23][24] Other applications of reaction–diffusion equations include ecological invasions,[25] spread of epidemics,[26] tumour growth,[27][28][29] dynamics of fission waves,[30] wound healing[31] and visual hallucinations.[32] Another reason for the interest in reaction–diffusion systems is that, although they are nonlinear partial differential equations, there are often possibilities for an analytical treatment.[8][9][33][34][35][20]

Well-controllable experiments in chemical reaction–diffusion systems have up to now been realized in three ways. First, gel reactors[36] or filled capillary tubes[37] may be used. Second, temperature pulses on catalytic surfaces have been investigated.[38][39] Third, the propagation of running nerve pulses is modelled using reaction–diffusion systems.[11][40] Aside from these generic examples, it has turned out that under appropriate circumstances electric transport systems like plasmas[41] or semiconductors[42] can be described in a reaction–diffusion approach. For these systems various experiments on pattern formation have been carried out.

A reaction–diffusion system can be solved by using methods of numerical mathematics. There exist several numerical treatments in the research literature.[43][20][44] Numerical solution methods for complex geometries have also been proposed.[45][46] Reaction–diffusion systems are described at the highest level of detail with particle-based simulation tools like SRSim or ReaDDy,[47] which employ, among others, reversible interacting-particle reaction dynamics.[48]
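As a concrete illustration of such numerical treatments, here is a minimal sketch (not any particular published scheme): it integrates Fisher's equation $\partial_t u = D\,\partial_x^2 u + u(1-u)$ by forward Euler in time and central differences in space, on a grid whose sizes are chosen arbitrarily, respecting the standard explicit stability bound $\Delta t \le \Delta x^2/(2D)$.

```python
import numpy as np

# Fisher's equation u_t = D u_xx + u(1 - u) on [0, L] with no-flux boundaries.
D, L, N = 1.0, 100.0, 500
dx = L / (N - 1)
dt = 0.4 * dx**2 / (2 * D)          # safely below the explicit stability bound

x = np.linspace(0.0, L, N)
u = np.where(x < 10.0, 1.0, 0.0)    # initial condition: populated region on the left

for step in range(4000):
    lap = np.empty_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    lap[0] = 2 * (u[1] - u[0]) / dx**2        # no-flux (Neumann) boundaries
    lap[-1] = 2 * (u[-2] - u[-1]) / dx**2
    u += dt * (D * lap + u * (1.0 - u))       # reaction term R(u) = u(1 - u)

# the profile approximates a travelling front; for Fisher's equation the
# minimal front speed is 2*sqrt(D) in these units
front_position = x[np.argmin(np.abs(u - 0.5))]
print(f"front near x = {front_position:.1f}")
```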
https://en.wikipedia.org/wiki/Reaction%E2%80%93diffusion_system
In mathematics, an eigenoperator, $A$, of a matrix $H$ is a linear operator such that
$$[H, A] = HA - AH = \lambda A,$$
where $\lambda$ is a corresponding scalar called an eigenvalue.[1]
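Under the commutator reading of the definition above (an assumption; sign conventions differ between sources), a quick numerical check: for a diagonal $H$, the matrix unit $E_{12}$ is an eigenoperator with eigenvalue $h_1 - h_2$.

```python
import numpy as np

H = np.diag([3.0, 1.0])          # h1 = 3, h2 = 1
A = np.zeros((2, 2))
A[0, 1] = 1.0                    # matrix unit E_12

comm = H @ A - A @ H             # the commutator [H, A]
print(comm)                      # equals (h1 - h2) * A = 2 * A
assert np.allclose(comm, (3.0 - 1.0) * A)
```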
https://en.wikipedia.org/wiki/Eigenoperator
Deep learningis a subset ofmachine learningthat focuses on utilizing multilayeredneural networksto perform tasks such asclassification,regression, andrepresentation learning. The field takes inspiration frombiological neuroscienceand is centered around stackingartificial neuronsinto layers and "training" them to process data. The adjective "deep" refers to the use of multiple layers (ranging from three to several hundred or thousands) in the network. Methods used can be eithersupervised,semi-supervisedorunsupervised.[2] Some common deep learning network architectures includefully connected networks,deep belief networks,recurrent neural networks,convolutional neural networks,generative adversarial networks,transformers, andneural radiance fields. These architectures have been applied to fields includingcomputer vision,speech recognition,natural language processing,machine translation,bioinformatics,drug design,medical image analysis,climate science, material inspection andboard gameprograms, where they have produced results comparable to and in some cases surpassing human expert performance.[3][4][5] Early forms of neural networks were inspired by information processing and distributed communication nodes inbiological systems, particularly thehuman brain. However, current neural networks do not intend to model the brain function of organisms, and are generally seen as low-quality models for that purpose.[6] Most modern deep learning models are based on multi-layeredneural networkssuch asconvolutional neural networksandtransformers, although they can also includepropositional formulasor latent variables organized layer-wise in deepgenerative modelssuch as the nodes indeep belief networksand deepBoltzmann machines.[7] Fundamentally, deep learning refers to a class ofmachine learningalgorithmsin which a hierarchy of layers is used to transform input data into a progressively more abstract and composite representation. For example, in animage recognitionmodel, the raw input may be animage(represented as atensorofpixels). The first representational layer may attempt to identify basic shapes such as lines and circles, the second layer may compose and encode arrangements of edges, the third layer may encode a nose and eyes, and the fourth layer may recognize that the image contains a face. Importantly, a deep learning process can learn which features to optimally place at which levelon its own. Prior to deep learning, machine learning techniques often involved hand-craftedfeature engineeringto transform the data into a more suitable representation for a classification algorithm to operate on. In the deep learning approach, features are not hand-crafted and the modeldiscoversuseful feature representations from the data automatically. This does not eliminate the need for hand-tuning; for example, varying numbers of layers and layer sizes can provide different degrees of abstraction.[8][2] The word "deep" in "deep learning" refers to the number of layers through which the data is transformed. More precisely, deep learning systems have a substantialcredit assignment path(CAP) depth. The CAP is the chain of transformations from input to output. CAPs describe potentially causal connections between input and output. For afeedforward neural network, the depth of the CAPs is that of the network and is the number of hidden layers plus one (as the output layer is also parameterized). 
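For a feedforward network this depth count is easy to see in code. A minimal sketch (shapes and activation chosen arbitrarily): two hidden layers plus a parameterized output layer give a CAP depth of 3.

```python
import numpy as np

rng = np.random.default_rng(0)

# two hidden layers + one output layer => CAP depth = 2 + 1 = 3
W1, b1 = rng.normal(size=(16, 8)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 16)), np.zeros(16)
W3, b3 = rng.normal(size=(4, 16)), np.zeros(4)

def forward(x: np.ndarray) -> np.ndarray:
    h1 = np.maximum(0.0, W1 @ x + b1)   # hidden layer 1: transformation 1
    h2 = np.maximum(0.0, W2 @ h1 + b2)  # hidden layer 2: transformation 2
    return W3 @ h2 + b3                 # output layer is parameterized too: 3

print(forward(np.ones(8)))
```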
For recurrent neural networks, in which a signal may propagate through a layer more than once, the CAP depth is potentially unlimited.[9] No universally agreed-upon threshold of depth divides shallow learning from deep learning, but most researchers agree that deep learning involves CAP depth higher than two. CAP of depth two has been shown to be a universal approximator in the sense that it can emulate any function.[10] Beyond that, more layers do not add to the function approximator ability of the network. Deep models (CAP > two) are able to extract better features than shallow models, and hence extra layers help in learning the features effectively. Deep learning architectures can be constructed with a greedy layer-by-layer method.[11] Deep learning helps to disentangle these abstractions and pick out which features improve performance.[8] Deep learning algorithms can be applied to unsupervised learning tasks. This is an important benefit because unlabeled data are more abundant than labeled data. Examples of deep structures that can be trained in an unsupervised manner are deep belief networks.[8][12] The term Deep Learning was introduced to the machine learning community by Rina Dechter in 1986,[13] and to artificial neural networks by Igor Aizenberg and colleagues in 2000, in the context of Boolean threshold neurons,[14][15] although the history of its appearance is apparently more complicated.[16] Deep neural networks are generally interpreted in terms of the universal approximation theorem[17][18][19][20][21] or probabilistic inference.[22][23][8][9][24] The classic universal approximation theorem concerns the capacity of feedforward neural networks with a single hidden layer of finite size to approximate continuous functions.[17][18][19][20] In 1989, the first proof was published by George Cybenko for sigmoid activation functions,[17] and it was generalised to feed-forward multi-layer architectures in 1991 by Kurt Hornik.[18] Recent work also showed that universal approximation holds for non-bounded activation functions such as Kunihiko Fukushima's rectified linear unit.[25][26] The universal approximation theorem for deep neural networks concerns the capacity of networks with bounded width where the depth is allowed to grow. Lu et al.[21] proved that if the width of a deep neural network with ReLU activation is strictly larger than the input dimension, then the network can approximate any Lebesgue integrable function; if the width is smaller than or equal to the input dimension, then a deep neural network is not a universal approximator. The probabilistic interpretation[24] derives from the field of machine learning. It features inference,[23][7][8][9][12][24] as well as the optimization concepts of training and testing, related to fitting and generalization, respectively. More specifically, the probabilistic interpretation considers the activation nonlinearity as a cumulative distribution function.[24] The probabilistic interpretation led to the introduction of dropout as regularizer in neural networks. The probabilistic interpretation was introduced by researchers including Hopfield, Widrow and Narendra and popularized in surveys such as the one by Bishop.[27] There are two types of artificial neural network (ANN): feedforward neural networks (FNN) or multilayer perceptrons (MLP), and recurrent neural networks (RNN). RNNs have cycles in their connectivity structure; FNNs don't. In the 1920s, Wilhelm Lenz and Ernst Ising created the Ising model,[28][29] which is essentially a non-learning RNN architecture consisting of neuron-like threshold elements.
In 1972, Shun'ichi Amari made this architecture adaptive.[30][31] His learning RNN was republished by John Hopfield in 1982.[32] Other early recurrent neural networks were published by Kaoru Nakano in 1971.[33][34] Already in 1948, Alan Turing produced work on "Intelligent Machinery" that was not published in his lifetime,[35] containing "ideas related to artificial evolution and learning RNNs".[31] Frank Rosenblatt (1958)[36] proposed the perceptron, an MLP with 3 layers: an input layer, a hidden layer with randomized weights that did not learn, and an output layer. He later published a 1962 book that also introduced variants and computer experiments, including a version with four-layer perceptrons "with adaptive preterminal networks" where the last two layers have learned weights (here he credits H. D. Block and B. W. Knight).[37]: section 16 The book cites an earlier network by R. D. Joseph (1960)[38] "functionally equivalent to a variation of" this four-layer system (the book mentions Joseph over 30 times), so Joseph might be considered the originator of proper adaptive multilayer perceptrons with learning hidden units; unfortunately, the learning algorithm was not a functional one, and fell into oblivion. The first working deep learning algorithm was the Group method of data handling, a method to train arbitrarily deep neural networks, published by Alexey Ivakhnenko and Lapa in 1965. They regarded it as a form of polynomial regression,[39] or a generalization of Rosenblatt's perceptron.[40] A 1971 paper described a deep network with eight layers trained by this method,[41] which is based on layer-by-layer training through regression analysis. Superfluous hidden units are pruned using a separate validation set. Since the activation functions of the nodes are Kolmogorov-Gabor polynomials, these were also the first deep networks with multiplicative units or "gates".[31] The first deep learning multilayer perceptron trained by stochastic gradient descent[42] was published in 1967 by Shun'ichi Amari.[43] In computer experiments conducted by Amari's student Saito, a five-layer MLP with two modifiable layers learned internal representations to classify non-linearly separable pattern classes.[31] Subsequent developments in hardware and hyperparameter tunings have made end-to-end stochastic gradient descent the currently dominant training technique. In 1969, Kunihiko Fukushima introduced the ReLU (rectified linear unit) activation function.[25][31] The rectifier has become the most popular activation function for deep learning.[44] Deep learning architectures for convolutional neural networks (CNNs) with convolutional layers and downsampling layers began with the Neocognitron introduced by Kunihiko Fukushima in 1979, though not trained by backpropagation.[45][46] Backpropagation is an efficient application of the chain rule derived by Gottfried Wilhelm Leibniz in 1673[47] to networks of differentiable nodes. The terminology "back-propagating errors" was actually introduced in 1962 by Rosenblatt,[37] but he did not know how to implement this, although Henry J. Kelley had a continuous precursor of backpropagation in 1960 in the context of control theory.[48] The modern form of backpropagation was first published in Seppo Linnainmaa's master thesis (1970).[49][50][31] G.M. Ostrovski et al. republished it in 1971.[51][52] Paul Werbos applied backpropagation to neural networks in 1982[53] (his 1974 PhD thesis, reprinted in a 1994 book,[54] did not yet describe the algorithm[52]). In 1986, David E. Rumelhart et al.
popularised backpropagation but did not cite the original work.[55][56] Thetime delay neural network(TDNN) was introduced in 1987 byAlex Waibelto apply CNN to phoneme recognition. It used convolutions, weight sharing, and backpropagation.[57][58]In 1988, Wei Zhang applied a backpropagation-trained CNN to alphabet recognition.[59]In 1989,Yann LeCunet al. created a CNN calledLeNetforrecognizing handwritten ZIP codeson mail. Training required 3 days.[60]In 1990, Wei Zhang implemented a CNN onoptical computinghardware.[61]In 1991, a CNN was applied to medical image object segmentation[62]and breast cancer detection in mammograms.[63]LeNet-5 (1998), a 7-level CNN byYann LeCunet al., that classifies digits, was applied by several banks to recognize hand-written numbers on checks digitized in 32x32 pixel images.[64] Recurrent neural networks(RNN)[28][30]were further developed in the 1980s. Recurrence is used for sequence processing, and when a recurrent network is unrolled, it mathematically resembles a deep feedforward layer. Consequently, they have similar properties and issues, and their developments had mutual influences. In RNN, two early influential works were theJordan network(1986)[65]and theElman network(1990),[66]which applied RNN to study problems incognitive psychology. In the 1980s, backpropagation did not work well for deep learning with long credit assignment paths. To overcome this problem, in 1991,Jürgen Schmidhuberproposed a hierarchy of RNNs pre-trained one level at a time byself-supervised learningwhere each RNN tries to predict its own next input, which is the next unexpected input of the RNN below.[67][68]This "neural history compressor" usespredictive codingto learninternal representationsat multiple self-organizing time scales. This can substantially facilitate downstream deep learning. The RNN hierarchy can becollapsedinto a single RNN, bydistillinga higher levelchunkernetwork into a lower levelautomatizernetwork.[67][68][31]In 1993, a neural history compressor solved a "Very Deep Learning" task that required more than 1000 subsequentlayersin an RNN unfolded in time.[69]The "P" inChatGPTrefers to such pre-training. Sepp Hochreiter's diploma thesis (1991)[70]implemented the neural history compressor,[67]and identified and analyzed thevanishing gradient problem.[70][71]Hochreiter proposed recurrentresidualconnections to solve the vanishing gradient problem. This led to thelong short-term memory(LSTM), published in 1995.[72]LSTM can learn "very deep learning" tasks[9]with long credit assignment paths that require memories of events that happened thousands of discrete time steps before. That LSTM was not yet the modern architecture, which required a "forget gate", introduced in 1999,[73]which became the standard RNN architecture. In 1991,Jürgen Schmidhuberalso published adversarial neural networks that contest with each other in the form of azero-sum game, where one network's gain is the other network's loss.[74][75]The first network is agenerative modelthat models aprobability distributionover output patterns. The second network learns bygradient descentto predict the reactions of the environment to these patterns. This was called "artificial curiosity". 
In 2014, this principle was used in generative adversarial networks (GANs).[76] During 1985–1995, inspired by statistical mechanics, several architectures and methods were developed by Terry Sejnowski, Peter Dayan, Geoffrey Hinton, etc., including the Boltzmann machine,[77] restricted Boltzmann machine,[78] Helmholtz machine,[79] and the wake-sleep algorithm.[80] These were designed for unsupervised learning of deep generative models. However, they were more computationally expensive than backpropagation. The Boltzmann machine learning algorithm, published in 1985, was briefly popular before being eclipsed by the backpropagation algorithm in 1986 (p. 112[81]). A 1988 network became state of the art in protein structure prediction, an early application of deep learning to bioinformatics.[82] Both shallow and deep learning (e.g., recurrent nets) of ANNs for speech recognition have been explored for many years.[83][84][85] These methods never outperformed non-uniform internal-handcrafting Gaussian mixture model/Hidden Markov model (GMM-HMM) technology based on generative models of speech trained discriminatively.[86] Key difficulties have been analyzed, including gradient diminishing[70] and weak temporal correlation structure in neural predictive models.[87][88] Additional difficulties were the lack of training data and limited computing power. Most speech recognition researchers moved away from neural nets to pursue generative modeling. An exception was at SRI International in the late 1990s. Funded by the US government's NSA and DARPA, SRI researched speech and speaker recognition. The speaker recognition team led by Larry Heck reported significant success with deep neural networks in speech processing in the 1998 NIST Speaker Recognition benchmark.[89][90] It was deployed in the Nuance Verifier, representing the first major industrial application of deep learning.[91] The principle of elevating "raw" features over hand-crafted optimization was first explored successfully in the architecture of deep autoencoders on the "raw" spectrogram or linear filter-bank features in the late 1990s,[90] showing its superiority over the Mel-Cepstral features that contain stages of fixed transformation from spectrograms. The raw features of speech, waveforms, later produced excellent larger-scale results.[92] Neural networks entered a lull, and simpler models that use task-specific handcrafted features such as Gabor filters and support vector machines (SVMs) became the preferred choices in the 1990s and 2000s, because of artificial neural networks' computational cost and a lack of understanding of how the brain wires its biological networks.[citation needed] In 2003, LSTM became competitive with traditional speech recognizers on certain tasks.[93] In 2006, Alex Graves, Santiago Fernández, Faustino Gomez, and Schmidhuber combined it with connectionist temporal classification (CTC)[94] in stacks of LSTMs.[95] In 2009, it became the first RNN to win a pattern recognition contest, in connected handwriting recognition.[96][9] In 2006, publications by Geoff Hinton, Ruslan Salakhutdinov, Osindero and Teh[97][98] introduced deep belief networks, developed for generative modeling.
They are trained by training one restricted Boltzmann machine, freezing it, training another on top of the first, and so on, then optionally fine-tuning the stack with supervised backpropagation.[99] They could model high-dimensional probability distributions, such as the distribution of MNIST images, but convergence was slow.[100][101][102]

The impact of deep learning in industry began in the early 2000s, when CNNs already processed an estimated 10% to 20% of all the checks written in the US, according to Yann LeCun.[103] Industrial applications of deep learning to large-scale speech recognition started around 2010. The 2009 NIPS Workshop on Deep Learning for Speech Recognition was motivated by the limitations of deep generative models of speech, and by the possibility that, given more capable hardware and large-scale data sets, deep neural nets might become practical. It was believed that pre-training DNNs using generative models of deep belief nets (DBN) would overcome the main difficulties of neural nets. However, it was discovered that replacing pre-training with large amounts of training data for straightforward backpropagation, when using DNNs with large, context-dependent output layers, produced error rates dramatically lower than the then-state-of-the-art Gaussian mixture model (GMM)/hidden Markov model (HMM) and also than more advanced generative model-based systems.[104] The nature of the recognition errors produced by the two types of systems was characteristically different,[105] offering technical insights into how to integrate deep learning into the existing, highly efficient run-time speech decoding system deployed by all major speech recognition systems.[23][106][107] Analysis around 2009–2010, contrasting the GMM (and other generative speech models) with DNN models, stimulated early industrial investment in deep learning for speech recognition.[105] That analysis was done with comparable performance (less than 1.5% in error rate) between discriminative DNNs and generative models.[104][105][108] In 2010, researchers extended deep learning from TIMIT to large-vocabulary speech recognition by adopting large output layers of the DNN based on context-dependent HMM states constructed by decision trees.[109][110][111][106]

The deep learning revolution started around CNN- and GPU-based computer vision. Although CNNs trained by backpropagation had been around for decades, and GPU implementations of NNs for years,[112] including CNNs,[113] faster implementations of CNNs on GPUs were needed to progress on computer vision. Later, as deep learning became widespread, specialized hardware and algorithm optimizations were developed specifically for deep learning.[114]

A key advance for the deep learning revolution was hardware, especially GPUs. Some early work dated back to 2004.[112][113] In 2009, Raina, Madhavan, and Andrew Ng reported a 100-million-parameter deep belief network trained on 30 Nvidia GeForce GTX 280 GPUs, an early demonstration of GPU-based deep learning.
They reported up to 70 times faster training.[115]

In 2011, a CNN named DanNet[116][117] by Dan Ciresan, Ueli Meier, Jonathan Masci, Luca Maria Gambardella, and Jürgen Schmidhuber achieved for the first time superhuman performance in a visual pattern recognition contest, outperforming traditional methods by a factor of 3.[9] It then won more contests.[118][119] They also showed how max-pooling CNNs on GPUs improved performance significantly.[3]

In 2012, Andrew Ng and Jeff Dean created an FNN that learned to recognize higher-level concepts, such as cats, only from watching unlabeled images taken from YouTube videos.[120]

In October 2012, AlexNet by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton[4] won the large-scale ImageNet competition by a significant margin over shallow machine learning methods. Further incremental improvements included the VGG-16 network by Karen Simonyan and Andrew Zisserman[121] and Google's Inceptionv3.[122] The success in image classification was then extended to the more challenging task of generating descriptions (captions) for images, often as a combination of CNNs and LSTMs.[123][124][125]

In 2014, the state of the art was training "very deep neural networks" with 20 to 30 layers.[126] Stacking too many layers led to a steep reduction in training accuracy,[127] known as the "degradation" problem.[128] In 2015, two techniques were developed to train very deep networks: the Highway Network was published in May 2015, and the residual neural network (ResNet)[129] in December 2015. ResNet behaves like an open-gated Highway Net.

Around the same time, deep learning started impacting the field of art. Early examples included Google DeepDream (2015) and neural style transfer (2015),[130] both of which were based on pretrained image classification neural networks, such as VGG-19.

The generative adversarial network (GAN) (Ian Goodfellow et al., 2014)[131] (based on Jürgen Schmidhuber's principle of artificial curiosity[74][76]) became state of the art in generative modeling during the 2014–2018 period. Excellent image quality was achieved by Nvidia's StyleGAN (2018),[132] based on the Progressive GAN by Tero Karras et al.,[133] in which the GAN generator is grown from small to large scale in a pyramidal fashion. Image generation by GANs reached popular success and provoked discussions concerning deepfakes.[134] Diffusion models (2015)[135] have since eclipsed GANs in generative modeling, with systems such as DALL·E 2 (2022) and Stable Diffusion (2022).

In 2015, Google's speech recognition improved by 49% with an LSTM-based model, which they made available through Google Voice Search on smartphones.[136][137]

Deep learning is part of state-of-the-art systems in various disciplines, particularly computer vision and automatic speech recognition (ASR). Results on commonly used evaluation sets such as TIMIT (ASR) and MNIST (image classification), as well as on a range of large-vocabulary speech recognition tasks, have steadily improved.[104][138] Convolutional neural networks were superseded for ASR by LSTM,[137][139][140][141] but are more successful in computer vision.

Yoshua Bengio, Geoffrey Hinton and Yann LeCun were awarded the 2018 Turing Award for "conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing".[142]

Artificial neural networks (ANNs) or connectionist systems are computing systems inspired by the biological neural networks that constitute animal brains. Such systems learn (progressively improve their ability) to do tasks by considering examples, generally without task-specific programming.
For example, in image recognition, they might learn to identify images that contain cats by analyzing example images that have been manually labeled as "cat" or "no cat" and using the analytic results to identify cats in other images. They have found most use in applications difficult to express with a traditional computer algorithm using rule-based programming.

An ANN is based on a collection of connected units called artificial neurons (analogous to biological neurons in a biological brain). Each connection (synapse) between neurons can transmit a signal to another neuron. The receiving (postsynaptic) neuron can process the signal(s) and then signal downstream neurons connected to it. Neurons may have state, generally represented by real numbers, typically between 0 and 1. Neurons and synapses may also have a weight that varies as learning proceeds, which can increase or decrease the strength of the signal that it sends downstream.

Typically, neurons are organized in layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first (input) to the last (output) layer, possibly after traversing the layers multiple times.

The original goal of the neural network approach was to solve problems in the same way that a human brain would. Over time, attention focused on matching specific mental abilities, leading to deviations from biology such as backpropagation, or passing information in the reverse direction and adjusting the network to reflect that information.

Neural networks have been used on a variety of tasks, including computer vision, speech recognition, machine translation, social network filtering, playing board and video games, and medical diagnosis. As of 2017, neural networks typically have a few thousand to a few million units and millions of connections. Despite this number being several orders of magnitude less than the number of neurons in a human brain, these networks can perform many tasks at a level beyond that of humans (e.g., recognizing faces, or playing "Go"[144]).

A deep neural network (DNN) is an artificial neural network with multiple layers between the input and output layers.[7][9] There are different types of neural networks, but they always consist of the same components: neurons, synapses, weights, biases, and functions.[145] These components as a whole function in a way that mimics functions of the human brain, and can be trained like any other ML algorithm.[citation needed]

For example, a DNN that is trained to recognize dog breeds will go over the given image and calculate the probability that the dog in the image is a certain breed. The user can review the results and select which probabilities the network should display (above a certain threshold, etc.) and return the proposed label. Each mathematical manipulation as such is considered a layer,[146] and complex DNNs have many layers, hence the name "deep" networks.
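To make the notion of layered "mathematical manipulations" concrete, the following minimal sketch (random untrained weights and a made-up five-breed label set, so the numbers are meaningless) passes an input vector through three layers and reads class probabilities off a softmax output, filtered by a user-chosen display threshold:

```python
import numpy as np

rng = np.random.default_rng(0)

# Each layer is a weighted sum of its inputs followed by a nonlinearity;
# stacking several such manipulations is what makes the network "deep".
def layer(x, W, b):
    return np.maximum(x @ W + b, 0)          # ReLU activation

def softmax(z):
    e = np.exp(z - z.max())                  # numerically stable exponentiation
    return e / e.sum()

x = rng.normal(size=64)                      # e.g. flattened image features
W1, b1 = rng.normal(0, 0.1, (64, 32)), np.zeros(32)
W2, b2 = rng.normal(0, 0.1, (32, 16)), np.zeros(16)
W3, b3 = rng.normal(0, 0.1, (16, 5)), np.zeros(5)   # 5 hypothetical breeds

h = layer(layer(x, W1, b1), W2, b2)          # two hidden layers
probs = softmax(h @ W3 + b3)                 # one probability per breed

threshold = 0.2                              # user-chosen display cutoff
print([(i, round(p, 3)) for i, p in enumerate(probs) if p > threshold])
```

An untrained network like this produces near-uniform probabilities; training adjusts the weights so that the output distribution concentrates on the correct class.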
DNNs can model complex non-linear relationships. DNN architectures generate compositional models where the object is expressed as a layered composition of primitives.[147] The extra layers enable composition of features from lower layers, potentially modeling complex data with fewer units than a similarly performing shallow network.[7] For instance, it was proved that sparse multivariate polynomials are exponentially easier to approximate with DNNs than with shallow networks.[148]

Deep architectures include many variants of a few basic approaches. Each architecture has found success in specific domains. It is not always possible to compare the performance of multiple architectures, unless they have been evaluated on the same data sets.[146]

DNNs are typically feedforward networks in which data flows from the input layer to the output layer without looping back. At first, the DNN creates a map of virtual neurons and assigns random numerical values, or "weights", to connections between them. The weights and inputs are multiplied and return an output between 0 and 1. If the network did not accurately recognize a particular pattern, an algorithm would adjust the weights.[149] That way the algorithm can make certain parameters more influential, until it determines the correct mathematical manipulation to fully process the data.

Recurrent neural networks, in which data can flow in any direction, are used for applications such as language modeling.[150][151][152][153][154] Long short-term memory is particularly effective for this use.[155][156] Convolutional neural networks (CNNs) are used in computer vision.[157] CNNs have also been applied to acoustic modeling for automatic speech recognition (ASR).[158]

As with ANNs, many issues can arise with naively trained DNNs. Two common issues are overfitting and computation time. DNNs are prone to overfitting because of the added layers of abstraction, which allow them to model rare dependencies in the training data. Regularization methods such as Ivakhnenko's unit pruning[41] or weight decay ($\ell_2$ regularization) or sparsity ($\ell_1$ regularization) can be applied during training to combat overfitting.[159] Alternatively, dropout regularization randomly omits units from the hidden layers during training, which helps to exclude rare dependencies.[160] Another recent development is research into models of just enough complexity through an estimation of the intrinsic complexity of the task being modelled. This approach has been successfully applied to multivariate time series prediction tasks such as traffic prediction.[161] Finally, data can be augmented via methods such as cropping and rotating, so that smaller training sets can be increased in size to reduce the chances of overfitting.[162]

DNNs must consider many training parameters, such as the size (number of layers and number of units per layer), the learning rate, and initial weights. Sweeping through the parameter space for optimal parameters may not be feasible due to the cost in time and computational resources. Various tricks, such as batching (computing the gradient on several training examples at once rather than on individual examples),[163] speed up computation. The large processing capabilities of many-core architectures (such as GPUs or the Intel Xeon Phi) have produced significant speedups in training, because of the suitability of such processing architectures for matrix and vector computations.[164][165]
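The following sketch illustrates several of the ideas above in one place: minibatch gradient descent ("batching"), $\ell_2$ weight decay, and dropout, applied to a tiny two-layer network on synthetic data. All sizes and hyperparameters are arbitrary illustrative choices, not recommendations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary-classification data (purely illustrative).
X = rng.normal(size=(1000, 10))
y = (X[:, :2].sum(axis=1, keepdims=True) > 0).astype(float)

W1 = rng.normal(0, 0.3, (10, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.3, (32, 1));  b2 = np.zeros(1)

lr, decay, p_drop, batch = 0.1, 1e-4, 0.2, 64

for step in range(500):
    i = rng.integers(len(X), size=batch)      # batching: several examples at once
    xb, yb = X[i], y[i]

    h = np.maximum(xb @ W1 + b1, 0)           # hidden layer, ReLU
    mask = rng.random(h.shape) > p_drop       # dropout: randomly omit units
    h = h * mask / (1 - p_drop)
    out = 1 / (1 + np.exp(-(h @ W2 + b2)))    # sigmoid output in (0, 1)

    # Backpropagate the squared error through both layers.
    g_out = (out - yb) * out * (1 - out)
    g_W2 = h.T @ g_out / batch
    g_h = (g_out @ W2.T) * (h > 0) / (1 - p_drop)   # dropped units pass no gradient
    g_W1 = xb.T @ g_h / batch

    # Weight decay (L2 regularization) shrinks every weight each step.
    W2 -= lr * (g_W2 + decay * W2); b2 -= lr * g_out.mean(axis=0)
    W1 -= lr * (g_W1 + decay * W1); b1 -= lr * g_h.mean(axis=0)
```

Dropout and weight decay here play exactly the roles described above: the former randomly removes hidden units during each update, the latter adds a shrinkage term to every gradient step.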
Alternatively, engineers may look for other types of neural networks with more straightforward and convergent training algorithms. CMAC (cerebellar model articulation controller) is one such kind of neural network. It does not require learning rates or randomized initial weights. The training process can be guaranteed to converge in one step with a new batch of data, and the computational complexity of the training algorithm is linear with respect to the number of neurons involved.[166][167]

Since the 2010s, advances in both machine learning algorithms and computer hardware have led to more efficient methods for training deep neural networks that contain many layers of non-linear hidden units and a very large output layer.[168] By 2019, graphics processing units (GPUs), often with AI-specific enhancements, had displaced CPUs as the dominant method for training large-scale commercial cloud AI.[169] OpenAI estimated the hardware computation used in the largest deep learning projects from AlexNet (2012) to AlphaZero (2017) and found a 300,000-fold increase in the amount of computation required, with a doubling-time trendline of 3.4 months.[170][171]

Special electronic circuits called deep learning processors were designed to speed up deep learning algorithms. Deep learning processors include neural processing units (NPUs) in Huawei cellphones[172] and cloud computing servers such as tensor processing units (TPU) in the Google Cloud Platform.[173] Cerebras Systems has also built a dedicated system to handle large deep learning models, the CS-2, based on the largest processor in the industry, the second-generation Wafer Scale Engine (WSE-2).[174][175]

Atomically thin semiconductors are considered promising for energy-efficient deep learning hardware where the same basic device structure is used for both logic operations and data storage. In 2020, Marega et al. published experiments with a large-area active channel material for developing logic-in-memory devices and circuits based on floating-gate field-effect transistors (FGFETs).[176]

In 2021, J. Feldmann et al. proposed an integrated photonic hardware accelerator for parallel convolutional processing.[177] The authors identify two key advantages of integrated photonics over its electronic counterparts: (1) massively parallel data transfer through wavelength division multiplexing in conjunction with frequency combs, and (2) extremely high data modulation speeds.[177] Their system can execute trillions of multiply-accumulate operations per second, indicating the potential of integrated photonics in data-heavy AI applications.[177]

Large-scale automatic speech recognition is the first and most convincing successful case of deep learning. LSTM RNNs can learn "Very Deep Learning" tasks[9] that involve multi-second intervals containing speech events separated by thousands of discrete time steps, where one time step corresponds to about 10 ms. LSTM with forget gates[156] is competitive with traditional speech recognizers on certain tasks.[93]

The initial success in speech recognition was based on small-scale recognition tasks based on TIMIT. The data set contains 630 speakers from eight major dialects of American English, where each speaker reads 10 sentences.[178] Its small size lets many configurations be tried. More importantly, the TIMIT task concerns phone-sequence recognition, which, unlike word-sequence recognition, allows weak phone bigram language models. This lets the strength of the acoustic modeling aspects of speech recognition be more easily analyzed. Error rates, including these early results and measured as percent phone error rate (PER), have been summarized since 1991.
The debut of DNNs for speaker recognition in the late 1990s and speech recognition around 2009–2011, and of LSTM around 2003–2007, accelerated progress in eight major areas.[23][108][106] All major commercial speech recognition systems (e.g., Microsoft Cortana, Xbox, Skype Translator, Amazon Alexa, Google Now, Apple Siri, Baidu and iFlyTek voice search, and a range of Nuance speech products) are based on deep learning.[23][183][184]

A common evaluation set for image classification is the MNIST database. MNIST is composed of handwritten digits and includes 60,000 training examples and 10,000 test examples. As with TIMIT, its small size lets users test multiple configurations. A comprehensive list of results on this set is available.[185]

Deep learning-based image recognition has become "superhuman", producing more accurate results than human contestants. This first occurred in 2011 in recognition of traffic signs, and in 2014 with recognition of human faces.[186][187] Deep learning-trained vehicles now interpret 360° camera views.[188] Another example is Facial Dysmorphology Novel Analysis (FDNA), used to analyze cases of human malformation connected to a large database of genetic syndromes.

Closely related to the progress that has been made in image recognition is the increasing application of deep learning techniques to various visual art tasks, of which DNNs have proven themselves capable.

Neural networks have been used for implementing language models since the early 2000s.[150] LSTM helped to improve machine translation and language modeling.[151][152][153] Other key techniques in this field are negative sampling[191] and word embedding. Word embedding, such as word2vec, can be thought of as a representational layer in a deep learning architecture that transforms an atomic word into a positional representation of the word relative to other words in the dataset; the position is represented as a point in a vector space. Using word embedding as an RNN input layer allows the network to parse sentences and phrases using an effective compositional vector grammar. A compositional vector grammar can be thought of as a probabilistic context-free grammar (PCFG) implemented by an RNN.[192] Recursive auto-encoders built atop word embeddings can assess sentence similarity and detect paraphrasing.[192] Deep neural architectures provide the best results for constituency parsing,[193] sentiment analysis,[194] information retrieval,[195][196] spoken language understanding,[197] machine translation,[151][198] contextual entity linking,[198] writing style recognition,[199] named-entity recognition (token classification),[200] text classification, and others.[201] Recent developments generalize word embedding to sentence embedding.

Google Translate (GT) uses a large end-to-end long short-term memory (LSTM) network.[202][203][204][205] Google Neural Machine Translation (GNMT) uses an example-based machine translation method in which the system "learns from millions of examples".[203] It translates "whole sentences at a time, rather than pieces". Google Translate supports over one hundred languages.[203] The network encodes the "semantics of the sentence rather than simply memorizing phrase-to-phrase translations".[203][206] GT uses English as an intermediate between most language pairs.[206]
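The word-embedding ideas above can be made concrete with a toy skip-gram model trained with negative sampling. This is a bare-bones sketch in the spirit of word2vec (tiny corpus, illustrative hyperparameters), not a faithful reimplementation of any cited system:

```python
import numpy as np

rng = np.random.default_rng(0)

corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V, D = len(vocab), 8                       # vocabulary size, embedding dimension

W_in = rng.normal(0, 0.1, (V, D))          # word embeddings
W_out = rng.normal(0, 0.1, (V, D))         # context embeddings

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr, window, k = 0.05, 2, 3                 # learning rate, context window, negatives

for epoch in range(300):
    for pos, word in enumerate(corpus):
        w = idx[word]
        for off in range(-window, window + 1):
            if off == 0 or not 0 <= pos + off < len(corpus):
                continue
            c = idx[corpus[pos + off]]
            # One observed (positive) pair plus k random negative samples.
            for t, label in [(c, 1.0)] + [(rng.integers(V), 0.0) for _ in range(k)]:
                g = sigmoid(W_in[w] @ W_out[t]) - label   # gradient of the log-loss
                g_in = g * W_out[t]
                W_out[t] -= lr * g * W_in[w]
                W_in[w] -= lr * g_in

# Each word is now a point in a vector space; nearby points are "similar".
def most_similar(word, n=3):
    v = W_in[idx[word]]
    sims = W_in @ v / (np.linalg.norm(W_in, axis=1) * np.linalg.norm(v))
    return [vocab[i] for i in np.argsort(-sims)[1:n + 1]]

print(most_similar("cat"))
```

On such a small corpus the neighbours are noisy; the point is only that each word ends up as a position in a vector space, learned from co-occurrence.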
A large percentage of candidate drugs fail to win regulatory approval. These failures are caused by insufficient efficacy (on-target effect), undesired interactions (off-target effects), or unanticipated toxic effects.[207][208] Research has explored use of deep learning to predict the biomolecular targets,[209][210] off-targets, and toxic effects of environmental chemicals in nutrients, household products and drugs.[211][212][213]

AtomNet is a deep learning system for structure-based rational drug design.[214] AtomNet was used to predict novel candidate biomolecules for disease targets such as the Ebola virus[215] and multiple sclerosis.[216][215] In 2017, graph neural networks were used for the first time to predict various properties of molecules in a large toxicology data set.[217] In 2019, generative neural networks were used to produce molecules that were validated experimentally all the way into mice.[218][219]

Deep reinforcement learning has been used to approximate the value of possible direct marketing actions, defined in terms of RFM variables. The estimated value function was shown to have a natural interpretation as customer lifetime value.[220]

Recommendation systems have used deep learning to extract meaningful features for a latent factor model for content-based music and journal recommendations.[221][222] Multi-view deep learning has been applied for learning user preferences from multiple domains.[223] The model uses a hybrid collaborative and content-based approach and enhances recommendations in multiple tasks.

An autoencoder ANN was used in bioinformatics to predict gene ontology annotations and gene-function relationships.[224] In medical informatics, deep learning was used to predict sleep quality based on data from wearables[225] and to predict health complications from electronic health record data.[226]

Deep neural networks have shown unparalleled performance in predicting protein structure from the sequence of the amino acids that make it up. In 2020, AlphaFold, a deep-learning based system, achieved a level of accuracy significantly higher than all previous computational methods.[227][228]

Deep neural networks can be used to estimate the entropy of a stochastic process, in a method called the Neural Joint Entropy Estimator (NJEE).[229] Such an estimation provides insights on the effects of input random variables on an independent random variable. Practically, the DNN is trained as a classifier that maps an input vector or matrix X to an output probability distribution over the possible classes of random variable Y, given input X. For example, in image classification tasks, the NJEE maps a vector of pixels' color values to probabilities over possible image classes. In practice, the probability distribution of Y is obtained by a softmax layer with a number of nodes equal to the alphabet size of Y. NJEE uses continuously differentiable activation functions, such that the conditions for the universal approximation theorem hold. This method has been shown to provide a strongly consistent estimator, and to outperform other methods in the case of large alphabet sizes.[229]
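A minimal sketch of the general idea behind such classifier-based entropy estimation follows. It is not the exact NJEE construction; it only shows that the average log-loss of a trained softmax classifier estimates (and, for an imperfect model, upper-bounds) the conditional entropy $H(Y \mid X)$ in nats. The data, model, and training schedule are toy assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy joint distribution: Y is a class label, X is a noisy signal about Y.
n, classes, dim = 5000, 4, 8
Y = rng.integers(classes, size=n)
X = rng.normal(0, 1, (n, dim))
X[np.arange(n), Y] += 2.0                    # first 4 features carry information

def softmax(logits):
    logits = logits - logits.max(axis=1, keepdims=True)
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

# Multinomial logistic regression ("softmax layer") fit by gradient descent.
W = np.zeros((dim, classes))
onehot = np.eye(classes)[Y]
for _ in range(300):
    p = softmax(X @ W)
    W -= 0.1 * X.T @ (p - onehot) / n

# Average negative log-probability of the true labels: an estimate of H(Y|X).
p = softmax(X @ W)
H_cond = -np.mean(np.log(p[np.arange(n), Y]))
print("estimated H(Y|X):", H_cond, "nats")
```

Because the cross-entropy equals the true conditional entropy plus a Kullback–Leibler term, better classifiers push the estimate down toward $H(Y \mid X)$.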
Deep learning has been shown to produce competitive results in medical applications such as cancer cell classification, lesion detection, organ segmentation and image enhancement.[230][231] Modern deep learning tools demonstrate high accuracy in detecting various diseases, and their use by specialists can improve diagnosis efficiency.[232][233]

Finding the appropriate mobile audience for mobile advertising is always challenging, since many data points must be considered and analyzed before a target segment can be created and used in ad serving by any ad server.[234] Deep learning has been used to interpret large, many-dimensioned advertising datasets. Many data points are collected during the request/serve/click internet advertising cycle. This information can form the basis of machine learning to improve ad selection.

Deep learning has been successfully applied to inverse problems such as denoising, super-resolution, inpainting, and film colorization.[235] These applications include learning methods such as "Shrinkage Fields for Effective Image Restoration",[236] which trains on an image dataset, and Deep Image Prior, which trains on the image that needs restoration.

Deep learning is being successfully applied to financial fraud detection, tax evasion detection,[237] and anti-money laundering.[238]

In November 2023, researchers at Google DeepMind and Lawrence Berkeley National Laboratory announced that they had developed an AI system known as GNoME. This system has contributed to materials science by discovering over 2 million new materials within a relatively short timeframe. GNoME employs deep learning techniques to efficiently explore potential material structures, achieving a significant increase in the identification of stable inorganic crystal structures. The system's predictions were validated through autonomous robotic experiments, demonstrating a noteworthy success rate of 71%. The data on newly discovered materials is publicly available through the Materials Project database, offering researchers the opportunity to identify materials with desired properties for various applications. This development has implications for the future of scientific discovery and the integration of AI in materials science research, potentially expediting material innovation and reducing costs in product development. The use of AI and deep learning suggests the possibility of minimizing or eliminating manual lab experiments and allowing scientists to focus more on the design and analysis of unique compounds.[239][240][241]

The United States Department of Defense applied deep learning to train robots in new tasks through observation.[242]

Physics-informed neural networks have been used to solve partial differential equations in both forward and inverse problems in a data-driven manner.[243] One example is reconstructing fluid flow governed by the Navier-Stokes equations. Using physics-informed neural networks does not require the often expensive mesh generation that conventional CFD methods rely on.[244][245]
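As a concrete toy version of the physics-informed idea, the sketch below fits a small network u(t) to the ODE u' = -u, u(0) = 1 (exact solution $e^{-t}$) by penalizing the equation residual itself; no solution data and no mesh are used. The architecture, the finite-difference gradients, and all constants are illustrative simplifications (a practical PINN would use automatic differentiation):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 50)              # collocation points, no mesh needed

def net(params, t):
    # One hidden tanh layer: 8 weights + 8 biases + 8 weights + 1 bias = 25.
    w1, b1, w2, b2 = np.split(params, [8, 16, 24])
    return np.tanh(np.outer(t, w1) + b1) @ w2 + b2[0]

def dnet_dt(params, t, eps=1e-4):
    return (net(params, t + eps) - net(params, t - eps)) / (2 * eps)

def loss(params):
    residual = dnet_dt(params, t) + net(params, t)   # enforces u' = -u
    initial = net(params, np.array([0.0]))[0] - 1.0  # enforces u(0) = 1
    return np.mean(residual**2) + initial**2

params = rng.normal(0, 0.5, 25)
for _ in range(2000):
    # Crude finite-difference gradient descent on the physics loss.
    g = np.zeros_like(params)
    for i in range(len(params)):
        e = np.zeros_like(params); e[i] = 1e-5
        g[i] = (loss(params + e) - loss(params - e)) / 2e-5
    params -= 0.05 * g

print(np.abs(net(params, t) - np.exp(-t)).max())     # error should be small
```

The same recipe extends to PDEs: sample collocation points in the domain, differentiate the network with respect to its inputs, and minimize the squared residual of the governing equations plus boundary terms.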
The deep backward stochastic differential equation method is a numerical method that combines deep learning with backward stochastic differential equations (BSDE). This method is particularly useful for solving high-dimensional problems in financial mathematics. By leveraging the powerful function approximation capabilities of deep neural networks, deep BSDE addresses the computational challenges faced by traditional numerical methods in high-dimensional settings. Specifically, traditional methods like finite difference methods or Monte Carlo simulations often struggle with the curse of dimensionality, where computational cost increases exponentially with the number of dimensions. Deep BSDE methods, however, employ deep neural networks to approximate solutions of high-dimensional partial differential equations (PDEs), effectively reducing the computational burden.[246] In addition, the integration of physics-informed neural networks (PINNs) into the deep BSDE framework enhances its capability by embedding the underlying physical laws directly into the neural network architecture, ensuring that the solutions not only fit the data but also adhere to the governing stochastic differential equations. PINNs leverage the power of deep learning while respecting the constraints imposed by the physical models, resulting in more accurate and reliable solutions for financial mathematics problems.

Image reconstruction is the reconstruction of the underlying images from image-related measurements. Several works have shown the superior performance of deep learning methods compared to analytical methods for various applications, e.g., spectral imaging[247] and ultrasound imaging.[248]

Traditional weather prediction systems solve a very complex system of partial differential equations. GraphCast is a deep learning-based model, trained on a long history of weather data, that predicts how weather patterns change over time. It is able to predict weather conditions for up to 10 days globally, at a very detailed level, and in under a minute, with precision similar to state-of-the-art systems.[249][250]

An epigenetic clock is a biochemical test that can be used to measure age. Galkin et al. used deep neural networks to train an epigenetic aging clock of unprecedented accuracy using more than 6,000 blood samples.[251] The clock uses information from 1,000 CpG sites and predicts people with certain conditions to be older than healthy controls: IBD, frontotemporal dementia, ovarian cancer, obesity. The aging clock was planned to be released for public use in 2021 by Deep Longevity, an Insilico Medicine spinoff company.

Deep learning is closely related to a class of theories of brain development (specifically, neocortical development) proposed by cognitive neuroscientists in the early 1990s.[252][253][254][255] These developmental theories were instantiated in computational models, making them predecessors of deep learning systems. These developmental models share the property that various proposed learning dynamics in the brain (e.g., a wave of nerve growth factor) support self-organization somewhat analogous to the neural networks utilized in deep learning models. Like the neocortex, neural networks employ a hierarchy of layered filters in which each layer considers information from a prior layer (or the operating environment), and then passes its output (and possibly the original input) to other layers. This process yields a self-organizing stack of transducers, well-tuned to their operating environment. A 1995 description stated, "...the infant's brain seems to organize itself under the influence of waves of so-called trophic-factors ...
different regions of the brain become connected sequentially, with one layer of tissue maturing before another and so on until the whole brain is mature".[256]

A variety of approaches have been used to investigate the plausibility of deep learning models from a neurobiological perspective. On the one hand, several variants of the backpropagation algorithm have been proposed in order to increase its processing realism.[257][258] Other researchers have argued that unsupervised forms of deep learning, such as those based on hierarchical generative models and deep belief networks, may be closer to biological reality.[259][260] In this respect, generative neural network models have been related to neurobiological evidence about sampling-based processing in the cerebral cortex.[261]

Although a systematic comparison between the human brain organization and the neuronal encoding in deep networks has not yet been established, several analogies have been reported. For example, the computations performed by deep learning units could be similar to those of actual neurons[262] and neural populations.[263] Similarly, the representations developed by deep learning models are similar to those measured in the primate visual system[264] both at the single-unit[265] and at the population[266] levels.

Facebook's AI lab performs tasks such as automatically tagging uploaded pictures with the names of the people in them.[267]

Google's DeepMind Technologies developed a system capable of learning how to play Atari video games using only pixels as data input. In 2015 they demonstrated their AlphaGo system, which learned the game of Go well enough to beat a professional Go player.[268][269][270] Google Translate uses a neural network to translate between more than 100 languages.

In 2017, Covariant.ai was launched, which focuses on integrating deep learning into factories.[271]

As of 2008,[272] researchers at The University of Texas at Austin (UT) developed a machine learning framework called Training an Agent Manually via Evaluative Reinforcement, or TAMER, which proposed new methods for robots or computer programs to learn how to perform tasks by interacting with a human instructor.[242] First developed as TAMER, a new algorithm called Deep TAMER was later introduced in 2018 during a collaboration between the U.S. Army Research Laboratory (ARL) and UT researchers. Deep TAMER used deep learning to provide a robot with the ability to learn new tasks through observation.[242] Using Deep TAMER, a robot learned a task with a human trainer, watching video streams or observing a human perform a task in person. The robot later practiced the task with the help of some coaching from the trainer, who provided feedback such as "good job" and "bad job".[273]

Deep learning has attracted both criticism and comment, in some cases from outside the field of computer science. A main criticism concerns the lack of theory surrounding some methods.[274] Learning in the most common deep architectures is implemented using well-understood gradient descent. However, the theory surrounding other algorithms, such as contrastive divergence, is less clear.[citation needed] (e.g., Does it converge? If so, how fast? What is it approximating?)
Deep learning methods are often looked at as a black box, with most confirmations done empirically, rather than theoretically.[275]

In further reference to the idea that artistic sensitivity might be inherent in relatively low levels of the cognitive hierarchy, a published series of graphic representations of the internal states of deep (20–30 layer) neural networks attempting to discern, within essentially random data, the images on which they were trained[276] demonstrates a visual appeal: the original research notice received well over 1,000 comments, and was the subject of what was for a time the most frequently accessed article on The Guardian's website.[277]

Some deep learning architectures display problematic behaviors,[278] such as confidently classifying unrecognizable images as belonging to a familiar category of ordinary images (2014)[279] and misclassifying minuscule perturbations of correctly classified images (2013).[280] Goertzel hypothesized that these behaviors are due to limitations in their internal representations and that these limitations would inhibit integration into heterogeneous multi-component artificial general intelligence (AGI) architectures.[278] These issues may possibly be addressed by deep learning architectures that internally form states homologous to image-grammar[281] decompositions of observed entities and events.[278] Learning a grammar (visual or linguistic) from training data would be equivalent to restricting the system to commonsense reasoning that operates on concepts in terms of grammatical production rules, and is a basic goal of both human language acquisition[282] and artificial intelligence (AI).[283]

As deep learning moves from the lab into the world, research and experience show that artificial neural networks are vulnerable to hacks and deception.[284] By identifying patterns that these systems use to function, attackers can modify inputs to ANNs in such a way that the ANN finds a match that human observers would not recognize. For example, an attacker can make subtle changes to an image such that the ANN finds a match even though the image looks to a human nothing like the search target. Such manipulation is termed an "adversarial attack".[285]

In 2016, researchers used one ANN to doctor images in trial-and-error fashion, identify another's focal points, and thereby generate images that deceived it. The modified images looked no different to human eyes. Another group showed that printouts of doctored images, then photographed, successfully tricked an image classification system.[286] One defense is reverse image search, in which a possible fake image is submitted to a site such as TinEye that can then find other instances of it. A refinement is to search using only parts of the image, to identify images from which that piece may have been taken.[287]

Another group showed that certain psychedelic spectacles could fool a facial recognition system into thinking ordinary people were celebrities, potentially allowing one person to impersonate another. In 2017, researchers added stickers to stop signs and caused an ANN to misclassify them.[286]

ANNs can however be further trained to detect attempts at deception, potentially leading attackers and defenders into an arms race similar to the kind that already defines the malware defense industry.
ANNs have been trained to defeat ANN-based anti-malware software by repeatedly attacking a defense with malware that was continually altered by a genetic algorithm until it tricked the anti-malware while retaining its ability to damage the target.[286]

In 2016, another group demonstrated that certain sounds could make the Google Now voice command system open a particular web address, and hypothesized that this could "serve as a stepping stone for further attacks (e.g., opening a web page hosting drive-by malware)".[286]

In "data poisoning", false data is continually smuggled into a machine learning system's training set to prevent it from achieving mastery.[286]

Deep learning systems that are trained using supervised learning often rely on data that is created or annotated by humans, or both.[288] It has been argued that not only low-paid clickwork (such as on Amazon Mechanical Turk) is regularly deployed for this purpose, but also implicit forms of human microwork that are often not recognized as such.[289] The philosopher Rainer Mühlhoff distinguishes five types of "machinic capture" of human microwork to generate training data: (1) gamification (the embedding of annotation or computation tasks in the flow of a game), (2) "trapping and tracking" (e.g. CAPTCHAs for image recognition or click-tracking on Google search results pages), (3) exploitation of social motivations (e.g. tagging faces on Facebook to obtain labeled facial images), (4) information mining (e.g. by leveraging quantified-self devices such as activity trackers), and (5) clickwork.[289]
https://en.wikipedia.org/wiki/Applications_of_deep_learning
TWINKLE (The Weizmann Institute Key Locating Engine) is a hypothetical integer factorization device described in 1999 by Adi Shamir[1] and purported to be capable of factoring 512-bit integers.[2] The name is also a pun on the twinkling LEDs used in the device. Shamir estimated that the cost of TWINKLE could be as low as $5000 per unit with bulk production. TWINKLE has a more efficient successor named TWIRL.[3]

The goal of TWINKLE is to implement the sieving step of the Number Field Sieve (NFS) algorithm, which is the fastest known algorithm for factoring large integers. The sieving step, at least for 512-bit and larger integers, is the most time-consuming step of NFS. It involves testing a large set of numbers for B-'smoothness', i.e., absence of a prime factor greater than a specified bound B.

What is remarkable about TWINKLE is that it is not a purely digital device. It gets its efficiency by eschewing binary arithmetic for an "optical" adder which can add hundreds of thousands of quantities in a single clock cycle.

The key idea used is "time-space inversion". Conventional NFS sieving is carried out one prime at a time. For each prime, all the numbers to be tested for smoothness in the range under consideration which are divisible by that prime have their counter incremented by the logarithm of the prime (similar to the sieve of Eratosthenes). TWINKLE, on the other hand, works one candidate smooth number (call it X) at a time. There is one LED corresponding to each prime smaller than B. At the time instant corresponding to X, the set of LEDs glowing corresponds to the set of primes that divide X. This can be accomplished by having the LED associated with the prime p glow once every p time instants. Further, the intensity of each LED is proportional to the logarithm of the corresponding prime. Thus, the total intensity equals the sum of the logarithms of all the prime factors of X smaller than B. This intensity is equal to the logarithm of X if and only if X is B-smooth.

Even in PC-based implementations, it is a common optimization to speed up sieving by adding approximate logarithms of small primes together. Similarly, TWINKLE has much room for error in its light measurements; as long as the intensity is at about the right level, the number is very likely to be smooth enough for the purposes of known factoring algorithms. The existence of even one large factor would imply that the logarithm of a large number is missing, resulting in a very low intensity; because most numbers have this property, the device's output would tend to consist of stretches of low-intensity output with brief bursts of high-intensity output.

In the above it is assumed that X is square-free, i.e., not divisible by the square of any prime. This is acceptable since the factoring algorithms only require "sufficiently many" smooth numbers, and the "yield" decreases only by a small constant factor due to the square-freeness assumption. There is also the problem of false positives due to the inaccuracy of the optoelectronic hardware, but this is easily solved by adding a PC-based post-processing step to verify the smoothness of the numbers identified by TWINKLE.
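The device's time-space inversion can be imitated in ordinary software. The toy sketch below accumulates, for each candidate X in a range, the "intensity" contributed by the primes (and, unlike the analog device, prime powers) below B that divide X, and reports candidates whose total is close to log X. The bound, range, and tolerance are arbitrary illustrative choices, orders of magnitude below real NFS parameters:

```python
import math

B = 50                                     # toy smoothness bound
primes = [p for p in range(2, B)
          if all(p % q for q in range(2, int(p**0.5) + 1))]

start, length = 10**6, 1000                # candidates start .. start+length-1
intensity = [0.0] * length                 # one "time instant" per candidate

for p in primes:
    # The LED for prime p glows once every p time instants, with brightness
    # log p. (Here prime powers are also credited, which the analog device
    # skips; that is why TWINKLE assumes square-free candidates.)
    pk = p
    while pk < start + length:
        for t in range((-start) % pk, length, pk):
            intensity[t] += math.log(p)
        pk *= p

for t, total in enumerate(intensity):
    x = start + t
    # The tolerance mirrors the device's room for error in light measurement.
    if total >= 0.9 * math.log(x):
        print(f"{x} is probably {B}-smooth")
```

As in the hardware, a shortfall in total intensity betrays a prime factor above B, and any survivors would be verified exactly in a post-processing step.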
https://en.wikipedia.org/wiki/TWINKLE
A business plan is a formal written document containing the goals of a business, the methods for attaining those goals, and the time-frame for the achievement of the goals. It also describes the nature of the business, background information on the organization, the organization's financial projections, and the strategies it intends to implement to achieve the stated targets. In its entirety, this document serves as a road-map (a plan) that provides direction to the business.[1][2]

Written business plans are often required to obtain a bank loan or other kind of financing. Templates[3] and guides, such as the ones offered in the United States by the Small Business Administration,[4] can be used to facilitate producing a business plan.

Business plans may be internally or externally focused. Externally-focused plans draft goals that are important to outside stakeholders, particularly financial stakeholders. These plans typically have detailed information about the organization or the team making the effort to reach its goals. For for-profit entities, external stakeholders include investors and customers;[5] for non-profits, external stakeholders refer to donors and clients;[6] for government agencies, external stakeholders are the tax-payers, higher-level government agencies, and international lending bodies such as the International Monetary Fund, the World Bank, various economic agencies of the United Nations, and development banks.

Internally-focused business plans target intermediate goals required to reach the external goals. They may cover the development of a new product, a new service, a new IT system, a restructuring of finance, the refurbishing of a factory or the restructuring of an organization. An internally-focused business plan is often developed in conjunction with a balanced scorecard or OGSM or a list of critical success factors. This allows the success of the plan to be measured using non-financial measures. Business plans that identify and target internal goals, but provide only general guidance on how they will be met, are called strategic plans.[7]

Operational plans describe the goals of an internal organization, working group or department.[8] Project plans, sometimes known as project frameworks, describe the goals of a particular project. They may also address the project's place within the organization's larger strategic goals.[9]

Business plans are essential decision-making tools. The content and format of a business plan depend on its goals and target audience. For example, a business plan for a non-profit organization might emphasize how it aligns with the organization's mission. Banks are particularly concerned about defaults, so a business plan created for a bank loan should convincingly demonstrate the organization's ability to repay the loan. On the other hand, venture capitalists focus on initial investments, feasibility, and exit valuation. A business plan for a project that requires equity financing needs to explain why current resources, upcoming growth opportunities, and sustainable competitive advantages will contribute to a high exit valuation.

Creating a business plan requires knowledge from various business disciplines, including finance, human resource management, intellectual property management, supply chain management, operations management, and marketing, among others. It can be helpful to view the business plan as a compilation of sub-plans, each addressing a key business discipline.
A well-crafted business plan can help establish credibility, clarity, and appeal for individuals unfamiliar with the business. While writing a good business plan cannot guarantee success, it can significantly reduce the likelihood of failure.

The process of creating a business plan involves five distinct steps. The first step is to clearly outline the main business concept. The second step is to gather data regarding the feasibility of the idea, including specific details about the business. The third step involves organizing the information collected and refining the plan. The fourth step is to draft an outline of the business plan, detailing the specifics of the business idea. The fifth and final step is to compile this information into a compelling format that will encourage potential investors to support the business.[10]

The format of a business plan depends on its presentation context. It is common for businesses, especially start-ups, to have three or four formats for the same business plan.

An "elevator pitch" is a short summary of the plan's executive summary. This is often used as a teaser to awaken the interest of potential investors, customers, or strategic partners. It is called an elevator pitch because it is supposed to be content that can be explained to someone else quickly, as if in an elevator. The elevator pitch should be between 30 and 60 seconds.[11]

A pitch deck is a slide show and oral presentation that is meant to trigger discussion and interest potential investors in reading the written presentation. The content of the presentation is usually limited to the executive summary and a few key graphs showing financial trends and key decision-making benchmarks. If a new product is being proposed and time permits, a demonstration of the product may be included.[12]

A written presentation for external stakeholders is a detailed, well-written, and pleasingly formatted plan targeted at external stakeholders. An internal operational plan is a detailed plan describing planning details that are needed by management but may not be of interest to external stakeholders. Such plans have a somewhat higher degree of candor and informality than the version targeted at external stakeholders. Published guides describe the typical structure of,[13] and the typical questions addressed by,[14] a business plan for a start-up venture.

Cost and revenue estimates are central to any business plan for deciding the viability of the planned venture. But costs are often underestimated and revenues overestimated, resulting in later cost overruns, revenue shortfalls, and possibly non-viability. During the dot-com bubble of 1997–2001 this was a problem for many technology start-ups. Reference class forecasting has been developed to reduce the risks of cost overruns and revenue shortfalls and thus generate more accurate business plans.

Fundraising is the primary purpose of many business plans, since they relate to the inherent risk of the company's probable success or failure.

Business goals can be defined for both non-profit and for-profit organizations. For-profit business plans typically emphasize financial objectives, such as generating profit and creating wealth. In contrast, non-profit organizations and government agencies often center their plans around their "organizational mission", which underpins their tax-exempt status or governmental role. However, non-profits may also seek to optimize their revenue.
The primary distinction between for-profit and non-profit organizations lies in their fundamental objectives. For-profit organizations aim to maximize wealth, while non-profit organizations focus on serving the greater good of society. In non-profits, a creative tension often emerges as they try to balance their mission-driven goals with the need to generate revenue and maintain financial sustainability.

A notable real-life example of a business plan is the one created by Tesla Motors during its early days.[15] Tesla's plan focused on revolutionizing the automotive industry by introducing electric vehicles (EVs) that were not only environmentally friendly but also high-performance and stylish. The key elements of the plan supported this vision, attracted significant investor interest, and laid the foundation for Tesla's success as a leader in the EV market.
https://en.wikipedia.org/wiki/Business_Plan
A continuous-time Markov chain (CTMC) is a continuous stochastic process in which, for each state, the process will change state according to an exponential random variable and then move to a different state as specified by the probabilities of a stochastic matrix. An equivalent formulation describes the process as changing state according to the least value of a set of exponential random variables, one for each possible state it can move to, with the parameters determined by the current state.

An example of a CTMC with three states $\{0,1,2\}$ is as follows: the process makes a transition after the amount of time specified by the holding time, an exponential random variable $E_i$, where $i$ is its current state. Each random variable is independent and such that $E_0 \sim \text{Exp}(6)$, $E_1 \sim \text{Exp}(12)$ and $E_2 \sim \text{Exp}(18)$. When a transition is to be made, the process moves according to the jump chain, a discrete-time Markov chain with a stochastic matrix. Equivalently, by the property of competing exponentials, this CTMC changes state from state $i$ according to the minimum of two random variables, which are independent and such that $E_{i,j} \sim \text{Exp}(q_{i,j})$ for $i \neq j$, where the parameters are given by the Q-matrix $Q = (q_{i,j})$.

Each non-diagonal entry $q_{i,j}$ can be computed as the probability that the jump chain moves from state $i$ to state $j$, divided by the expected holding time of state $i$. The diagonal entries are chosen so that each row sums to 0.

A CTMC satisfies the Markov property, that its behavior depends only on its current state and not on its past behavior, due to the memorylessness of the exponential distribution and of discrete-time Markov chains.

Let $(\Omega, \mathcal{A}, \Pr)$ be a probability space, let $S$ be a countable nonempty set, and let $T = \mathbb{R}_{\geq 0}$ ($T$ for "time"). Equip $S$ with the discrete metric, so that we can make sense of right continuity of functions $\mathbb{R}_{\geq 0} \to S$. A continuous-time Markov chain is defined by an initial probability distribution $\lambda$ on $S$ and a transition-rate matrix $Q = (q_{i,j})_{i,j \in S}$.[1] Note that the row sums of $Q$ are 0:

$$\forall i \in S, \quad \sum_{j \in S} q_{i,j} = 0,$$

or more succinctly, $Q \cdot 1 = 0$. This situation contrasts with the situation for discrete-time Markov chains, where all row sums of the transition matrix equal unity.

Now, let $X : T \to S^{\Omega}$ be such that for all $t \in T$, $X(t)$ is $(\mathcal{A}, \mathcal{P}(S))$-measurable. There are three equivalent ways to define $X$ being Markov with initial distribution $\lambda$ and rate matrix $Q$: via transition probabilities, via the jump chain and holding times, or via infinitesimal transition rates.[5]

As a prelude to a transition-probability definition, we first motivate the definition of a regular rate matrix. We will use the transition-rate matrix $Q$ to specify the dynamics of the Markov chain by means of generating a collection of transition matrices $P(t)$ on $S$ ($t \in \mathbb{R}_{\geq 0}$), via the following theorem.
Existence of solution to Kolmogorov backward equations ([6]): There exists $P \in ([0,1]^{S \times S})^T$ such that for all $i,j \in S$ the entry $(P(t)_{i,j})_{t \in T}$ is differentiable and $P$ satisfies the Kolmogorov backward equations:

$$P'(t) = Q P(t), \qquad P(0) = I. \qquad (0)$$

We say $Q$ is regular to mean that we do have uniqueness for the above system, i.e., that there exists exactly one solution.[7][8] We say $Q$ is irregular to mean $Q$ is not regular. If $S$ is finite, then there is exactly one solution, namely $P = (e^{tQ})_{t \in T}$, and hence $Q$ is regular. Otherwise, $S$ is infinite, and there exist irregular transition-rate matrices on $S$.[a] If $Q$ is regular, then for the unique solution $P$, for each $t \in T$, $P(t)$ will be a stochastic matrix.[6] We will assume $Q$ is regular from the beginning of the following subsection up through the end of this section, even though it is conventional[10][11][12] not to include this assumption. (Note for the expert: thus we are not defining continuous-time Markov chains in general, but only non-explosive continuous-time Markov chains.)

Let $P$ be the (unique) solution of the system (0). (Uniqueness is guaranteed by our assumption that $Q$ is regular.) We say $X$ is Markov with initial distribution $\lambda$ and rate matrix $Q$ to mean: for any nonnegative integer $n \geq 0$, for all $t_0, \dots, t_{n+1} \in T$ such that $t_0 < \dots < t_{n+1}$, and for all $i_0, \dots, i_{n+1} \in S$,

$$\Pr(X_{t_0} = i_0, \dots, X_{t_{n+1}} = i_{n+1}) = \lambda_{i_0} P(t_1 - t_0)_{i_0, i_1} \cdots P(t_{n+1} - t_n)_{i_n, i_{n+1}}. \qquad (1)$$

Using induction and the fact that $\forall A, B \in \mathcal{A}$, $\Pr(B) \neq 0 \rightarrow \Pr(A \cap B) = \Pr(A \mid B)\Pr(B)$, we can show the equivalence of the above statement containing (1) and the following statement: for all $i \in S$, $\Pr(X_0 = i) = \lambda_i$, and for any nonnegative integer $n \geq 0$, for all $t_0, \dots, t_{n+1} \in T$ such that $t_0 < \dots < t_{n+1}$, and for all $i_0, \dots, i_{n+1} \in S$ such that $0 < \Pr(X_{t_0} = i_0, \dots, X_{t_n} = i_n)$ (it follows that $0 < \Pr(X_{t_n} = i_n)$),

$$\Pr(X_{t_{n+1}} = i_{n+1} \mid X_{t_0} = i_0, \dots, X_{t_n} = i_n) = P(t_{n+1} - t_n)_{i_n, i_{n+1}}. \qquad (2)$$

It follows from continuity of the functions $(P(t)_{i,j})_{t \in T}$ ($i,j \in S$) that the trajectory $(X_t(\omega))_{t \in T}$ is almost surely right continuous (with respect to the discrete metric on $S$): there exists a $\Pr$-null set $N$ such that $\{\omega \in \Omega : (X_t(\omega))_{t \in T} \text{ is not right continuous}\} \subseteq N$.[13]

Let $f : T \to S$ be right continuous (when we equip $S$ with the discrete metric). Define the holding-time sequence $H(f) = (H_n(f))_{n \geq 0}$ associated to $f$, choose $s \in S$, and let $y(f) = (y_n(f))_{n \geq 0}$ be the state sequence associated to $f$.
The jump matrix $\Pi$, alternatively written $\Pi(Q)$ if we wish to emphasize the dependence on $Q$, is the matrix

$$\Pi = ([i=j])_{i \in Z, j \in S} \cup \bigcup_{i \in S \setminus Z} \left( \{((i,j), (-Q_{i,i})^{-1} Q_{i,j}) : j \in S \setminus \{i\}\} \cup \{((i,i), 0)\} \right),$$

where $Z = Z(Q) = \{k \in S : q_{k,k} = 0\}$ is the zero set of the function $(q_{k,k})_{k \in S}$.[14]

We say $X$ is Markov with initial distribution $\lambda$ and rate matrix $Q$ to mean: the trajectories of $X$ are almost surely right continuous; letting $f$ be a modification of $X$ with (everywhere) right-continuous trajectories, $\sum_{n \in \mathbb{Z}_{\geq 0}} H(f(\omega))_n = +\infty$ almost surely (note to experts: this condition says $X$ is non-explosive); the state sequence $y(f(\omega))$ is a discrete-time Markov chain with initial distribution $\lambda$ (jump-chain property) and transition matrix $\Pi(Q)$; and $\forall n \in \mathbb{Z}_{\geq 0}$, $\forall B \in \mathcal{B}(\mathbb{R}_{\geq 0})$, $\Pr(H_n(f) \in B) = \text{Exp}(-q_{Y_n,Y_n})(B)$ (holding-time property).

We say $X$ is Markov with initial distribution $\lambda$ and rate matrix $Q$ to mean: for all $i \in S$, $\Pr(X(0) = i) = \lambda_i$, and for all $i, j$ and for small strictly positive values of $h$, the following holds for all $t \in T$ such that $0 < \Pr(X(t) = i)$:

$$\Pr(X(t+h) = j \mid X(t) = i) = [i=j] + q_{i,j} h + o(h),$$

where the term $[i=j]$ is 1 if $i = j$ and otherwise 0, and the little-o term $o(h)$ depends in a certain way on $i, j, h$.[15][16] The above equation shows that $q_{i,j}$ can be seen as measuring how quickly the transition from $i$ to $j$ happens for $i \neq j$, and how quickly the transition away from $i$ happens for $i = j$.

Communicating classes, transience, recurrence and positive and null recurrence are defined identically as for discrete-time Markov chains.

Write $P(t)$ for the matrix with entries $p_{ij} = \Pr(X_t = j \mid X_0 = i)$. Then the matrix $P(t)$ satisfies the forward equation, a first-order differential equation

$$P'(t) = P(t) Q,$$

where the prime denotes differentiation with respect to $t$. The solution to this equation is given by a matrix exponential:

$$P(t) = e^{tQ}.$$

Consider a simple case such as a CTMC on the state space $\{1,2\}$. The general $Q$ matrix for such a process is the following $2 \times 2$ matrix with $\alpha, \beta > 0$:

$$Q = \begin{pmatrix} -\alpha & \alpha \\ \beta & -\beta \end{pmatrix}.$$

The above relation for the forward matrix can be solved explicitly in this case to give

$$P(t) = \begin{pmatrix} \frac{\beta}{\alpha+\beta} + \frac{\alpha}{\alpha+\beta} e^{-(\alpha+\beta)t} & \frac{\alpha}{\alpha+\beta} - \frac{\alpha}{\alpha+\beta} e^{-(\alpha+\beta)t} \\ \frac{\beta}{\alpha+\beta} - \frac{\beta}{\alpha+\beta} e^{-(\alpha+\beta)t} & \frac{\alpha}{\alpha+\beta} + \frac{\beta}{\alpha+\beta} e^{-(\alpha+\beta)t} \end{pmatrix}.$$

Computing direct solutions is complicated in larger matrices. Instead, the fact that $Q$ is the generator for a semigroup of matrices, $P(t+s) = P(t)P(s)$, is used.

The stationary distribution for an irreducible recurrent CTMC is the probability distribution to which the process converges for large values of $t$. Observe that for the two-state process considered earlier, with $P(t)$ given as above, as $t \to \infty$ the distribution tends to

$$\begin{pmatrix} \frac{\beta}{\alpha+\beta} & \frac{\alpha}{\alpha+\beta} \\ \frac{\beta}{\alpha+\beta} & \frac{\alpha}{\alpha+\beta} \end{pmatrix}.$$

Observe that each row converges to the same distribution, since the limit does not depend on the starting state.
The row vector $\pi$ may be found by solving

$$\pi Q=0,$$

subject to the constraint

$$\sum_{i}\pi_i=1.$$

The image to the right describes a continuous-time Markov chain with state-space {Bull market, Bear market, Stagnant market} and its transition-rate matrix. The stationary distribution of this chain can be found by solving $\pi Q=0$, subject to the constraint that the elements must sum to 1.

The image to the right describes a continuous-time Markov chain modeling Pac-Man with state-space {1,2,3,4,5,6,7,8,9}. The player controls Pac-Man through a maze, eating pac-dots. Meanwhile, he is being hunted by ghosts. For convenience, the maze is a small 3×3 grid, and the ghosts move randomly in horizontal and vertical directions. A secret passageway between states 2 and 8 can be used in both directions. Entries with probability zero are removed in the following transition-rate matrix:

$$Q=\begin{pmatrix}-1&\tfrac{1}{2}&&\tfrac{1}{2}\\\tfrac{1}{4}&-1&\tfrac{1}{4}&&\tfrac{1}{4}&&&\tfrac{1}{4}\\&\tfrac{1}{2}&-1&&&\tfrac{1}{2}\\\tfrac{1}{3}&&&-1&\tfrac{1}{3}&&\tfrac{1}{3}\\&\tfrac{1}{4}&&\tfrac{1}{4}&-1&\tfrac{1}{4}&&\tfrac{1}{4}\\&&\tfrac{1}{3}&&\tfrac{1}{3}&-1&&&\tfrac{1}{3}\\&&&\tfrac{1}{2}&&&-1&\tfrac{1}{2}\\&\tfrac{1}{4}&&&\tfrac{1}{4}&&\tfrac{1}{4}&-1&\tfrac{1}{4}\\&&&&&\tfrac{1}{2}&&\tfrac{1}{2}&-1\end{pmatrix}$$

This Markov chain is irreducible, because the ghosts can fly from every state to every state in a finite amount of time. Due to the secret passageway, the Markov chain is also aperiodic, because the ghosts can move from any state to any state both in an even and in an odd number of state transitions. Therefore, a unique stationary distribution exists and can be found by solving $\pi Q=0$, subject to the constraint that the elements must sum to 1. The solution of this linear equation subject to the constraint is

$$\pi=(7.7,15.4,7.7,11.5,15.4,11.5,7.7,15.4,7.7)\,\%.$$

The central state and the border states 2 and 8 of the adjacent secret passageway are visited most, and the corner states are visited least.

For a CTMC $X_t$, the time-reversed process is defined to be $\hat{X}_t=X_{T-t}$. By Kelly's lemma this process has the same stationary distribution as the forward process. A chain is said to be reversible if the reversed process is the same as the forward process. Kolmogorov's criterion states that the necessary and sufficient condition for a process to be reversible is that the product of transition rates around a closed loop must be the same in both directions.

One method of finding the stationary probability distribution $\pi$ of an ergodic continuous-time Markov chain with rate matrix $Q$ is by first finding its embedded Markov chain (EMC). Strictly speaking, the EMC is a regular discrete-time Markov chain. Each element of the one-step transition probability matrix of the EMC, $S$, is denoted by $s_{ij}$ and represents the conditional probability of transitioning from state $i$ into state $j$. These conditional probabilities may be found by

$$s_{ij}=\frac{q_{ij}}{\sum_{k\neq i}q_{ik}}\quad\text{if }i\neq j,\qquad s_{ii}=0.$$

From this, $S$ may be written as

$$S=I-\left(\operatorname{diag}(Q)\right)^{-1}Q,$$

where $I$ is the identity matrix and $\operatorname{diag}(Q)$ is the diagonal matrix formed by selecting the main diagonal from the matrix $Q$ and setting all other elements to zero.
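Before turning to the embedded-chain method, which continues below, the Pac-Man example can be checked directly. The sketch that follows is our own and assumes NumPy; it rebuilds Q from the grid adjacency (including the secret passage) and solves πQ = 0 together with the normalisation constraint as one least-squares problem.

    import numpy as np

    # Neighbours in the 3x3 grid (states 1..9), plus the secret passage 2 <-> 8.
    adj = {1: [2, 4], 2: [1, 3, 5, 8], 3: [2, 6],
           4: [1, 5, 7], 5: [2, 4, 6, 8], 6: [3, 5, 9],
           7: [4, 8], 8: [2, 5, 7, 9], 9: [6, 8]}

    Q = np.zeros((9, 9))
    for i, nbrs in adj.items():
        for j in nbrs:
            Q[i - 1, j - 1] = 1.0 / len(nbrs)  # uniform rate to each neighbour
        Q[i - 1, i - 1] = -1.0                 # rows of a rate matrix sum to 0

    # Solve pi Q = 0 subject to sum(pi) = 1 by appending the normalisation row.
    A = np.vstack([Q.T, np.ones(9)])
    b = np.zeros(10); b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(np.round(100 * pi, 1))   # [ 7.7 15.4  7.7 11.5 15.4 11.5  7.7 15.4  7.7]

The output reproduces the stationary distribution quoted above, with the centre and the passageway states visited most often.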
To find the stationary probability distribution vector, we must next find $\varphi$ such that

$$\varphi S=\varphi,$$

with $\varphi$ being a row vector such that all elements in $\varphi$ are greater than 0 and $\|\varphi\|_{1}=1$. From this, $\pi$ may be found as

$$\pi=-\varphi\left(\operatorname{diag}(Q)\right)^{-1}.$$

($S$ may be periodic, even if $Q$ is not. Once $\pi$ is found, it must be normalized to a unit vector.)

Another discrete-time process that may be derived from a continuous-time Markov chain is a δ-skeleton: the (discrete-time) Markov chain formed by observing $X(t)$ at intervals of δ units of time. The random variables $X(0),X(\delta),X(2\delta),\dots$ give the sequence of states visited by the δ-skeleton.
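The embedded-chain method can be sketched in the same setting. This is again our own illustration, reusing the Pac-Man matrix Q from the previous sketch; it assumes the EMC is aperiodic, so that simple power iteration converges (as noted above, S may be periodic even when Q is not), and that no diagonal entry of Q is zero.

    import numpy as np

    def stationary_via_emc(Q, iters=2000):
        # Assumes every diagonal entry q_ii is strictly negative (no absorbing states).
        d = np.diag(Q)                          # the entries q_ii
        S = np.eye(len(Q)) - Q / d[:, None]     # S = I - (diag(Q))^{-1} Q
        phi = np.ones(len(Q)) / len(Q)
        for _ in range(iters):                  # power iteration towards phi S = phi
            phi = phi @ S
            phi /= phi.sum()
        pi = -phi / d                           # unnormalised pi = -phi (diag(Q))^{-1}
        return pi / pi.sum()                    # normalise to a unit (1-norm) vector

    # print(stationary_via_emc(Q))  # agrees with the direct solution above

For the Pac-Man chain every holding rate equals 1, so the EMC's stationary vector and the CTMC's stationary distribution coincide; in general the division by the rates in the last step is what converts one into the other.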
https://en.wikipedia.org/wiki/Continuous-time_Markov_process
Chu spaces generalize the notion of topological space by dropping the requirements that the set of open sets be closed under union and finite intersection, that the open sets be extensional, and that the membership predicate (of points in open sets) be two-valued. The definition of continuous function remains unchanged other than having to be worded carefully to continue to make sense after these generalizations. The name is due to Po-Hsiang Chu, who originally constructed these categories, in the course of a construction of *-autonomous categories, as a graduate student under the direction of Michael Barr in 1979.[1]

Understood statically, a Chu space (A, r, X) over a set K consists of a set A of points, a set X of states, and a function r : A × X → K. This makes it an A × X matrix with entries drawn from K, or equivalently a K-valued binary relation between A and X (ordinary binary relations being 2-valued). Understood dynamically, Chu spaces transform in the manner of topological spaces, with A as the set of points, X as the set of open sets, and r as the membership relation between them, where K is the set of all possible degrees of membership of a point in an open set.

The counterpart of a continuous function from (A, r, X) to (B, s, Y) is a pair (f, g) of functions f : A → B, g : Y → X satisfying the adjointness condition s(f(a), y) = r(a, g(y)) for all a ∈ A and y ∈ Y. That is, f maps points forwards at the same time as g maps states backwards. The adjointness condition makes g the inverse image function f⁻¹, while the choice of X for the codomain of g corresponds to the requirement for continuous functions that the inverse image of open sets be open. Such a pair is called a Chu transform or morphism of Chu spaces.

A topological space (X, T), where X is the set of points and T the set of open sets, can be understood as a Chu space (X, ∈, T) over {0, 1}. That is, the points of the topological space become those of the Chu space while the open sets become states, and the membership relation "∈" between points and open sets is made explicit in the Chu space. The condition that the set of open sets be closed under arbitrary (including empty) union and finite (including empty) intersection becomes the corresponding condition on the columns of the matrix. A continuous function f : X → X′ between two topological spaces becomes an adjoint pair (f, g) in which f is now paired with a realization of the continuity condition constructed as an explicit witness function g exhibiting the requisite open sets in the domain of f.

The category of Chu spaces over K and their maps is denoted by Chu(Set, K). As is clear from the symmetry of the definitions, it is a self-dual category: it is equivalent (in fact isomorphic) to its dual, the category obtained by reversing all the maps. It is furthermore a *-autonomous category with dualizing object (K, λ, {*}), where λ : K × {*} → K is defined by λ(k, *) = k (Barr 1979). As such it is a model of Jean-Yves Girard's linear logic (Girard 1987).

The more general enriched category Chu(V, k) originally appeared in an appendix to Barr (1979). The Chu space concept originated with Michael Barr, and the details were developed by his student Po-Hsiang Chu, whose master's thesis formed the appendix. Ordinary Chu spaces arise as the case V = Set, that is, when the monoidal category V is specialized to the cartesian closed category Set of sets and their functions, but were not studied in their own right until more than a decade after the appearance of the more general enriched notion.
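For finite sets the definitions above can be animated directly. The following is a minimal sketch of our own (the point, state and value choices are arbitrary, with K = {0, 1}): it represents two small Chu spaces as dictionaries and enumerates all pairs (f, g) satisfying the adjointness condition.

    from itertools import product

    # A Chu space (A, r, X) over K = {0, 1}: points, states, and the matrix r.
    A, X = ['p', 'q'], ['u', 'v', 'w']
    r = {('p', 'u'): 0, ('p', 'v'): 1, ('p', 'w'): 1,
         ('q', 'u'): 0, ('q', 'v'): 0, ('q', 'w'): 1}

    B, Y = ['s'], ['y0', 'y1']
    s = {('s', 'y0'): 0, ('s', 'y1'): 1}

    def is_chu_transform(f, g):
        # (f, g) is a Chu transform iff s(f(a), y) == r(a, g(y)) for all a, y.
        return all(s[(f[a], y)] == r[(a, g[y])] for a in A for y in Y)

    # Enumerate every candidate pair (f : A -> B, g : Y -> X).
    for f_vals in product(B, repeat=len(A)):
        f = dict(zip(A, f_vals))
        for g_vals in product(X, repeat=len(Y)):
            g = dict(zip(Y, g_vals))
            if is_chu_transform(f, g):
                print(f, g)

For these particular matrices exactly one transform exists, with g(y0) = u and g(y1) = w: g must choose, for each state of the target space, a state of the source whose column matches the transported membership degrees, which is precisely the inverse-image behaviour described above.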
A variant of Chu spaces, called dialectica spaces, due to de Paiva (1989), modifies the notion of morphism: the equational adjointness condition on Chu transforms is replaced by a lax (inequality) version of the same condition.

The category Top of topological spaces and their continuous functions embeds in Chu(Set, 2) in the sense that there exists a full and faithful functor F : Top → Chu(Set, 2) providing for each topological space (X, T) its representation F((X, T)) = (X, ∈, T) as noted above. This representation is moreover a realization in the sense of Pultr and Trnková (1980), namely that the representing Chu space has the same set of points as the represented topological space and transforms in the same way via the same functions.

Chu spaces are remarkable for the wide variety of familiar structures they realize. Lafont and Streicher (1991) point out that Chu spaces over 2 realize both topological spaces and coherent spaces (introduced by J.-Y. Girard (1987) to model linear logic), while Chu spaces over K realize any category of vector spaces over a field whose cardinality is at most that of K. This was extended by Vaughan Pratt (1995) to the realization of k-ary relational structures by Chu spaces over 2^k. For example, the category Grp of groups and their homomorphisms is realized by Chu(Set, 8), since the group multiplication can be organized as a ternary relation. Chu(Set, 2) realizes a wide range of "logical" structures such as semilattices, distributive lattices, complete and completely distributive lattices, Boolean algebras, complete atomic Boolean algebras, etc. Further information on this and other aspects of Chu spaces, including their application to the modeling of concurrent behavior, may be found at Chu Spaces.

Chu spaces can serve as a model of concurrent computation in automata theory to express branching time and true concurrency. Chu spaces exhibit the quantum mechanical phenomena of complementarity and uncertainty. The complementarity arises as the duality of information and time, automata and schedules, and states and events. Uncertainty arises when a measurement is defined to be a morphism such that increasing structure in the observed object reduces the clarity of observation. This uncertainty can be calculated numerically from its form factor to yield the usual Heisenberg uncertainty relation. Chu spaces correspond to wavefunctions as vectors of Hilbert space.[2]
https://en.wikipedia.org/wiki/Chu_space
Cybersquatting (also known as domain squatting) is the practice of registering, trafficking in, or using an Internet domain name with a bad-faith intent to profit from the goodwill of a trademark belonging to someone else. The term is derived from "squatting", the act of occupying an abandoned or unoccupied space or building that the squatter does not own, rent, or otherwise have permission to use.

In popular terms, "cybersquatting" is the term most frequently used to describe the deliberate, bad-faith, abusive registration of a domain name in violation of trademark rights. However, precisely because of its popular currency, the term has different meanings to different people. Some people, for example, include "warehousing", the practice of registering a collection of domain names corresponding to trademarks with the intention of selling the registrations to the owners of the trademarks, within the notion of cybersquatting, while others distinguish between the two terms.[1] In the former definition, the cybersquatter may offer to sell the domain to the person or company who owns a trademark contained within the name at an inflated price. Similarly, some consider "cyberpiracy" to be interchangeable with "cybersquatting", whereas others consider that the former term relates to violation of copyright in the content of websites rather than to abusive domain name registrations.[1] Because of these varying interpretations, the World Intellectual Property Organization (WIPO), in a 1999 report approved by its member states, defined cybersquatting as the abusive registration of a domain name.[2][3]

Since 1999, WIPO has provided an administrative process wherein a trademark holder can attempt to claim a squatted site. Trademark owners in 2021 filed a record 5,128 cases under the Uniform Domain-Name Dispute-Resolution Policy (UDRP) with WIPO's Arbitration and Mediation Center, exceeding the 2020 level by 22%. The surge pushed the cumulative number of WIPO cybersquatting cases to almost 56,000 and the total number of domain names covered past the 100,000 mark.[4] As a point of comparison, in 2006 there were 1,823 complaints filed with WIPO, itself a 25% increase over the 2005 rate.[5] The accelerating growth in cybersquatting cases filed with the WIPO Center has been largely attributed by the WIPO Center[6] to trademark owners reinforcing their online presence to offer authentic content and trusted sales outlets, with a greater number of people spending more time online, especially during the COVID-19 pandemic.

Representing 70% of WIPO's generic top-level domain (gTLD) cases, .com demonstrated its continuing primacy. WIPO UDRP cases in 2021 involved parties from 132 countries. The top three business areas were banking and finance (13%), Internet and IT (13%), and biotechnology and pharmaceuticals (11%).[7] The U.S., with 1,760 cases filed, France (938), the U.K. (450), Switzerland (326), and Germany (251) were the top five filing countries.[8] In 2007 it was stated that 84% of the claims made since 1999 were decided in the complaining party's favor.[5]

Some countries have specific laws against cybersquatting beyond the normal rules of trademark law. For example, under the United States federal law known as the Anticybersquatting Consumer Protection Act (ACPA), cybersquatting is registering, trafficking in, or using an Internet domain name with bad-faith intent to profit from the goodwill of a trademark belonging to someone else.
The United States adopted the Anticybersquatting Consumer Protection Act in 1999. This expansion of the Lanham (Trademark) Act (15 U.S.C.) is intended to provide protection against cybersquatting for individuals as well as owners of distinctive trademarked names. However, some notable personalities, including actor Kevin Spacey, failed to obtain control of their names on the internet, because the US ACPA considers ownership of a website name "fair use" for which no permission is needed, unless there is an attempt to profit from the domain name by putting it up for sale.[9] Jurisdiction is an issue, as shown in the case involving Kevin Spacey, in which Judge Gary A. Feess of the United States District Court for the Central District of California ruled that the actor would have to file a complaint in a Canadian court, where the then-current owner of kevinspacey.com resided. Spacey later won the domain through FORUM (formerly known as the National Arbitration Forum).

In relation to cybersquatting, the Spanish Supreme Court issued its first ruling on this practice, relating it to the crime of misappropriation (STS 358/2022, of April 7), an unprecedented decision that established how this computer crime fits into Spanish case law. The case revolves around four members of the religious association Alpha Education for Comprehensive Health. They created a web page (whose Internet domain was www.alfatelevision.org) and opened a bank account and a PayPal account for donations made to the association. Sometime later, after disagreements between the members of the association and the four defendants, the defendants opened a new website, changed the internet domain and the passwords of the accounts, and thereby redirected all the donations from the followers. The association later dismissed the four members. The association's general secretary denounced the four members for a crime of misappropriation, and they were convicted by the Provincial Court of Guadalajara on the understanding that the internet domain was an asset of the association. This resolution was challenged before the Supreme Court through an appeal, which was upheld by the court. The Supreme Court ultimately acquitted the four accused, holding that the proven facts did not fit the crime of misappropriation. It stressed that the required elements of that crime were absent in this case and that the actions carried out by these individuals (creation of another domain, change of passwords, and so on) occurred prior to their dismissal, when they were still authorized to carry them out. In addition, the ruling sets out cases in which cybersquatting could have criminal relevance. First, if the conduct sought to harm the rights of a brand, it could constitute a crime against industrial or intellectual property. Second, if the intention was to use the domain name deceitfully to cause an error in the transfer of assets, the accused could face a crime of fraud. Finally, if cybersquatting were used to attack a domain name, the accused would be facing a crime of computer sabotage.[10]

With the rise of social media websites such as Facebook and Twitter, a new form of cybersquatting involves registering trademark-protected brands or names of public figures on popular social media websites. Such cases may be referred to as "username squatting". On June 5, 2009, Tony La Russa, the manager of the St.
Louis Cardinals, filed a complaint against Twitter, accusing Twitter of cybersquatting.[22] The dispute centered on a Twitter profile that used La Russa's name, had a picture of La Russa, and had a headline that said "Hey there! Tony La Russa is now using Twitter." The profile encouraged users to "join today to start receiving Tony La Russa's updates." According to La Russa, the status updates were vulgar and derogatory. La Russa argued that the author of the profile intended, in bad faith, to divert Internet traffic away from La Russa's website and make a profit from the injury to La Russa's mark.[22] On June 26, 2009, La Russa filed a notice of voluntary dismissal after the parties settled the case.[23]

Social networking websites have attempted to curb cybersquatting by making it a violation of their terms of service. Twitter's name-squatting policy forbids cybersquatting similar to that seen in many domain name disputes, such as "username for sale" accounts: "Attempts to sell or extort other forms of payment in exchange for usernames will result in account suspension."[24] Additionally, Twitter has an "Impersonation Policy" that forbids non-parody impersonation. An account may be guilty of impersonation if it confuses or misleads others; "accounts with the clear intent to confuse or mislead may be permanently suspended." Twitter's standard for defining parody is whether a reasonable person would be aware that the fake profile is a joke.[25] Soon after the La Russa suit was filed, Twitter took another step to prevent "identity confusion" caused by squatting by unveiling Twitter verification.[26] Usernames stamped with the "verified account" insignia are intended to indicate that the accounts are real and authentic. However, after the acquisition of Twitter by Elon Musk, the verification system was changed to make it easier for individuals to get verified through the Twitter Blue program,[27] with accounts instead given "Profile Labels" identifying ownership information, such as whether the account belongs to an individual, a business, or a government.[28]

Facebook reserves the right to reclaim usernames on the website if they infringe on a trademark.[29] Trademark owners are responsible for reporting any trademark infringement via a username infringement form that Facebook provides. Furthermore, Facebook usernames require "mobile phone authentication":[29] to obtain a username, the individual needs to verify the account by phone.

This article incorporates text from a free content work, licensed under CC-BY-4.0. Text taken from 2021 WIPO's Global Intellectual Property Filing Services, WIPO.
https://en.wikipedia.org/wiki/Lapsed_lurker
ITIL (previously and also known as the Information Technology Infrastructure Library) is a framework with a set of practices (previously processes) for IT activities such as IT service management (ITSM) and IT asset management (ITAM) that focus on aligning IT services with the needs of the business.[1]

ITIL describes best practices, including processes, procedures, tasks, and checklists which are neither organization-specific nor technology-specific. It is designed to allow organizations to establish a baseline and can be used to demonstrate compliance and to measure improvements. There is no formal independent third-party compliance assessment available to demonstrate ITIL compliance in an organization; certification in ITIL is only available to individuals, not organizations. Since 2021, the ITIL trademark has been owned by PeopleCert.[2]

Responding to growing dependence on IT, the UK Government's Central Computer and Telecommunications Agency (CCTA) in the 1980s developed a set of recommendations designed to standardize IT management practices across government functions, built around a process model-based view of controlling and managing operations often credited to W. Edwards Deming and his plan-do-check-act (PDCA) cycle.[3]

ITIL 4 contains seven guiding principles: focus on value; start where you are; progress iteratively with feedback; collaborate and promote visibility; think and work holistically; keep it simple and practical; and optimize and automate.

ITIL 4 consists of 34 practices grouped into 3 categories: general management practices, service management practices, and technical management practices.

ITIL 4 certification can be obtained by different roles in IT management. Certification starts with ITIL 4 Foundation, followed by one of two branches:[11] ITIL 4 Managing Professional and ITIL 4 Strategic Leader.
https://en.wikipedia.org/wiki/Infrastructure_Management_Services
A nomological network (or nomological net[1]) is a representation of the concepts (constructs) of interest in a study, their observable manifestations, and the interrelationships between these. The term "nomological" derives from the Greek, meaning "lawful", or in philosophy of science terms, "law-like". It was Cronbach and Meehl's view of construct validity that in order to provide evidence that a measure has construct validity, a nomological network must be developed for its measure.[2]

The necessary elements of a nomological network are a theoretical framework for what is being measured, an empirical framework for how it is to be measured, and a specification of the linkages among and between these two frameworks.

Validity evidence based on nomological validity is a general form of construct validity: it is the degree to which a construct behaves as it should within a system of related constructs (the nomological network).[3] Nomological networks are used in theory development and use a modernist[clarification needed] approach.[4]
https://en.wikipedia.org/wiki/Nomological_network
In computing, syslog (/ˈsɪslɒɡ/) is a standard for message logging. It allows separation of the software that generates messages, the system that stores them, and the software that reports and analyzes them. Each message is labeled with a facility code, indicating the type of system generating the message, and is assigned a severity level.

Computer system designers may use syslog for system management and security auditing as well as general informational, analysis, and debugging messages. A wide variety of devices, such as printers, routers, and message receivers across many platforms use the syslog standard. This permits the consolidation of logging data from different types of systems in a central repository. Implementations of syslog exist for many operating systems. When operating over a network, syslog uses a client-server architecture where a syslog server listens for and logs messages coming from clients.

Syslog was developed in the 1980s by Eric Allman as part of the Sendmail project.[1] It was readily adopted by other applications and has since become the standard logging solution on Unix-like systems.[2] A variety of implementations also exist on other operating systems, and it is commonly found in network devices such as routers.[3] Syslog originally functioned as a de facto standard, without any authoritative published specification, and many implementations existed, some of which were incompatible. The Internet Engineering Task Force documented the status quo in RFC 3164 in August 2001. It was standardized by RFC 5424 in March 2009.[4] Various companies have attempted to claim patents for specific aspects of syslog implementations.[5][6] This has had little effect on the use and standardization of the protocol.[citation needed]

The information provided by the originator of a syslog message includes the facility code and the severity level. The syslog software adds further information to the message header before passing the entry to the syslog receiver; such components include an originator process ID, a timestamp, and the hostname or IP address of the device.

A facility code is used to specify the type of system that is logging the message. Messages with different facilities may be handled differently.[7] The list of facilities available is described by the standard:[4]: 9 0 kernel messages; 1 user-level messages; 2 mail system; 3 system daemons; 4 security/authorization messages; 5 messages generated internally by syslogd; 6 line printer subsystem; 7 network news subsystem; 8 UUCP subsystem; 9 clock daemon; 10 security/authorization messages; 11 FTP daemon; 12 NTP subsystem; 13 log audit; 14 log alert; 15 clock daemon; and 16–23 local use 0–7 (local0 through local7). The mapping between facility code and keyword is not uniform in different operating systems and syslog implementations.[8]

The list of severities of issues is also described by the standard:[4]: 10 0 Emergency, 1 Alert, 2 Critical, 3 Error, 4 Warning, 5 Notice, 6 Informational, and 7 Debug. The meaning of severity levels other than Emergency and Debug is relative to the application. For example, if the purpose of the system is to process transactions to update customer account balance information, an error in the final step should be assigned Alert level. However, an error occurring in an attempt to display the ZIP code of the customer may be assigned Error or even Warning level. The server process that handles the display of messages usually includes all lower (more severe) levels when the display of less severe levels is requested. That is, if messages are separated by individual severity, a Warning-level entry will also be included when filtering for Notice, Info and Debug messages.[12]

In RFC 3164, the message component (known as MSG) was specified as having these fields: TAG, which should be the name of the program or process that generated the message, and CONTENT, which contains the details of the message. Described in RFC 5424,[4] "MSG is what was called CONTENT in RFC 3164. The TAG is now part of the header, but not as a single field.
The TAG has been split into APP-NAME, PROCID, and MSGID. This does not totally resemble the usage of TAG, but provides the same functionality for most of the cases." Popular syslog tools such as NXLog and Rsyslog conform to this new standard. The content field should be encoded in a UTF-8 character set, and octet values in the traditional ASCII control character ranges should be avoided.[13][4]

Generated log messages may be directed to various destinations including the console, files, remote syslog servers, or relays. Most implementations provide a command line utility, often called logger, as well as a software library, to send messages to the log.[14] To display and monitor the collected logs one needs to use a client application or access the log file directly on the system; the basic command line tools are tail and grep. The log servers can be configured to send the logs over the network (in addition to the local files). Some implementations include reporting programs for filtering and displaying of syslog messages.

When operating over a network, syslog uses a client-server architecture where the server listens on a well-known or registered port for protocol requests from clients. Historically the most common transport layer protocol for network logging has been User Datagram Protocol (UDP), with the server listening on port 514.[15] Because UDP lacks congestion control mechanisms, Transmission Control Protocol (TCP) port 6514 is also used; in that case Transport Layer Security is required in implementations and recommended for general use.[16][17]

Since each process, application, and operating system was written independently, there is little uniformity to the payload of the log message. For this reason, no assumption is made about its formatting or contents. A syslog message is formatted (RFC 5424 gives the Augmented Backus–Naur form (ABNF) definition), but its MSG field is not. The network protocol is simplex communication, with no means of acknowledging the delivery to the originator.

Various groups are working on draft standards detailing the use of syslog for more than just network and security event logging, such as its proposed application within the healthcare environment.[18] Regulations, such as the Sarbanes–Oxley Act, PCI DSS, HIPAA, and many others, require organizations to implement comprehensive security measures, which often include collecting and analyzing logs from many different sources. The syslog format has proven effective in consolidating logs, as there are many open-source and proprietary tools for reporting and analysis of these logs. Utilities exist for conversion from Windows Event Log and other log formats to syslog. Managed security service providers attempt to apply analytical techniques and artificial intelligence algorithms to detect patterns and alert customers to problems.[19]

The syslog protocol is defined by Request for Comments (RFC) documents published by the Internet Engineering Task Force (Internet standards). They include RFC 3164 (the original "BSD syslog" description, now obsolete), RFC 5424 (the current syslog protocol), RFC 5425 (TLS transport mapping), RFC 5426 (UDP transport mapping), and RFC 6587 (transmission over TCP).[20]
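The facility and severity codes described earlier combine arithmetically into the priority value (PRI) that prefixes each message on the wire: PRI = facility × 8 + severity. The following is a minimal sketch of our own in Python; the hostname, tag and message text are invented, and the framing loosely follows the traditional BSD style (timestamp omitted for brevity).

    import socket

    FACILITY_LOCAL0 = 16       # facility code 16, "local use 0"
    SEVERITY_WARNING = 4       # severity code 4, Warning

    pri = FACILITY_LOCAL0 * 8 + SEVERITY_WARNING    # 16 * 8 + 4 = 132
    msg = f"<{pri}>myhost myapp: disk space low"    # sent as "<132>..." on the wire

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(msg.encode("utf-8"), ("127.0.0.1", 514))   # default syslog/UDP port
    sock.close()

Python's standard library exposes the same functionality at a higher level through logging.handlers.SysLogHandler.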
https://en.wikipedia.org/wiki/Syslog
Incomputer science,functional programmingis aprogramming paradigmwhere programs are constructed byapplyingandcomposingfunctions. It is adeclarative programmingparadigm in which function definitions aretreesofexpressionsthat mapvaluesto other values, rather than a sequence ofimperativestatementswhich update therunning stateof the program. In functional programming, functions are treated asfirst-class citizens, meaning that they can be bound to names (including localidentifiers), passed asarguments, andreturnedfrom other functions, just as any otherdata typecan. This allows programs to be written in adeclarativeandcomposablestyle, where small functions are combined in amodularmanner. Functional programming is sometimes treated as synonymous withpurely functional programming, a subset of functional programming that treats all functions asdeterministicmathematicalfunctions, orpure functions. When a pure function is called with some given arguments, it will always return the same result, and cannot be affected by any mutablestateor otherside effects. This is in contrast with impureprocedures, common inimperative programming, which can have side effects (such as modifying the program's state or taking input from a user). Proponents of purely functional programming claim that by restricting side effects, programs can have fewerbugs, be easier todebugandtest, and be more suited toformal verification.[1][2] Functional programming has its roots inacademia, evolving from thelambda calculus, a formal system of computation based only on functions. Functional programming has historically been less popular than imperative programming, but many functional languages are seeing use today in industry and education, includingCommon Lisp,Scheme,[3][4][5][6]Clojure,Wolfram Language,[7][8]Racket,[9]Erlang,[10][11][12]Elixir,[13]OCaml,[14][15]Haskell,[16][17]andF#.[18][19]Leanis a functional programming language commonly used for verifying mathematical theorems.[20]Functional programming is also key to some languages that have found success in specific domains, likeJavaScriptin the Web,[21]Rin statistics,[22][23]J,KandQin financial analysis, andXQuery/XSLTforXML.[24][25]Domain-specific declarative languages likeSQLandLex/Yaccuse some elements of functional programming, such as not allowingmutable values.[26]In addition, many other programming languages support programming in a functional style or have implemented features from functional programming, such asC++11,C#,[27]Kotlin,[28]Perl,[29]PHP,[30]Python,[31]Go,[32]Rust,[33]Raku,[34]Scala,[35]andJava (since Java 8).[36] Thelambda calculus, developed in the 1930s byAlonzo Church, is aformal systemofcomputationbuilt fromfunction application. In 1937Alan Turingproved that the lambda calculus andTuring machinesare equivalent models of computation,[37]showing that the lambda calculus isTuring complete. Lambda calculus forms the basis of all functional programming languages. An equivalent theoretical formulation,combinatory logic, was developed byMoses SchönfinkelandHaskell Curryin the 1920s and 1930s.[38] Church later developed a weaker system, thesimply typed lambda calculus, which extended the lambda calculus by assigning adata typeto all terms.[39]This forms the basis for statically typed functional programming. 
The first high-level functional programming language, Lisp, was developed in the late 1950s for the IBM 700/7000 series of scientific computers by John McCarthy while at the Massachusetts Institute of Technology (MIT).[40] Lisp functions were defined using Church's lambda notation, extended with a label construct to allow recursive functions.[41] Lisp first introduced many paradigmatic features of functional programming, though early Lisps were multi-paradigm languages, and incorporated support for numerous programming styles as new paradigms evolved. Later dialects, such as Scheme and Clojure, and offshoots such as Dylan and Julia, sought to simplify and rationalise Lisp around a cleanly functional core, while Common Lisp was designed to preserve and update the paradigmatic features of the numerous older dialects it replaced.[42]

Information Processing Language (IPL), 1956, is sometimes cited as the first computer-based functional programming language.[43] It is an assembly-style language for manipulating lists of symbols. It does have a notion of a generator, which amounts to a function that accepts a function as an argument, and, since it is an assembly-level language, code can be data, so IPL can be regarded as having higher-order functions. However, it relies heavily on the mutating list structure and similar imperative features.

Kenneth E. Iverson developed APL in the early 1960s, described in his 1962 book A Programming Language (ISBN 9780471430148). APL was the primary influence on John Backus's FP. In the early 1990s, Iverson and Roger Hui created J. In the mid-1990s, Arthur Whitney, who had previously worked with Iverson, created K, which is used commercially in financial industries along with its descendant Q.

In the mid-1960s, Peter Landin invented the SECD machine,[44] the first abstract machine for a functional programming language,[45] described a correspondence between ALGOL 60 and the lambda calculus,[46][47] and proposed the ISWIM programming language.[48]

John Backus presented FP in his 1977 Turing Award lecture "Can Programming Be Liberated From the von Neumann Style? A Functional Style and its Algebra of Programs".[49] He defines functional programs as being built up in a hierarchical way by means of "combining forms" that allow an "algebra of programs"; in modern language, this means that functional programs follow the principle of compositionality.[citation needed] Backus's paper popularized research into functional programming, though it emphasized function-level programming rather than the lambda-calculus style now associated with functional programming.

The 1973 language ML was created by Robin Milner at the University of Edinburgh, and David Turner developed the language SASL at the University of St Andrews. Also in Edinburgh in the 1970s, Burstall and Darlington developed the functional language NPL.[50] NPL was based on Kleene recursion equations and was first introduced in their work on program transformation.[51] Burstall, MacQueen and Sannella then incorporated the polymorphic type checking from ML to produce the language Hope.[52] ML eventually developed into several dialects, the most common of which are now OCaml and Standard ML.

In the 1970s, Guy L. Steele and Gerald Jay Sussman developed Scheme, as described in the Lambda Papers and the 1985 textbook Structure and Interpretation of Computer Programs. Scheme was the first dialect of Lisp to use lexical scoping and to require tail-call optimization, features that encourage functional programming.
In the 1980s, Per Martin-Löf developed intuitionistic type theory (also called constructive type theory), which associated functional programs with constructive proofs expressed as dependent types. This led to new approaches to interactive theorem proving and has influenced the development of subsequent functional programming languages.[citation needed]

The lazy functional language Miranda, developed by David Turner, initially appeared in 1985 and had a strong influence on Haskell. With Miranda being proprietary, Haskell began with a consensus in 1987 to form an open standard for functional programming research; implementation releases have been ongoing since 1990. More recently it has found use in niches such as parametric CAD in the OpenSCAD language built on the CGAL framework, although its restriction on reassigning values (all values are treated as constants) has led to confusion among users who are unfamiliar with functional programming as a concept.[53]

Functional programming continues to be used in commercial settings.[54][55][56]

A number of concepts[57] and paradigms are specific to functional programming, and generally foreign to imperative programming (including object-oriented programming). However, programming languages often cater to several programming paradigms, so programmers using "mostly imperative" languages may have utilized some of these concepts.[58]

Higher-order functions are functions that can either take other functions as arguments or return them as results. In calculus, an example of a higher-order function is the differential operator $d/dx$, which returns the derivative of a function $f$.

Higher-order functions are closely related to first-class functions in that higher-order functions and first-class functions both allow functions as arguments and results of other functions. The distinction between the two is subtle: "higher-order" describes a mathematical concept of functions that operate on other functions, while "first-class" is a computer science term for programming language entities that have no restriction on their use (thus first-class functions can appear anywhere in the program that other first-class entities like numbers can, including as arguments to other functions and as their return values).

Higher-order functions enable partial application or currying, a technique that applies a function to its arguments one at a time, with each application returning a new function that accepts the next argument. This lets a programmer succinctly express, for example, the successor function as the addition operator partially applied to the natural number one (a sketch appears below).

Pure functions (or expressions) have no side effects (memory or I/O). This means that pure functions have several useful properties, many of which can be used to optimize the code: if the result of a pure expression is not used, it can be removed without affecting other expressions; a pure function called twice with the same arguments returns the same result both times, enabling caching (memoization); and two pure expressions with no data dependency between them can be reordered or evaluated in parallel. While most compilers for imperative programming languages detect pure functions and perform common-subexpression elimination for pure function calls, they cannot always do this for pre-compiled libraries, which generally do not expose this information, thus preventing optimizations that involve those external functions. Some compilers, such as gcc, add extra keywords for a programmer to explicitly mark external functions as pure, to enable such optimizations. Fortran 95 also lets functions be designated pure.[59] C++11 added the constexpr keyword with similar semantics.
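As a small illustration of the partial application described above (our own example, using Python's standard functools and operator modules; succ is our name for the resulting function):

    from functools import partial
    from operator import add

    succ = partial(add, 1)      # the addition operator partially applied to one
    print(succ(41))             # 42

    # The same idea written as an explicitly curried function:
    def curried_add(x):
        return lambda y: x + y

    print(curried_add(1)(41))   # 42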
Iteration (looping) in functional languages is usually accomplished via recursion. Recursive functions invoke themselves, letting an operation be repeated until it reaches the base case. In general, recursion requires maintaining a stack, which consumes space linear in the depth of recursion. This could make recursion prohibitively expensive to use instead of imperative loops. However, a special form of recursion known as tail recursion can be recognized and optimized by a compiler into the same code used to implement iteration in imperative languages. Tail recursion optimization can be implemented by transforming the program into continuation-passing style during compiling, among other approaches.

The Scheme language standard requires implementations to support proper tail recursion, meaning they must allow an unbounded number of active tail calls.[60][61] Proper tail recursion is not simply an optimization; it is a language feature that assures users that they can use recursion to express a loop, and that doing so would be safe-for-space.[62] Moreover, contrary to its name, it accounts for all tail calls, not just tail recursion. While proper tail recursion is usually implemented by turning code into imperative loops, implementations might implement it in other ways. For example, Chicken intentionally maintains a stack and lets the stack overflow. However, when this happens, its garbage collector will claim space back,[63] allowing an unbounded number of active tail calls even though it does not turn tail recursion into a loop.

Common patterns of recursion can be abstracted away using higher-order functions, with catamorphisms and anamorphisms (or "folds" and "unfolds") being the most obvious examples. Such recursion schemes play a role analogous to built-in control structures such as loops in imperative languages.

Most general-purpose functional programming languages allow unrestricted recursion and are Turing complete, which makes the halting problem undecidable, can cause unsoundness of equational reasoning, and generally requires the introduction of inconsistency into the logic expressed by the language's type system. Some special-purpose languages such as Coq allow only well-founded recursion and are strongly normalizing (nonterminating computations can be expressed only with infinite streams of values called codata). As a consequence, these languages fail to be Turing complete and expressing certain functions in them is impossible, but they can still express a wide class of interesting computations while avoiding the problems introduced by unrestricted recursion. Functional programming limited to well-founded recursion with a few other constraints is called total functional programming.[64]

Functional languages can be categorized by whether they use strict (eager) or non-strict (lazy) evaluation, concepts that refer to how function arguments are processed when an expression is being evaluated. The technical difference is in the denotational semantics of expressions containing failing or divergent computations. Under strict evaluation, the evaluation of any term containing a failing subterm fails. For example, the expression

    print length([2+1, 3*2, 1/0, 5-4])

fails under strict evaluation because of the division by zero in the third element of the list. Under lazy evaluation, the length function returns the value 4 (i.e., the number of items in the list), since evaluating it does not attempt to evaluate the terms making up the list. In brief, strict evaluation always fully evaluates function arguments before invoking the function.
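The contrast can be imitated even in an eager language. The following is a sketch of our own in Python (which evaluates strictly): wrapping each element in a thunk, a zero-argument function, defers its evaluation in roughly the way a lazy language does automatically.

    # "Lazy" encoding: each element is a thunk and is never forced by len().
    elems = [lambda: 2 + 1, lambda: 3 * 2, lambda: 1 / 0, lambda: 5 - 4]
    print(len(elems))   # prints 4; the division by zero is never evaluated

    try:
        eager = [2 + 1, 3 * 2, 1 / 0, 5 - 4]   # strict: elements evaluated immediately
    except ZeroDivisionError:
        print("strict evaluation fails before the length can be taken")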
Lazy evaluation does not evaluate function arguments unless their values are required to evaluate the function call itself. The usual implementation strategy for lazy evaluation in functional languages is graph reduction.[65] Lazy evaluation is used by default in several pure functional languages, including Miranda, Clean, and Haskell. Hughes 1984 argues for lazy evaluation as a mechanism for improving program modularity through separation of concerns, by easing independent implementation of producers and consumers of data streams.[2] Launchbury 1993 describes some difficulties that lazy evaluation introduces, particularly in analyzing a program's storage requirements, and proposes an operational semantics to aid in such analysis.[66] Harper 2009 proposes including both strict and lazy evaluation in the same language, using the language's type system to distinguish them.[67]

Especially since the development of Hindley–Milner type inference in the 1970s, functional programming languages have tended to use typed lambda calculus, which rejects all invalid programs at compilation time, at the risk of false positive errors (some valid programs are rejected too). This contrasts with the untyped lambda calculus used in Lisp and its variants (such as Scheme), which accepts all valid programs at compilation time, at the risk of false negative errors, since invalid programs are rejected only at runtime, when there is enough information to distinguish them from valid programs. The use of algebraic data types makes manipulation of complex data structures convenient; the presence of strong compile-time type checking makes programs more reliable in the absence of other reliability techniques like test-driven development, while type inference frees the programmer from the need to manually declare types to the compiler in most cases.

Some research-oriented functional languages such as Coq, Agda, Cayenne, and Epigram are based on intuitionistic type theory, which lets types depend on terms. Such types are called dependent types. These type systems do not have decidable type inference and are difficult to understand and program with.[68][69][70][71] But dependent types can express arbitrary propositions in higher-order logic. Through the Curry–Howard isomorphism, then, well-typed programs in these languages become a means of writing formal mathematical proofs from which a compiler can generate certified code. While these languages are mainly of interest in academic research (including in formalized mathematics), they have begun to be used in engineering as well. Compcert is a compiler for a subset of the language C that is written in Coq and formally verified.[72]

A limited form of dependent types called generalized algebraic data types (GADTs) can be implemented in a way that provides some of the benefits of dependently typed programming while avoiding most of its inconvenience.[73] GADTs are available in the Glasgow Haskell Compiler, in OCaml[74] and in Scala,[75] and have been proposed as additions to other languages including Java and C#.[76]

Functional programs do not have assignment statements; that is, the value of a variable in a functional program never changes once defined. This eliminates any chances of side effects, because any variable can be replaced with its actual value at any point of execution. So, functional programs are referentially transparent.[77]

Consider the C assignment statement x = x * 10; this changes the value assigned to the variable x. Let us say that the initial value of x was 1; then two consecutive evaluations of the expression x = x * 10 yield 10 and 100 respectively.
Clearly, replacing x = x * 10 with either 10 or 100 gives a program a different meaning, and so the expression is not referentially transparent. In fact, assignment statements are never referentially transparent. Now consider another function, such as int plusone(int x) { return x + 1; }. It is transparent, as it does not implicitly change the input x and thus has no such side effects. Functional programs exclusively use this type of function and are therefore referentially transparent.

Purely functional data structures are often represented in a different way to their imperative counterparts.[78] For example, the array with constant access and update times is a basic component of most imperative languages, and many imperative data structures, such as the hash table and binary heap, are based on arrays. Arrays can be replaced by maps or random access lists, which admit purely functional implementation, but have logarithmic access and update times. Purely functional data structures have persistence, a property of keeping previous versions of the data structure unmodified. In Clojure, persistent data structures are used as functional alternatives to their imperative counterparts. Persistent vectors, for example, use trees for partial updating. Calling the insert method will result in some but not all nodes being created.[79]

Functional programming is very different from imperative programming. The most significant differences stem from the fact that functional programming avoids side effects, which are used in imperative programming to implement state and I/O. Pure functional programming completely prevents side effects and provides referential transparency.

Higher-order functions are rarely used in older imperative programming. A traditional imperative program might use a loop to traverse and modify a list. A functional program, on the other hand, would probably use a higher-order "map" function that takes a function and a list, generating and returning a new list by applying the function to each list item. The following two examples (written in JavaScript) achieve the same effect: they multiply all even numbers in an array by 10 and add them all, storing the final sum in the variable result.

Traditional imperative loop:

    const numList = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
    let result = 0;
    for (let i = 0; i < numList.length; i++) {
        if (numList[i] % 2 === 0) {
            result += numList[i] * 10;
        }
    }

Functional programming with higher-order functions:

    const result = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
        .filter(n => n % 2 === 0)
        .map(n => n * 10)
        .reduce((x, y) => x + y, 0);

Sometimes the abstractions offered by functional programming might lead to development of more robust code that avoids certain issues that might arise when building upon a large amount of complex, imperative code, such as off-by-one errors (see Greenspun's tenth rule).

There are tasks (for example, maintaining a bank account balance) that often seem most naturally implemented with state. Pure functional programming performs these tasks, and I/O tasks such as accepting user input and printing to the screen, in a different way. The pure functional programming language Haskell implements them using monads, derived from category theory.[80] Monads offer a way to abstract certain types of computational patterns, including (but not limited to) modeling of computations with mutable state (and other side effects such as I/O) in an imperative manner without losing purity. While existing monads may be easy to apply in a program, given appropriate templates and examples, many students find them difficult to understand conceptually, e.g., when asked to define new monads (which is sometimes needed for certain types of libraries).[81]

Functional languages also simulate states by passing around immutable states.
This can be done by making a function accept the state as one of its parameters, and return a new state together with the result, leaving the old state unchanged[82] (a sketch of this style appears at the end of this subsection). Impure functional languages usually include a more direct method of managing mutable state. Clojure, for example, uses managed references that can be updated by applying pure functions to the current state. This kind of approach enables mutability while still promoting the use of pure functions as the preferred way to express computations.[citation needed]

Alternative methods such as Hoare logic and uniqueness types have been developed to track side effects in programs. Some modern research languages use effect systems to make the presence of side effects explicit.[83]

Functional programming languages are typically less efficient in their use of CPU and memory than imperative languages such as C and Pascal.[84] This is related to the fact that some mutable data structures like arrays have a very straightforward implementation using present hardware. Flat arrays may be accessed very efficiently with deeply pipelined CPUs, prefetched efficiently through caches (with no complex pointer chasing), or handled with SIMD instructions. It is also not easy to create their equally efficient general-purpose immutable counterparts. For purely functional languages, the worst-case slowdown is logarithmic in the number of memory cells used, because mutable memory can be represented by a purely functional data structure with logarithmic access time (such as a balanced tree).[85] However, such slowdowns are not universal. For programs that perform intensive numerical computations, functional languages such as OCaml and Clean are only slightly slower than C according to The Computer Language Benchmarks Game.[86] For programs that handle large matrices and multidimensional databases, array functional languages (such as J and K) were designed with speed optimizations.

Immutability of data can in many cases lead to execution efficiency by allowing the compiler to make assumptions that are unsafe in an imperative language, thus increasing opportunities for inline expansion.[87] Even if the copying involved in dealing with persistent immutable data structures might seem computationally costly, some functional programming languages, like Clojure, solve this issue by implementing mechanisms for safe memory sharing between formally immutable data.[88] Rust distinguishes itself by its approach to data immutability, which involves immutable references[89] and a concept called lifetimes.[90]

Immutable data with separation of identity and state, together with shared-nothing schemes, can also be well suited to concurrent and parallel programming, by virtue of reducing or eliminating the risk of certain concurrency hazards, since concurrent operations are usually atomic and this allows eliminating the need for locks. This is how, for example, the java.util.concurrent classes are implemented, some of which are immutable variants of corresponding classes that are not suitable for concurrent use.[91] Functional programming languages often have a concurrency model that, instead of shared state and synchronization, leverages message-passing mechanisms (such as the actor model, where each actor is a container for state, behavior, child actors and a message queue).[92][93] This approach is common in Erlang/Elixir or Akka.
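Returning to the state-passing style described at the start of this subsection, here is a minimal sketch of our own in Python, using the bank-account task mentioned earlier: the balance is never mutated, and each operation returns a result together with a new state.

    def deposit(balance, amount):
        return amount, balance + amount      # (result, new state)

    def withdraw(balance, amount):
        if amount > balance:
            return None, balance             # rejected; state unchanged
        return amount, balance - amount

    s0 = 100
    _, s1 = deposit(s0, 50)     # s1 == 150, while s0 is still 100
    _, s2 = withdraw(s1, 30)    # s2 == 120
    print(s0, s1, s2)           # 100 150 120

Because every intermediate state remains available, earlier balances can still be inspected after later operations, which is exactly the persistence property discussed above.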
Lazy evaluation may also speed up the program, even asymptotically, whereas it may slow it down at most by a constant factor (however, it may introduce memory leaks if used improperly). Launchbury 1993[66] discusses theoretical issues related to memory leaks from lazy evaluation, and O'Sullivan et al. 2008[94] give some practical advice for analyzing and fixing them. However, the most general implementations of lazy evaluation, making extensive use of dereferenced code and data, perform poorly on modern processors with deep pipelines and multi-level caches (where a cache miss may cost hundreds of cycles)[citation needed].

Some functional programming languages might not optimize abstractions such as higher-order functions like "map" or "filter" as efficiently as the underlying imperative operations. Consider, as an example, the following two ways to check if 5 is an even number in Clojure: the built-in predicate call (even? 5), and the direct comparison (.equals (mod 5 2) 0). When benchmarked using the Criterium tool on a Ryzen 7900X GNU/Linux PC in a Leiningen REPL 2.11.2, running on Java VM version 22 and Clojure version 1.11.1, the first implementation has the mean execution time of 4.76 ms, while the second one, in which .equals is a direct invocation of the underlying Java method, has a mean execution time of 2.8 μs – roughly 1700 times faster. Part of that can be attributed to the type checking and exception handling involved in the implementation of even?. By contrast, the lo library for Go, which implements various higher-order functions common in functional programming languages using generics, fares much better: in a benchmark provided by the library's author, calling map is only 4% slower than an equivalent for loop and has the same allocation profile,[95] which can be attributed to various compiler optimizations, such as inlining.[96]

One distinguishing feature of Rust is zero-cost abstractions. This means that using them imposes no additional runtime overhead. This is achieved thanks to the compiler using loop unrolling, where each iteration of a loop, be it imperative or using iterators, is converted into a standalone assembly instruction, without the overhead of the loop-controlling code. If an iterative operation writes to an array, the resulting array's elements will be stored in specific CPU registers, allowing for constant-time access at runtime.[97]

It is possible to use a functional style of programming in languages that are not traditionally considered functional languages.[98] For example, both D[99] and Fortran 95[59] explicitly support pure functions. JavaScript, Lua,[100] Python and Go[101] had first-class functions from their inception.[102] Python had support for "lambda", "map", "reduce", and "filter" in 1994, as well as closures in Python 2.2,[103] though Python 3 relegated "reduce" to the functools standard library module[104] (see the sketch below). First-class functions have been introduced into other mainstream languages such as Perl 5.0 in 1994, PHP 5.3, Visual Basic 9, C# 3.0, C++11, and Kotlin.[28][citation needed]

In Perl, lambda, map, reduce, filter, and closures are fully supported and frequently used. The book Higher-Order Perl, released in 2005, was written to provide an expansive guide on using Perl for functional programming. In PHP, anonymous classes, closures and lambdas are fully supported. Libraries and language extensions for immutable data structures are being developed to aid programming in the functional style.
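As an illustration of the Python support mentioned above, the earlier JavaScript task (sum the even numbers of a list, each multiplied by 10) can be written with the built-in filter and map and with functools.reduce:

    from functools import reduce

    nums = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
    result = reduce(lambda x, y: x + y,
                    map(lambda n: n * 10,
                        filter(lambda n: n % 2 == 0, nums)),
                    0)
    print(result)   # 300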
In Java, anonymous classes can sometimes be used to simulate closures;[105] however, anonymous classes are not always proper replacements for closures because they have more limited capabilities.[106] Java 8 supports lambda expressions as a replacement for some anonymous classes.[107] In C#, anonymous classes are not necessary, because closures and lambdas are fully supported. Libraries and language extensions for immutable data structures are being developed to aid programming in the functional style in C#.

Many object-oriented design patterns are expressible in functional programming terms: for example, the strategy pattern simply dictates use of a higher-order function (see the sketch at the end of this section), and the visitor pattern roughly corresponds to a catamorphism, or fold. Similarly, the idea of immutable data from functional programming is often included in imperative programming languages,[108] for example the tuple in Python, which is an immutable array, and Object.freeze() in JavaScript.[109]

Logic programming can be viewed as a generalisation of functional programming, in which functions are a special case of relations.[110] For example, the function mother(X) = Y (every X has only one mother Y) can be represented by the relation mother(X, Y). Whereas functions have a strict input-output pattern of arguments, relations can be queried with any pattern of inputs and outputs. A logic program defining the mother relation by a set of facts can be queried, like a functional program, to generate mothers from children; but it can also be queried backwards, to generate children; it can even be used to generate all instances of the mother relation.

Compared with relational syntax, functional syntax is a more compact notation for nested functions. For example, the definition of maternal grandmother in functional syntax can be written in the nested form

    maternal_grandmother(X) = mother(mother(X)).

The same definition in relational notation needs to be written in the unnested form

    maternal_grandmother(X, Y) :- mother(X, Z), mother(Z, Y).

Here ":-" means if and "," means and. However, the difference between the two representations is simply syntactic. In Ciao Prolog, relations can be nested, like functions in functional programming;[111] Ciao transforms the function-like notation into relational form and executes the resulting logic program using the standard Prolog execution strategy.

Emacs, a highly extensible text editor family, uses its own Lisp dialect for writing plugins. The original author of the most popular Emacs implementation, GNU Emacs and Emacs Lisp, Richard Stallman, considers Lisp one of his favorite programming languages.[112] Helix, since version 24.03, supports previewing the AST as S-expressions, which are also the core feature of the Lisp programming language family.[113]

Spreadsheets can be considered a form of pure, zeroth-order, strict-evaluation functional programming system.[114] However, spreadsheets generally lack higher-order functions as well as code reuse, and in some implementations, also lack recursion. Several extensions have been developed for spreadsheet programs to enable higher-order and reusable functions, but so far they remain primarily academic in nature.[115]

Due to their composability, functional programming paradigms can be suitable for microservices-based architectures.[116]

Functional programming is an active area of research in the field of programming language theory. There are several peer-reviewed publication venues focusing on functional programming, including the International Conference on Functional Programming, the Journal of Functional Programming, and the Symposium on Trends in Functional Programming.
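To make concrete the claim above that the strategy pattern reduces to passing a higher-order function, here is a short sketch of our own in Python; the pricing functions and their names are invented for illustration.

    def no_discount(price):
        return price

    def holiday_sale(price):
        return price * 0.8           # a 20% discount

    def checkout(prices, discount):  # `discount` is the interchangeable strategy
        return sum(discount(p) for p in prices)

    print(checkout([10.0, 25.0], no_discount))    # 35.0
    print(checkout([10.0, 25.0], holiday_sale))   # 28.0

No class hierarchy is needed: each strategy is simply a first-class function passed as an argument.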
Functional programming has been employed in a wide range of industrial applications. For example, Erlang, which was developed by the Swedish company Ericsson in the late 1980s, was originally used to implement fault-tolerant telecommunications systems,[11] but has since become popular for building a range of applications at companies such as Nortel, Facebook, Électricité de France and WhatsApp.[10][12][117][118][119] Scheme, a dialect of Lisp, was used as the basis for several applications on early Apple Macintosh computers[3][4] and has been applied to problems such as training-simulation software[5] and telescope control.[6] OCaml, which was introduced in the mid-1990s, has seen commercial use in areas such as financial analysis,[14] driver verification, industrial robot programming and static analysis of embedded software.[15] Haskell, though initially intended as a research language,[17] has also been applied in areas such as aerospace systems, hardware design and web programming.[16][17]

Other functional programming languages that have seen use in industry include Scala,[120] F#,[18][19] Wolfram Language,[7] Lisp,[121] Standard ML[122][123] and Clojure.[124] Scala has been widely used in data science,[125] while ClojureScript,[126] Elm[127] and PureScript[128] are some of the functional frontend programming languages used in production. Elixir's Phoenix framework is also used by some relatively popular commercial projects, such as Font Awesome or Allegro (one of the biggest e-commerce platforms in Poland)[129] and its classified-ads platform Allegro Lokalnie.[130]

Functional "platforms" have been popular in finance for risk analytics (particularly with large investment banks). Risk factors are coded as functions that form interdependent graphs (categories) to measure correlations in market shifts, similar in manner to Gröbner basis optimizations but also for regulatory frameworks such as Comprehensive Capital Analysis and Review. Given the use of OCaml and Caml variations in finance, these systems are sometimes considered related to a categorical abstract machine. Functional programming is heavily influenced by category theory.[citation needed]

Many universities teach functional programming.[131][132][133][134] Some treat it as an introductory programming concept,[134] while others first teach imperative programming methods.[133][135]

Outside of computer science, functional programming is used to teach problem-solving, algebraic and geometric concepts.[136] It has also been used to teach classical mechanics, as in the book Structure and Interpretation of Classical Mechanics. In particular, Scheme has been a relatively popular choice for teaching programming for years.[137][138]
https://en.wikipedia.org/wiki/Functional_programming
This is a list of misnamed theorems in mathematics. It includes theorems (and lemmas, corollaries, conjectures, laws, and perhaps even the odd object) that are well known in mathematics, but which are not named for the originator. That is, the items on this list illustrate Stigler's law of eponymy (which is not, of course, due to Stephen Stigler, who credits Robert K. Merton).
https://en.wikipedia.org/wiki/List_of_misnamed_theorems
In topology and related areas of mathematics, the quotient space of a topological space under a given equivalence relation is a new topological space constructed by endowing the quotient set of the original topological space with the quotient topology, that is, with the finest topology that makes continuous the canonical projection map (the function that maps points to their equivalence classes). In other words, a subset of a quotient space is open if and only if its preimage under the canonical projection map is open in the original topological space. Intuitively speaking, the points of each equivalence class are identified or "glued together" to form a new topological space. For example, identifying the points of a sphere that belong to the same diameter produces the projective plane as a quotient space.

Let $X$ be a topological space, and let $\sim$ be an equivalence relation on $X$. The quotient set $Y = X/{\sim}$ is the set of equivalence classes of elements of $X$. The equivalence class of $x \in X$ is denoted $[x]$.

The construction of $Y$ defines a canonical surjection $q : X \ni x \mapsto [x] \in Y$. As discussed below, $q$ is a quotient mapping, commonly called the canonical quotient map, or canonical projection map, associated to $X/{\sim}$.

The quotient space under $\sim$ is the set $Y$ equipped with the quotient topology, whose open sets are those subsets $U \subseteq Y$ whose preimage $q^{-1}(U)$ is open. In other words, $U$ is open in the quotient topology on $X/{\sim}$ if and only if $\{x \in X : [x] \in U\}$ is open in $X$. Similarly, a subset $S \subseteq Y$ is closed if and only if $\{x \in X : [x] \in S\}$ is closed in $X$.

The quotient topology is the final topology on the quotient set, with respect to the map $x \mapsto [x]$.

A map $f : X \to Y$ is a quotient map (sometimes called an identification map[1]) if it is surjective and $Y$ is equipped with the final topology induced by $f$. The latter condition admits two more-elementary formulations: a subset $V \subseteq Y$ is open (closed) if and only if $f^{-1}(V)$ is open (resp. closed). Every quotient map is continuous but not every continuous map is a quotient map.
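As an illustration of the definition, here is a small, self-contained Python sketch (toy data, not from any library) that computes the quotient topology of a four-point space when two of its points are glued together; a set of classes is declared open exactly when its preimage under the canonical map is open:

```python
from itertools import combinations

# X = {0,1,2,3} with a small (hypothetical) topology; glue 1 ~ 2.
X = {0, 1, 2, 3}
topology = {frozenset(), frozenset({1, 2}), frozenset({0, 1, 2}),
            frozenset({1, 2, 3}), frozenset(X)}

# Equivalence classes of ~ : [0], [1,2], [3]; q sends a point to its class.
classes = [frozenset({0}), frozenset({1, 2}), frozenset({3})]

def preimage(U):
    """q^{-1}(U): the union of the classes that belong to U."""
    return frozenset().union(*U)

# A set U of classes is open in the quotient iff q^{-1}(U) is open in X.
quotient_topology = [set(U)
                     for r in range(len(classes) + 1)
                     for U in combinations(classes, r)
                     if preimage(U) in topology]
print(quotient_topology)  # {}, {[1,2]}, {[0],[1,2]}, {[1,2],[3]}, and all of Y
```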
Saturated sets

A subset $S$ of $X$ is called saturated (with respect to $f$) if it is of the form $S = f^{-1}(T)$ for some set $T$, which is true if and only if $f^{-1}(f(S)) = S$. The assignment $T \mapsto f^{-1}(T)$ establishes a one-to-one correspondence (whose inverse is $S \mapsto f(S)$) between subsets $T$ of $Y = f(X)$ and saturated subsets of $X$. With this terminology, a surjection $f : X \to Y$ is a quotient map if and only if for every saturated subset $S$ of $X$, $S$ is open in $X$ if and only if $f(S)$ is open in $Y$. In particular, open subsets of $X$ that are not saturated have no impact on whether the function $f$ is a quotient map (or, indeed, continuous: a function $f : X \to Y$ is continuous if and only if, for every saturated $S \subseteq X$ such that $f(S)$ is open in $f(X)$, the set $S$ is open in $X$).

Indeed, if $\tau$ is a topology on $X$ and $f : X \to Y$ is any map, then the set $\tau_f$ of all $U \in \tau$ that are saturated subsets of $X$ forms a topology on $X$. If $Y$ is also a topological space then $f : (X, \tau) \to Y$ is a quotient map (respectively, continuous) if and only if the same is true of $f : (X, \tau_f) \to Y$.

Quotient space of fibers characterization

Given an equivalence relation $\sim$ on $X$, denote the equivalence class of a point $x \in X$ by $[x] := \{z \in X : z \sim x\}$ and let $X/{\sim} := \{[x] : x \in X\}$ denote the set of equivalence classes. The map $q : X \to X/{\sim}$ that sends points to their equivalence classes (that is, defined by $q(x) := [x]$ for every $x \in X$) is called the canonical map. It is a surjective map, and for all $a, b \in X$, $a \sim b$ if and only if $q(a) = q(b)$; consequently, $[x] = q^{-1}(q(x))$ for all $x \in X$. In particular, this shows that the set of equivalence classes $X/{\sim}$ is exactly the set of fibers of the canonical map $q$. If $X$ is a topological space then giving $X/{\sim}$ the quotient topology induced by $q$ will make it into a quotient space and make $q : X \to X/{\sim}$ into a quotient map.
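The saturation condition $f^{-1}(f(S)) = S$ is directly checkable on finite examples; a minimal Python sketch with a hypothetical map and sets:

```python
def is_saturated(f, domain, S):
    """Check the saturation condition f^{-1}(f(S)) == S for a dict-based map."""
    image = {f[x] for x in S}
    return {x for x in domain if f[x] in image} == set(S)

f = {0: "a", 1: "a", 2: "b"}       # hypothetical surjection onto {"a", "b"}
print(is_saturated(f, f, {0, 1}))  # True:  {0,1} = f^{-1}({"a"})
print(is_saturated(f, f, {0}))     # False: f^{-1}({"a"}) also contains 1
```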
Up to a homeomorphism, this construction is representative of all quotient spaces; the precise meaning of this is now explained.

Let $f : X \to Y$ be a surjection between topological spaces (not yet assumed to be continuous or a quotient map) and declare, for all $a, b \in X$, that $a \sim b$ if and only if $f(a) = f(b)$. Then $\sim$ is an equivalence relation on $X$ such that for every $x \in X$, $[x] = f^{-1}(f(x))$, which implies that $f([x])$ (defined by $f([x]) = \{f(z) : z \in [x]\}$) is a singleton set; denote the unique element in $f([x])$ by $\hat{f}([x])$ (so by definition, $f([x]) = \{\hat{f}([x])\}$). The assignment $[x] \mapsto \hat{f}([x])$ defines a bijection $\hat{f} : X/{\sim} \to Y$ between the fibers of $f$ and points in $Y$. Define the map $q : X \to X/{\sim}$ as above (by $q(x) := [x]$) and give $X/{\sim}$ the quotient topology induced by $q$ (which makes $q$ a quotient map). These maps are related by
$$f = \hat{f} \circ q \quad \text{and} \quad q = \hat{f}^{-1} \circ f.$$
From this and the fact that $q : X \to X/{\sim}$ is a quotient map, it follows that $f : X \to Y$ is continuous if and only if this is true of $\hat{f} : X/{\sim} \to Y$. Furthermore, $f : X \to Y$ is a quotient map if and only if $\hat{f} : X/{\sim} \to Y$ is a homeomorphism (or equivalently, if and only if both $\hat{f}$ and its inverse are continuous).

A hereditarily quotient map is a surjective map $f : X \to Y$ with the property that for every subset $T \subseteq Y$, the restriction $f\big|_{f^{-1}(T)} : f^{-1}(T) \to T$ is also a quotient map. There exist quotient maps that are not hereditarily quotient.

Quotient maps $q : X \to Y$ are characterized among surjective maps by the following property: if $Z$ is any topological space and $f : Y \to Z$ is any function, then $f$ is continuous if and only if $f \circ q$ is continuous.

The quotient space $X/{\sim}$ together with the quotient map $q : X \to X/{\sim}$ is characterized by the following universal property: if $g : X \to Z$ is a continuous map such that $a \sim b$ implies $g(a) = g(b)$ for all $a, b \in X$, then there exists a unique continuous map $f : X/{\sim} \to Z$ such that $g = f \circ q$; in other words, the corresponding diagram commutes. One says that $g$ descends to the quotient to express this, that is, that it factorizes through the quotient space. The continuous maps defined on $X/{\sim}$ are, therefore, precisely those maps which arise from continuous maps defined on $X$ that respect the equivalence relation (in the sense that they send equivalent elements to the same image). This criterion is copiously used when studying quotient spaces.

Given a continuous surjection $q : X \to Y$ it is useful to have criteria by which one can determine if $q$ is a quotient map.
Two sufficient criteria are that $q$ be open or closed. Note that these conditions are only sufficient, not necessary: it is easy to construct examples of quotient maps that are neither open nor closed. For topological groups, the quotient map is open.
https://en.wikipedia.org/wiki/Quotient_space_(topology)
In computer networking, linear network coding is a technique in which intermediate nodes transmit data from source nodes to sink nodes by means of linear combinations. Linear network coding may be used to improve a network's throughput, efficiency, and scalability, as well as to reduce attacks and eavesdropping. The nodes of a network take several packets and combine them for transmission. This process may be used to attain the maximum possible information flow in a network.

It has been proven that, theoretically, linear coding is enough to achieve the upper bound in multicast problems with one source.[1] However, linear coding is not sufficient in general, even for more general versions of linearity such as convolutional coding and filter-bank coding.[2] Finding optimal coding solutions for general network problems with arbitrary demands is a hard problem, which can be NP-hard[3][4] and even undecidable.[5][6]

In a linear network coding problem, a group of nodes $P$ are involved in moving the data from $S$ source nodes to $K$ sink nodes. Each node generates new packets which are linear combinations of past received packets, multiplying them by coefficients chosen from a finite field, typically of size $GF(2^s)$. More formally, each node $p_k$ with indegree $InDeg(p_k) = S$ generates a message $X_k$ from the linear combination of received messages $\{M_i\}_{i=1}^{S}$ by the formula
$$X_k = \sum_{i=1}^{S} g_k^i \cdot M_i,$$
where the values $g_k^i$ are the coefficients selected from $GF(2^s)$. Since operations are computed in a finite field, the generated message is of the same length as the original messages. Each node forwards the computed value $X_k$ along with the coefficients $g_k^i$ used at the $k$th level.

Sink nodes receive these network-coded messages and collect them in a matrix. The original messages can be recovered by performing Gaussian elimination on the matrix.[7] In reduced row echelon form, decoded packets correspond to the rows of the form $e_i = [0 \dots 0\,1\,0 \dots 0]$.

A network is represented by a directed graph $\mathcal{G} = (V, E, C)$, where $V$ is the set of nodes or vertices, $E$ is the set of directed links (or edges), and $C$ gives the capacity of each link of $E$. Let $T(s, t)$ be the maximum possible throughput from node $s$ to node $t$. By the max-flow min-cut theorem, $T(s, t)$ is upper bounded by the minimum capacity over all cuts between these two nodes, where the capacity of a cut is the sum of the capacities of its edges.

Karl Menger proved that there is always a set of edge-disjoint paths achieving the upper bound in a unicast scenario, known as the max-flow min-cut theorem. Later, the Ford–Fulkerson algorithm was proposed to find such paths in polynomial time. Then, Edmonds proved in the paper "Edge-Disjoint Branchings"[which?] that the upper bound in the broadcast scenario is also achievable, and proposed a polynomial time algorithm. However, the situation in the multicast scenario is more complicated, and in fact, such an upper bound cannot be reached using traditional routing ideas.
Ahlswede et al. proved that this upper bound can be achieved if additional computing tasks (combining incoming packets into one or several outgoing packets) can be done in the intermediate nodes.[8]

The butterfly network[8] is often used to illustrate how linear network coding can outperform routing. Two source nodes (at the top of the picture) have information A and B that must be transmitted to the two destination nodes (at the bottom), each of which wants to know both A and B. Each edge can carry only a single value (we can think of an edge transmitting a bit in each time slot).

If only routing were allowed, then the central link would be able to carry A or B, but not both. Supposing we send A through the center; then the left destination would receive A twice and not know B at all. Sending B poses a similar problem for the right destination. We say that routing is insufficient because no routing scheme can transmit both A and B to both destinations simultaneously. Meanwhile, it takes four time slots in total for both destination nodes to know A and B.

Using a simple code, as shown, A and B can be transmitted to both destinations simultaneously by sending the sum of the symbols through the two relay nodes – encoding A and B using the formula "A+B". The left destination receives A and A + B, and can calculate B by subtracting the two values. Similarly, the right destination will receive B and A + B, and will also be able to determine both A and B. Therefore, with network coding, it takes only three time slots, which improves the throughput.

Random linear network coding[9] (RLNC) is a simple yet powerful encoding scheme, which in broadcast transmission schemes allows close to optimal throughput using a decentralized algorithm. Nodes transmit random linear combinations of the packets they receive, with coefficients chosen randomly with a uniform distribution from a Galois field. If the field size is sufficiently large, the probability that the receiver(s) will obtain linearly independent combinations (and therefore obtain innovative information) approaches 1. It should however be noted that, although random linear network coding has excellent throughput performance, if a receiver obtains an insufficient number of packets, it is extremely unlikely that they can recover any of the original packets. This can be addressed by sending additional random linear combinations until the receiver obtains the appropriate number of packets.

There are three key parameters in RLNC. The first is the generation size. In RLNC, the original data transmitted over the network is divided into packets. The source and intermediate nodes in the network can combine and recombine the set of original and coded packets. The original $M$ packets form a block, usually called a generation. The number of original packets combined and recombined together is the generation size.

The second parameter is the packet size. Usually, the size of the original packets is fixed. In the case of unequally-sized packets, these can be zero-padded if they are shorter or split into multiple packets if they are longer. In practice, the packet size can be the size of the maximum transmission unit (MTU) of the underlying network protocol. For example, it can be around 1500 bytes in an Ethernet frame.

The third key parameter is the Galois field used. In practice, the most commonly used Galois fields are binary extension fields.
The most commonly used sizes for the Galois fields are the binary field $GF(2)$ and the so-called binary-8 field ($GF(2^8)$). In the binary field, each element is one bit long, while in binary-8 it is one byte long. Since the packet size is usually larger than the field size, each packet is seen as a set of elements from the Galois field (usually referred to as symbols) appended together. The packets have a fixed number of symbols (Galois field elements), and since all the operations are performed over Galois fields, the size of the packets does not change with subsequent linear combinations.

The sources and the intermediate nodes can combine any subset of the original and previously coded packets using linear operations. To form a coded packet in RLNC, the original and previously coded packets are multiplied by randomly chosen coefficients and added together. Since each packet is just an appended set of Galois field elements, the operations of multiplication and addition are performed symbol-wise over each of the individual symbols of the packets, as shown in the picture from the example.

To preserve the statelessness of the code, the coding coefficients used to generate the coded packets are appended to the packets transmitted over the network. Therefore, each node in the network can see what coefficients were used to generate each coded packet. One novelty of linear network coding over traditional block codes is that it allows the recombination of previously coded packets into new and valid coded packets. This process is usually called recoding. After a recoding operation, the size of the appended coding coefficients does not change. Since all the operations are linear, the state of the recoded packet can be preserved by applying the same operations of addition and multiplication to the payload and the appended coding coefficients. The following example illustrates this process.

Any destination node must collect enough linearly independent coded packets to be able to reconstruct the original data. Each coded packet can be understood as a linear equation in which the coefficients are known, since they are appended to the packet. In these equations, each of the original $M$ packets is an unknown. To solve the linear system of equations, the destination needs at least $M$ linearly independent equations (packets).

In the figure, we can see an example of two packets linearly combined into a new coded packet. In the example, we have two packets, namely packet $f$ and packet $e$. The generation size of our example is two; we know this because each packet has two coding coefficients ($C_{ij}$) appended. The appended coefficients can take any value from the Galois field. However, an original, uncoded data packet would have the coding coefficients $[0, 1]$ or $[1, 0]$ appended, meaning that it is constructed by a linear combination of zero times one of the packets plus one times the other packet. Any coded packet would have other coefficients appended. In our example, packet $f$ has the coefficients $[C_{11}, C_{12}]$ appended. Since network coding can be applied at any layer of the communication protocol, these packets can have a header from the other layers, which is ignored in the network coding operations.
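To make the encoding and decoding concrete, below is a minimal, self-contained Python sketch (toy sizes, GF(2) arithmetic only, not a production codec): coded packets are random GF(2) linear combinations of the originals, and the sink recovers the originals by Gaussian elimination. Over GF(2), multiplication by a coefficient is AND and addition is XOR:

```python
import random

M = 4                                        # generation size
original = [0b1010, 0b0111, 0b1100, 0b0011]  # hypothetical 4-bit payloads

def encode():
    """One coded packet: a random GF(2) linear combination of the originals."""
    coeffs = [random.randint(0, 1) for _ in range(M)]
    payload = 0
    for c, p in zip(coeffs, original):
        if c:
            payload ^= p                     # addition in GF(2) is XOR
    return coeffs, payload

def decode(packets):
    """Gauss-Jordan elimination over GF(2) on [coeffs | payload] rows.
    Returns the original payloads, or None if the rank is still below M."""
    rows = [(c[:], p) for c, p in packets]
    for col in range(M):
        pivot = next((i for i in range(col, len(rows)) if rows[i][0][col]), None)
        if pivot is None:
            return None                      # not enough innovative packets yet
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for i in range(len(rows)):
            if i != col and rows[i][0][col]:
                rows[i] = ([a ^ b for a, b in zip(rows[i][0], rows[col][0])],
                           rows[i][1] ^ rows[col][1])
    return [p for _, p in rows[:M]]

received, recovered = [], None
while recovered is None:                     # collect coded packets until decodable
    received.append(encode())
    recovered = decode(received)
print(recovered == original, "after", len(received), "packets")
```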
Now, let's assume that the network node wants to produce a new coded packet combining packet $f$ and packet $e$. In RLNC, it will randomly choose two coding coefficients, $d_1$ and $d_2$ in the example. The node will multiply each symbol of packet $f$ by $d_1$, and each symbol of packet $e$ by $d_2$. Then, it will add the results symbol-wise to produce the new coded payload. It will perform the same operations of multiplication and addition on the coding coefficients of the coded packets.

Linear network coding is still a relatively new subject, though the topic has been researched extensively over the last twenty years. Nevertheless, some misconceptions persist that are no longer valid:

Decoding computational complexity: Network coding decoders have been improved over the years. Nowadays, the algorithms are highly efficient and parallelizable. In 2016, with Intel Core i5 processors with SIMD instructions enabled, the decoding goodput of network coding was 750 MB/s for a generation size of 16 packets and 250 MB/s for a generation size of 64 packets.[10] Furthermore, today's algorithms can be vastly parallelized, increasing the encoding and decoding goodput even further.[11]

Transmission overhead: It is usually thought that the transmission overhead of network coding is high due to the need to append the coding coefficients to each coded packet. In reality, this overhead is negligible in most applications. The overhead due to coding coefficients can be computed as follows. Each packet has $M$ coding coefficients appended, and the size of each coefficient is the number of bits needed to represent one element of the Galois field. In practice, most network coding applications use a generation size of no more than 32 packets per generation and Galois fields of 256 elements (binary-8), so each coefficient occupies one byte. With these numbers, each packet needs $M \cdot \log_2(q) = 32 \times 8$ bits, i.e. 32 bytes, of appended overhead. If each packet is 1500 bytes long (i.e. the Ethernet MTU), then 32 bytes represent an overhead of only 2%.

Overhead due to linear dependencies: Since the coding coefficients are chosen randomly in RLNC, there is a chance that some transmitted coded packets are not beneficial to the destination because they are formed using a linearly dependent combination of packets. However, this overhead is negligible in most applications. The linear dependencies depend on the size of the Galois field and are practically independent of the generation size used. We can illustrate this with the following example. Let us assume we are using a Galois field of $q$ elements and a generation size of $M$ packets. If the destination has not received any coded packet, we say it has $M$ degrees of freedom, and then almost any coded packet will be useful and innovative. In fact, only the zero packet (only zeroes in the coding coefficients) will be non-innovative. The probability of generating the zero packet is equal to the probability of each of the $M$ coding coefficients being equal to the zero element of the Galois field; i.e., the probability of a non-innovative packet is $\frac{1}{q^M}$. With each successive innovative transmission, it can be shown that the exponent of the probability of a non-innovative packet is reduced by one.
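A minimal sketch of the recoding step just described, using GF(2) so that multiplication by a coefficient is AND and addition is XOR (toy values; real implementations usually work over GF(2^8)):

```python
# Coded packets carry (coefficient vector, payload); generation size 2.
f = ([1, 0], 0b1010)     # hypothetical coded packet f
e = ([0, 1], 0b0111)     # hypothetical coded packet e

d1, d2 = 1, 1            # randomly chosen recoding coefficients in GF(2)

def recode(d, pkt_a, pkt_b):
    """d[0]*a + d[1]*b over GF(2), applied to coefficients and payload alike."""
    (ca, pa), (cb, pb) = pkt_a, pkt_b
    coeffs = [(d[0] & x) ^ (d[1] & y) for x, y in zip(ca, cb)]
    payload = (pa if d[0] else 0) ^ (pb if d[1] else 0)
    return coeffs, payload

recoded = recode((d1, d2), f, e)
print(recoded)           # ([1, 1], 13): i.e. 0b1101, same ops on coeffs and payload
```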
When the destination has received $M-1$ innovative packets (i.e., it needs only one more packet to fully decode the data), the probability of receiving a non-innovative packet is $\frac{1}{q}$. We can use this knowledge to calculate the expected number of linearly dependent packets per generation. In the worst-case scenario, when the Galois field used contains only two elements ($q = 2$), the expected number of linearly dependent packets per generation is about 1.6 extra packets. If our generation size is 32 or 64 packets, this represents an overhead of 5% or 2.5%, respectively. If we use the binary-8 field ($q = 256$), then the expected number of linearly dependent packets per generation is practically zero. Since the last packets of a generation are the major contributors to the overhead due to linear dependencies, there are RLNC-based protocols, such as tunable sparse network coding,[12] that exploit this knowledge. These protocols introduce sparsity (zero elements) in the coding coefficients at the beginning of the transmission to reduce the decoding complexity, and reduce the sparsity at the end of the transmission to limit the overhead due to linear dependencies.

Over the years, multiple researchers and companies have integrated network coding solutions into their applications,[13] and network coding has found uses in many different areas.
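The quoted figure of about 1.6 extra packets for $q = 2$ can be checked empirically. A small Monte Carlo sketch in Python that counts how many random GF(2) coefficient vectors must be drawn before their rank reaches $M$ (the 1.6 figure is taken from the text above; the simulation only illustrates it):

```python
import random

def packets_until_full_rank(M, trials=2000):
    """Average number of random GF(2) vectors drawn until rank M is reached."""
    total = 0
    for _ in range(trials):
        basis, sent = [], 0
        while len(basis) < M:
            v = random.getrandbits(M)        # random coefficient vector as bits
            sent += 1
            for b in basis:                  # reduce v against the current basis
                v = min(v, v ^ b)
            if v:                            # v was linearly independent
                basis.append(v)
                basis.sort(reverse=True)
        total += sent
    return total / trials

M = 32
print(packets_until_full_rank(M) - M)        # ~1.6 extra packets for q = 2
```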
https://en.wikipedia.org/wiki/Network_coding
In statistics, the intraclass correlation, or the intraclass correlation coefficient (ICC),[1] is a descriptive statistic that can be used when quantitative measurements are made on units that are organized into groups. It describes how strongly units in the same group resemble each other. While it is viewed as a type of correlation, unlike most other correlation measures it operates on data structured as groups rather than data structured as paired observations.

The intraclass correlation is commonly used to quantify the degree to which individuals with a fixed degree of relatedness (e.g. full siblings) resemble each other in terms of a quantitative trait (see heritability). Another prominent application is the assessment of consistency or reproducibility of quantitative measurements made by different observers measuring the same quantity.

The earliest work on intraclass correlations focused on the case of paired measurements, and the first intraclass correlation (ICC) statistics to be proposed were modifications of the interclass correlation (Pearson correlation). Consider a data set consisting of $N$ paired data values $(x_{n,1}, x_{n,2})$, for $n = 1, \dots, N$. The intraclass correlation $r$ originally proposed[2] by Ronald Fisher[3] is
$$r = \frac{1}{N s^2} \sum_{n=1}^{N} (x_{n,1} - \bar{x})(x_{n,2} - \bar{x}),$$
where $\bar{x}$ is the pooled mean and $s^2$ the pooled variance,
$$\bar{x} = \frac{1}{2N} \sum_{n=1}^{N} (x_{n,1} + x_{n,2}), \qquad s^2 = \frac{1}{2N} \sum_{n=1}^{N} \left[(x_{n,1} - \bar{x})^2 + (x_{n,2} - \bar{x})^2\right].$$
Later versions of this statistic[3] used the degrees of freedom $2N - 1$ in the denominator for calculating $s^2$ and $N - 1$ in the denominator for calculating $r$, so that $s^2$ becomes unbiased, and $r$ becomes unbiased if $s$ is known.

The key difference between this ICC and the interclass (Pearson) correlation is that the data are pooled to estimate the mean and variance. The reason for this is that in the setting where an intraclass correlation is desired, the pairs are considered to be unordered. For example, if we are studying the resemblance of twins, there is usually no meaningful way to order the values for the two individuals within a twin pair. Like the interclass correlation, the intraclass correlation for paired data will be confined to the interval [−1, +1].

The intraclass correlation is also defined for data sets with groups having more than two values. For groups consisting of three values, it is defined as[3]
$$r = \frac{1}{3N s^2} \sum_{n=1}^{N} \left[(x_{n,1} - \bar{x})(x_{n,2} - \bar{x}) + (x_{n,1} - \bar{x})(x_{n,3} - \bar{x}) + (x_{n,2} - \bar{x})(x_{n,3} - \bar{x})\right],$$
with $\bar{x}$ and $s^2$ now pooled over all $3N$ values. As the number of items per group grows, so does the number of cross-product terms in this expression. The following equivalent form is simpler to calculate:
$$r = \frac{K}{K-1} \cdot \frac{\frac{1}{N} \sum_{n=1}^{N} (\bar{x}_n - \bar{x})^2}{s^2} - \frac{1}{K-1},$$
where $K$ is the number of data values per group, and $\bar{x}_n$ is the sample mean of the $n$th group.[3] This form is usually attributed to Harris.[4] The left term is non-negative; consequently the intraclass correlation must satisfy
$$r \geq \frac{-1}{K-1}.$$
For large $K$, this ICC is nearly equal to
$$\frac{\frac{1}{N} \sum_{n=1}^{N} (\bar{x}_n - \bar{x})^2}{s^2},$$
which can be interpreted as the fraction of the total variance that is due to variation between groups. Ronald Fisher devotes an entire chapter to intraclass correlation in his classic book Statistical Methods for Research Workers.[3]

For data from a population that is completely noise, Fisher's formula produces ICC values that are distributed about 0, i.e. sometimes being negative. This is because Fisher designed the formula to be unbiased, and therefore its estimates are sometimes overestimates and sometimes underestimates. For small or 0 underlying values in the population, the ICC calculated from a sample may be negative.

Beginning with Ronald Fisher, the intraclass correlation has been regarded within the framework of analysis of variance (ANOVA), and more recently in the framework of random effects models. A number of ICC estimators have been proposed.
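A small Python sketch of Fisher's paired-data ICC as written above, with the pooled mean and variance computed over all $2N$ values (toy data, illustrative only):

```python
def fisher_paired_icc(pairs):
    """Fisher's intraclass correlation for N unordered pairs (x1, x2)."""
    n = len(pairs)
    values = [x for pair in pairs for x in pair]
    xbar = sum(values) / (2 * n)                         # pooled mean
    s2 = sum((x - xbar) ** 2 for x in values) / (2 * n)  # pooled variance
    cross = sum((a - xbar) * (b - xbar) for a, b in pairs)
    return cross / (n * s2)

twins = [(10.1, 10.4), (12.0, 11.7), (9.5, 9.9), (11.2, 11.0)]  # hypothetical
print(round(fisher_paired_icc(twins), 3))
```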
Most of the estimators can be defined in terms of the random effects model
$$Y_{ij} = \mu + \alpha_j + \varepsilon_{ij},$$
where $Y_{ij}$ is the $i$th observation in the $j$th group, $\mu$ is an unobserved overall mean, $\alpha_j$ is an unobserved random effect shared by all values in group $j$, and $\varepsilon_{ij}$ is an unobserved noise term.[5] For the model to be identified, the $\alpha_j$ and $\varepsilon_{ij}$ are assumed to have expected value zero and to be uncorrelated with each other. Also, the $\alpha_j$ are assumed to be identically distributed, and the $\varepsilon_{ij}$ are assumed to be identically distributed. The variance of $\alpha_j$ is denoted $\sigma_\alpha^2$ and the variance of $\varepsilon_{ij}$ is denoted $\sigma_\varepsilon^2$. The population ICC in this framework is[6]
$$\frac{\sigma_\alpha^2}{\sigma_\alpha^2 + \sigma_\varepsilon^2}.$$
With this framework, the ICC is the correlation of two observations from the same group.

For a one-way random effects model
$$Y_{ij} = \mu + \alpha_i + \epsilon_{ij},$$
with $\alpha_i \sim N(0, \sigma_\alpha^2)$ and $\epsilon_{ij} \sim N(0, \sigma_\varepsilon^2)$, where the $\alpha_i$ and $\epsilon_{ij}$ are mutually independent, the variance of any observation is $\mathrm{Var}(Y_{ij}) = \sigma_\varepsilon^2 + \sigma_\alpha^2$, and the covariance of two observations from the same group $i$ (for $j \neq k$) is[7]
$$\begin{aligned}\operatorname{Cov}(Y_{ij}, Y_{ik}) &= \operatorname{Cov}(\mu + \alpha_i + \epsilon_{ij},\ \mu + \alpha_i + \epsilon_{ik}) \\ &= \operatorname{Cov}(\alpha_i + \epsilon_{ij},\ \alpha_i + \epsilon_{ik}) \\ &= \operatorname{Cov}(\alpha_i, \alpha_i) + \operatorname{Cov}(\alpha_i, \epsilon_{ik}) + \operatorname{Cov}(\epsilon_{ij}, \alpha_i) + \operatorname{Cov}(\epsilon_{ij}, \epsilon_{ik}) \\ &= \operatorname{Cov}(\alpha_i, \alpha_i) = \operatorname{Var}(\alpha_i) = \sigma_\alpha^2,\end{aligned}$$
where we have used properties of the covariance (the cross terms vanish by independence). Put together, we get
$$\operatorname{Cor}(Y_{ij}, Y_{ik}) = \frac{\operatorname{Cov}(Y_{ij}, Y_{ik})}{\sqrt{\operatorname{Var}(Y_{ij})\operatorname{Var}(Y_{ik})}} = \frac{\sigma_\alpha^2}{\sigma_\varepsilon^2 + \sigma_\alpha^2}.$$

An advantage of this ANOVA framework is that different groups can have different numbers of data values, which is difficult to handle using the earlier ICC statistics. This ICC is always non-negative, allowing it to be interpreted as the proportion of total variance that is "between groups." This ICC can be generalized to allow for covariate effects, in which case the ICC is interpreted as capturing the within-class similarity of the covariate-adjusted data values.[8]

This expression can never be negative (unlike Fisher's original formula), and therefore, in samples from a population which has an ICC of 0, the ICCs in the samples will be higher than the ICC of the population.

A number of different ICC statistics have been proposed, not all of which estimate the same population parameter. There has been considerable debate about which ICC statistics are appropriate for a given use, since they may produce markedly different results for the same data.[9][10]
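A common sample estimator of this population ICC uses the one-way ANOVA mean squares; the sketch below implements the standard ICC(1) formula $(MS_B - MS_W)/(MS_B + (k-1)MS_W)$, assuming equal group sizes $k$ (toy data, illustrative only):

```python
def icc_oneway(groups):
    """ICC(1) from one-way ANOVA mean squares; groups of equal size k."""
    n = len(groups)                      # number of groups
    k = len(groups[0])                   # observations per group
    grand = sum(sum(g) for g in groups) / (n * k)
    means = [sum(g) / k for g in groups]
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)   # between groups
    msw = sum((x - m) ** 2
              for g, m in zip(groups, means) for x in g) / (n * (k - 1))  # within
    return (msb - msw) / (msb + (k - 1) * msw)

ratings = [[9, 10, 8], [5, 6, 4], [7, 8, 8], [2, 3, 2]]  # hypothetical groups
print(round(icc_oneway(ratings), 3))
```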
In terms of its algebraic form, Fisher's original ICC is the ICC that most resembles the Pearson correlation coefficient. One key difference between the two statistics is that in the ICC, the data are centered and scaled using a pooled mean and standard deviation, whereas in the Pearson correlation, each variable is centered and scaled by its own mean and standard deviation. This pooled scaling for the ICC makes sense because all measurements are of the same quantity (albeit on units in different groups). For example, in a paired data set where each "pair" is a single measurement made for each of two units (e.g., weighing each twin in a pair of identical twins) rather than two different measurements for a single unit (e.g., measuring height and weight for each individual), the ICC is a more natural measure of association than Pearson's correlation.

An important property of the Pearson correlation is that it is invariant to application of separate linear transformations to the two variables being compared. Thus, if we are correlating X and Y, where, say, Y = 2X + 1, the Pearson correlation between X and Y is 1, a perfect correlation. This property does not make sense for the ICC, since there is no basis for deciding which transformation is applied to each value in a group. However, if all the data in all groups are subjected to the same linear transformation, the ICC does not change.

The ICC is used to assess the consistency, or conformity, of measurements made by multiple observers measuring the same quantity.[11] For example, if several physicians are asked to score the results of a CT scan for signs of cancer progression, we can ask how consistent the scores are with each other. If the truth is known (for example, if the CT scans were on patients who subsequently underwent exploratory surgery), then the focus would generally be on how well the physicians' scores matched the truth. If the truth is not known, we can only consider the similarity among the scores. An important aspect of this problem is that there is both inter-observer and intra-observer variability. Inter-observer variability refers to systematic differences among the observers; for example, one physician may consistently score patients at a higher risk level than other physicians. Intra-observer variability refers to deviations of a particular observer's score on a particular patient that are not part of a systematic difference.

The ICC is constructed to be applied to exchangeable measurements, that is, grouped data in which there is no meaningful way to order the measurements within a group. In assessing conformity among observers, if the same observers rate each element being studied, then systematic differences among observers are likely to exist, which conflicts with the notion of exchangeability. If the ICC is used in a situation where systematic differences exist, the result is a composite measure of intra-observer and inter-observer variability. One situation where exchangeability might reasonably be presumed to hold would be where a specimen to be scored, say a blood specimen, is divided into multiple aliquots, and the aliquots are measured separately on the same instrument. In this case, exchangeability would hold as long as no effect due to the sequence of running the samples was present.

Since the intraclass correlation coefficient gives a composite of intra-observer and inter-observer variability, its results are sometimes considered difficult to interpret when the observers are not exchangeable.
Alternative measures such as Cohen's kappa statistic, the Fleiss kappa, and the concordance correlation coefficient[12] have been proposed as more suitable measures of agreement among non-exchangeable observers.

ICC is supported in the open source software package R (using the function "icc" with the packages psy or irr, or via the function "ICC" in the package psych). The rptR package[13] provides methods for the estimation of ICC and repeatabilities for Gaussian, binomial and Poisson distributed data in a mixed-model framework. Notably, the package allows estimation of adjusted ICC (i.e. controlling for other variables) and computes confidence intervals based on parametric bootstrapping and significance based on the permutation of residuals. Commercial software also supports ICC, for instance Stata or SPSS.[14]

ICC variants differ along three dimensions: the underlying model (one-way random effects, two-way random effects, or two-way mixed effects), the number of measurements (a single measurement or the mean of several measurements), and whether consistency or absolute agreement is assessed. The consistency ICC cannot be estimated in the one-way random effects model, as there is no way to separate the inter-rater and residual variances.

An overview and re-analysis of the three models for the single-measures ICC, with an alternative recipe for their use, has also been presented by Liljequist et al. (2019).[18]

Cicchetti (1994)[19] gives often-quoted guidelines for interpreting kappa or ICC inter-rater agreement measures; a different guideline is given by Koo and Li (2016).[20]
https://en.wikipedia.org/wiki/Intraclass_correlation
Web GIS, also known as Web-based GIS, refers to Geographic Information Systems (GIS) that employ the World Wide Web (the Web) to facilitate the storage, visualization, analysis, and distribution of spatial information over the Internet.[1][2][3][4][5][6] Web GIS involves using the Web to carry out GIS tasks traditionally done on a desktop computer, as well as enabling the sharing of maps and spatial data. Web GIS is a subset of Internet GIS, which is itself a subset of distributed GIS.[5][6][7][8][9][10] The most common application of Web GIS is web mapping, so much so that the two terms are often used interchangeably, in much the same way as digital mapping and GIS. However, Web GIS and web mapping are distinct concepts, and web mapping does not necessarily require a Web GIS.[5]

The use of the Web has dramatically increased the effectiveness of both accessing and distributing spatial data, two of the most significant challenges of desktop GIS.[1][11][12] Many functions, such as interactivity and dynamic scaling, are made widely available to end users by web services.[13] The scale of the Web can sometimes make finding quality and reliable data a challenge for GIS professionals and end users, with a significant amount of low-quality, poorly organized, or poorly sourced material available for public consumption.[12][13] This can make finding spatial data a time-consuming activity for GIS users.[12]

The history of Web GIS is closely tied to the history of geographic information systems, digital mapping, and the World Wide Web. The Web was first created in 1990, and the first major web mapping program capable of distributed map creation appeared shortly after, in 1993.[8][11][14] This software, named PARC Map Viewer, was unique in that it facilitated dynamic user map generation rather than serving static images.[14][15] It also allowed users to employ GIS without having it locally installed on their machine.[1][14] The US federal government made the TIGER Mapping Service available to the public in 1995, which facilitated desktop and Web GIS by hosting US boundary data.[1][16] In 1996, MapQuest became available to the public, facilitating navigation and trip planning, and quickly became a major utility on the early Web.[1][13]

In 1997, Esri began to focus on their desktop GIS software, which in 2000 became ArcGIS.[17] This led to Esri dominating the GIS industry for the next several years.[11] In 2000, Esri launched the Geography Network, which offered some Web GIS functions.
In 2014, ArcGIS Online replaced this, and offers significant Web GIS functions, including hosting, manipulating, and visualizing data in dynamic applications.[1][2][11]

Web GIS has numerous applications and functions and manages most distributed spatial information.[18] Diverse industries and disciplines, including mathematics, history, business and education, can all leverage Web GIS to integrate geographic approaches to data.[18]

The United States Census Bureau extensively uses Web GIS to distribute its boundary data, such as TIGER files, and demographics to the public.[1][16] The "2020 Census Demographic Data Map Viewer" runs on an Esri web map application and provides demographic information, such as population, race, and housing information, at the state, county, and census tract levels.[19][20]

The literature has identified educational benefits and applications of Web GIS at the elementary, primary, and university levels of education.[18][21] Using story maps and dashboards allows for new ways of displaying spatial data and facilitates student interaction.[18] As Web GIS tools are often user friendly, teachers can create their own visualizations for the classroom, or even have students make their own to teach geographic concepts.[21]

Web GIS has been used extensively in public health to communicate health data to the public and policymakers.[22] During the COVID-19 pandemic, dashboard Web GIS apps were popularized as a template for displaying health data by Johns Hopkins University, whose dashboard was updated until March 10, 2023.[22][23] In the United States, all 50 state governments, the CDC, and others ultimately made use of these tools.[24] These dashboards displayed various information but generally included a choropleth map showing COVID-19 case data.[24]

Web GIS functions can be divided into categories of geospatial web services, including web feature services, web processing services, and web mapping services.[3] Geospatial web services are distinct software packages available on the World Wide Web that can be employed to perform a function with spatial data.[3]

Web feature services allow users to access, edit, and make use of hosted geospatial feature datasets.[3]

Web processing services allow users to perform GIS calculations on spatial data.[3] Web processing services standardize inputs and outputs for spatial data within an Internet GIS and may have standardized algorithms for spatial statistics.

Web mapping involves using distributed tools to create and host both static and dynamic maps.[8][3][1][2] It differs from desktop digital mapping in that the data, the software, or both might not be stored locally and are often distributed across many computers. Web mapping allows for the rapid distribution of spatial visualizations without the need for printing.[25] It also facilitates rapid updating to reflect new datasets and allows for interactive datasets that would be impossible in print media. Web mapping was employed extensively during the COVID-19 pandemic to visualize datasets in close to real time.[26][27][28]

In terms of interoperability, the use of communication standards in distributed GIS is particularly important. General standards for geospatial data have been developed by the Open Geospatial Consortium (OGC). For the exchange of geospatial data over the web, the most important OGC standards are Web Map Service (WMS) and Web Feature Service (WFS). Using OGC-compliant gateways allows for building very flexible distributed GI systems.
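As a concrete illustration of an OGC web mapping request, the sketch below builds a standard WMS 1.3.0 GetMap URL in Python. The endpoint and layer name are placeholders rather than a real service, but the parameter names are those defined by the WMS specification:

```python
from urllib.parse import urlencode

base = "https://example.org/wms"     # hypothetical WMS endpoint
params = {
    "SERVICE": "WMS",
    "VERSION": "1.3.0",
    "REQUEST": "GetMap",
    "LAYERS": "boundaries",          # hypothetical layer name
    "STYLES": "",                    # default styling
    "CRS": "EPSG:4326",              # coordinate reference system
    "BBOX": "49.8,-8.6,60.9,1.8",    # lat/lon box (WMS 1.3.0 axis order)
    "WIDTH": 800,
    "HEIGHT": 600,
    "FORMAT": "image/png",
}
print(f"{base}?{urlencode(params)}") # a URL that would return a rendered map image
```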
Unlike monolithic GI systems, OGC-compliant systems are naturally web-based and do not have strict definitions of servers and clients. For instance, if a user (client) accesses a server, that server itself can act as a client of a number of further servers in order to retrieve the data requested by the user. This concept allows for data retrieval from any number of different sources, provided consistent data standards are used. It also allows data transfer with systems not capable of GIS functionality.

A key function of OGC standards is the integration of different, already existing systems, thus geo-enabling the web. Web services providing different functionality can be used simultaneously to combine data from different sources (mash-ups). Thus, different services on distributed servers can be combined for "service chaining" in order to add value to existing services. Provided that OGC standards are widely adopted by different web services, sharing distributed data across multiple organizations becomes possible.

Some important languages used in OGC-compliant systems are described in the following. XML stands for eXtensible Markup Language and is widely used for representing and interpreting data. The development of a web-based GI system requires several useful XML encodings that can effectively describe two-dimensional graphics such as maps (SVG) and, at the same time, store and transfer simple features (GML). Because GML and SVG are both XML encodings, it is very straightforward to convert between the two using an XML Style Language Transformation (XSLT). This gives an application a means of rendering GML, and in fact is the primary way that it has been accomplished among existing applications today.[30] XML can enable innovative web services in terms of GIS: it allows geographic information to be easily translated into graphics, and in these terms Scalable Vector Graphics (SVG) can produce high-quality dynamic outputs using data retrieved from spatial databases. In the same vein, Google, one of the pioneers in web-based GIS, has developed its own language, which also uses an XML structure: Keyhole Markup Language (KML) is a file format used to display geographic data in an earth browser, such as Google Earth, Google Maps, and Google Maps for mobile.

The Geospatial Semantic Web is a vision to include geospatial information at the core of the Semantic Web to facilitate information retrieval and information integration.[31] This vision requires the definition of geospatial ontologies, semantic gazetteers, and shared technical vocabularies to describe geographic phenomena.[32] The Semantic Geospatial Web is part of geographic information science.[3]

All maps are simplifications of reality and, therefore, can never be perfectly accurate.[33] Inaccuracies include distortions introduced during projection, simplifications, and human error.
While traditionally trained, ethical cartographers try to minimize these errors and document the known sources of error, including where the data originated, Web GIS facilitates the creation of maps by non-traditionally trained cartographers and, more significantly, facilitates the rapid dissemination of their potentially erroneous maps.[16][13][34] While this democratization of GIS has many potential positives, including empowering traditionally disenfranchised groups of people, it also means that a wide audience can see bad maps.[25][28][33][35] Further, malicious actors can quickly spread intentionally misleading spatial information while hiding its source.[33] This has significant implications and contributes to the infodemic surrounding many topics, including the spread of potentially misleading information on the COVID-19 pandemic.[22][24] Even a map made by a skilled cartographer has significant limitations on the Web compared with traditional distribution methods. Among a variety of issues, computer monitors come in a variety of different color settings and sizes.[13][36] This renders ratio, representative-fraction, and verbal scales useless, leaving only the scale bar. It also means a color choice selected by the cartographer might not be what the end user experiences.[13][36] These issues are not limited to cartography, but they are difficult to solve.

Because the Web is used for storage and computation, Web GIS is less secure than local networks.[37][38][39] When working with sensitive data, Web GIS may expose an organization to more risk of having its data breached than if it used dedicated hardware and a virtual private network (VPN) to access that hardware remotely over the internet.[37][38][39] The convenience and relatively low cost of Web GIS often prevent this from being implemented.

As Web GIS is built on the Web, it is subject to link rot.[24] This phenomenon can lead to previously available data being lost when the URL is changed, physical hardware fails, or the content is deleted by the publisher. If the hardware and information accessed within a Web GIS are lost, "a single disk failure could be like the burning of the library at Alexandria."[40] One study found that 23% of COVID-19 dashboards available on government sites in February 2021 were no longer available at the same URLs by April 2023.[24]
https://en.wikipedia.org/wiki/Web_GIS
In algebraic geometry, an étale morphism (French: [etal]) is a morphism of schemes that is formally étale and locally of finite presentation. This is an algebraic analogue of the notion of a local isomorphism in the complex analytic topology. Étale morphisms satisfy the hypotheses of the implicit function theorem, but because open sets in the Zariski topology are so large, they are not necessarily local isomorphisms. Despite this, étale maps retain many of the properties of local analytic isomorphisms, and are useful in defining the algebraic fundamental group and the étale topology.

The word étale is a French adjective, which means "slack", as in "slack tide", or, figuratively, calm, immobile, something left to settle.[1]

Let $\phi : R \to S$ be a ring homomorphism. This makes $S$ an $R$-algebra. Choose a monic polynomial $f$ in $R[x]$ and a polynomial $g$ in $R[x]$ such that the derivative $f'$ of $f$ is a unit in $(R[x]/fR[x])_g$. We say that $\phi$ is standard étale if $f$ and $g$ can be chosen so that $S$ is isomorphic as an $R$-algebra to $(R[x]/fR[x])_g$ and $\phi$ is the canonical map.

Let $f : X \to Y$ be a morphism of schemes. We say that $f$ is étale if it satisfies any one of several equivalent characterizations (for instance, being flat and unramified and locally of finite presentation).

Assume that $Y$ is locally noetherian and $f$ is locally of finite type. For $x$ in $X$, let $y = f(x)$ and let $\hat{\mathcal{O}}_{Y,y} \to \hat{\mathcal{O}}_{X,x}$ be the induced map on completed local rings; étaleness can then be tested on these completed local rings. If in addition all the maps on residue fields $\kappa(y) \to \kappa(x)$ are isomorphisms, or if $\kappa(y)$ is separably closed, then $f$ is étale if and only if for every $x$ in $X$, the induced map on completed local rings is an isomorphism.[7]

Any open immersion is étale because it is locally an isomorphism. Covering spaces form examples of étale morphisms: for example, if $d \geq 1$ is an integer invertible in the ring $R$, then the $d$th power map is a degree $d$ étale morphism. Any ramified covering $\pi : X \to Y$ has an unramified locus which is étale. Morphisms induced by finite separable field extensions are étale; they form arithmetic covering spaces with group of deck transformations given by $\mathrm{Gal}(L/K)$.

Any ring homomorphism of the form $R \to S = R[x_1, \ldots, x_n]_g/(f_1, \ldots, f_n)$, where all the $f_i$ are polynomials and where the Jacobian determinant $\det(\partial f_i/\partial x_j)$ is a unit in $S$, is étale. For example, the morphism $\mathbb{C}[t, t^{-1}] \to \mathbb{C}[x, t, t^{-1}]/(x^n - t)$ is étale and corresponds to a degree $n$ covering space of $\mathbb{G}_m \in Sch/\mathbb{C}$ with the group $\mathbb{Z}/n$ of deck transformations.

Expanding upon the previous example, suppose that we have a morphism $f$ of smooth complex algebraic varieties. Since $f$ is given by equations, we can interpret it as a map of complex manifolds.
Whenever the Jacobian of $f$ is nonzero, $f$ is a local isomorphism of complex manifolds by the implicit function theorem. By the previous example, having non-zero Jacobian is the same as being étale.

Let $f : X \to Y$ be a dominant morphism of finite type with $X, Y$ locally noetherian, irreducible and $Y$ normal. If $f$ is unramified, then it is étale.[9]

For a field $K$, any $K$-algebra $A$ is necessarily flat. Therefore, $A$ is an étale algebra if and only if it is unramified, which is also equivalent to
$$A \otimes_K \bar{K} \cong \bar{K} \times \cdots \times \bar{K},$$
where $\bar{K}$ is the separable closure of the field $K$ and the right hand side is a finite direct sum, all of whose summands are $\bar{K}$. This characterization of étale $K$-algebras is a stepping stone in reinterpreting classical Galois theory (see Grothendieck's Galois theory).

Étale morphisms are the algebraic counterpart of local diffeomorphisms. More precisely, a morphism between smooth varieties is étale at a point if and only if the differential between the corresponding tangent spaces is an isomorphism. This is in turn precisely the condition needed to ensure that a map between manifolds is a local diffeomorphism, i.e. for any point $y \in Y$, there is an open neighborhood $U$ of $x$ such that the restriction of $f$ to $U$ is a diffeomorphism. This conclusion does not hold in algebraic geometry, because the topology is too coarse. For example, consider the projection $f$ of the parabola $y = x^2$ to the $y$-axis. This morphism is étale at every point except the origin (0, 0), because the differential is given by $2x$, which does not vanish at these points. However, there is no (Zariski-)local inverse of $f$, simply because the square root is not an algebraic map, not being given by polynomials. However, there is a remedy for this situation, using the étale topology. The precise statement is as follows: if $f : X \to Y$ is étale and finite, then for any point $y$ lying in $Y$, there is an étale morphism $V \to Y$ containing $y$ in its image ($V$ can be thought of as an étale open neighborhood of $y$), such that when we base change $f$ to $V$, then $X \times_Y V \to V$ (the first member would be the pre-image of $V$ by $f$ if $V$ were a Zariski open neighborhood) is a finite disjoint union of open subsets isomorphic to $V$. In other words, étale-locally in $Y$, the morphism $f$ is a topological finite cover.

For a smooth morphism $f : X \to Y$ of relative dimension $n$, étale-locally in $X$ and in $Y$, $f$ is an open immersion into an affine space $\mathbb{A}^n_Y$. This is the étale analogue of the structure theorem on submersions.
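To connect the Jacobian criterion with the parabola example above, here is a short worked computation (a standard check, following the definitions given earlier). The projection of the parabola to the $y$-axis corresponds to the ring map
$$\phi : \mathbb{C}[y] \to S = \mathbb{C}[y][x]/(x^2 - y), \qquad y \mapsto x^2 .$$
Writing $f(x) = x^2 - y$, the Jacobian criterion asks whether $f'(x) = 2x$ is a unit. It is not a unit in $S$ itself, so $\phi$ is not étale everywhere; but after inverting $x$ (geometrically, deleting the origin) it becomes standard étale,
$$\mathbb{C}[y] \to \bigl(\mathbb{C}[y][x]/(x^2 - y)\bigr)_x ,$$
which is the algebraic counterpart of the statement that the projection is étale away from $(0, 0)$.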
https://en.wikipedia.org/wiki/%C3%89tale_morphism
In mathematics, a homogeneous distribution is a distribution $S$ on Euclidean space $\mathbb{R}^n$ or $\mathbb{R}^n \setminus \{0\}$ that is homogeneous in the sense that, roughly speaking, $S(tx) = t^m S(x)$ for all $t > 0$.

More precisely, let $\mu_t : x \mapsto x/t$ be the scalar division operator on $\mathbb{R}^n$. A distribution $S$ on $\mathbb{R}^n$ or $\mathbb{R}^n \setminus \{0\}$ is homogeneous of degree $m$ provided that
$t^{-n}\,S[\varphi \circ \mu_t] = t^m\,S[\varphi]$
for all positive real $t$ and all test functions $\varphi$. The additional factor of $t^{-n}$ is needed to reproduce the usual notion of homogeneity for locally integrable functions, and comes about from the Jacobian change of variables. The number $m$ can be real or complex.

It can be a non-trivial problem to extend a given homogeneous distribution from $\mathbb{R}^n \setminus \{0\}$ to a distribution on $\mathbb{R}^n$, although this is necessary for many of the techniques of Fourier analysis, in particular the Fourier transform, to be brought to bear. Such an extension exists in most cases, however, although it may not be unique.

If $S$ is a homogeneous distribution on $\mathbb{R}^n \setminus \{0\}$ of degree $\alpha$, then the weak first partial derivative of $S$ has degree $\alpha - 1$. Furthermore, a version of Euler's homogeneous function theorem holds: a distribution $S$ is homogeneous of degree $\alpha$ if and only if
$\sum_{i=1}^n x_i \partial_i S = \alpha S.$

A complete classification of homogeneous distributions in one dimension is possible. The homogeneous distributions on $\mathbb{R} \setminus \{0\}$ are given by various power functions. In addition to the power functions, homogeneous distributions on $\mathbb{R}$ include the Dirac delta function and its derivatives. The Dirac delta function is homogeneous of degree $-1$; intuitively, this can be seen by making the change of variables $y = tx$ in the formal "integral" against a test function. Moreover, the $k$-th weak derivative of the delta function $\delta^{(k)}$ is homogeneous of degree $-k-1$. These distributions all have support consisting only of the origin: when localized over $\mathbb{R} \setminus \{0\}$, these distributions are all identically zero.

In one dimension, the function $x_+^\alpha$, equal to $x^\alpha$ for $x > 0$ and to $0$ for $x \le 0$, is locally integrable on $\mathbb{R} \setminus \{0\}$, and thus defines a distribution. The distribution is homogeneous of degree $\alpha$. Similarly $x_-^\alpha = (-x)_+^\alpha$ and $|x|^\alpha = x_+^\alpha + x_-^\alpha$ are homogeneous distributions of degree $\alpha$.

However, each of these distributions is only locally integrable on all of $\mathbb{R}$ provided $\operatorname{Re}(\alpha) > -1$. But although the function $x_+^\alpha$ naively defined by the above formula fails to be locally integrable for $\operatorname{Re}\alpha \le -1$, the mapping $\alpha \mapsto x_+^\alpha$ is a holomorphic function from the right half-plane to the topological vector space of tempered distributions. It admits a unique meromorphic extension with simple poles at each negative integer $\alpha = -1, -2, \ldots$. The resulting extension is homogeneous of degree $\alpha$, provided $\alpha$ is not a negative integer: on the one hand the homogeneity relation holds and is holomorphic in $\alpha$ for $\operatorname{Re}\alpha > 0$; on the other hand, both sides extend meromorphically in $\alpha$, and so remain equal throughout the domain of definition. Throughout the domain of definition, $x_+^\alpha$ also satisfies the following properties:

There are several distinct ways to extend the definition of power functions to homogeneous distributions on $\mathbb{R}$ at the negative integers. The poles in $x_+^\alpha$ at the negative integers can be removed by renormalizing. Put
$\chi_+^\alpha = \frac{x_+^\alpha}{\Gamma(\alpha+1)}.$
This is an entire function of $\alpha$.
At the negative integers,
$\chi_+^{-k} = \delta^{(k-1)}.$
The distributions $\chi_+^\alpha$ have, among other properties,
$\frac{d}{dx}\chi_+^{\alpha+1} = \chi_+^\alpha.$

A second approach is to define the distribution $\underline{x}^{-k}$, for $k = 1, 2, \ldots$, by
$\underline{x}^{-k} = \frac{(-1)^{k-1}}{(k-1)!}\,\frac{d^k}{dx^k}\log|x|.$
These clearly retain the original properties of power functions. These distributions are also characterized by their action on test functions, and so generalize the Cauchy principal value distribution of $1/x$ that arises in the Hilbert transform.

Another homogeneous distribution is given by the distributional limit
$(x+i0)^\alpha = \lim_{\varepsilon \downarrow 0}\,(x+i\varepsilon)^\alpha.$
That is, acting on test functions,
$(x+i0)^\alpha[\varphi] = \lim_{\varepsilon \downarrow 0} \int (x+i\varepsilon)^\alpha\,\varphi(x)\,dx.$
The branch of the logarithm is chosen to be single-valued in the upper half-plane and to agree with the natural log along the positive real axis. As the limit of entire functions, $(x+i0)^\alpha[\varphi]$ is an entire function of $\alpha$. Similarly, $(x-i0)^\alpha$ is also a well-defined distribution for all $\alpha$.

When $\operatorname{Re}\alpha > 0$,
$(x \pm i0)^\alpha = x_+^\alpha + e^{\pm i\pi\alpha}\,x_-^\alpha,$
which then holds by analytic continuation whenever $\alpha$ is not a negative integer. By the permanence of functional relations, the same identity continues to hold throughout the domain of definition. At the negative integers, the identity
$(x \pm i0)^{-k} = x_+^{-k} + (-1)^k\,x_-^{-k}$
holds (at the level of distributions on $\mathbb{R} \setminus \{0\}$) and the singularities cancel to give a well-defined distribution on $\mathbb{R}$. The average of the two distributions agrees with $\underline{x}^{-k}$:
$\underline{x}^{-k} = \tfrac{1}{2}\left((x+i0)^{-k} + (x-i0)^{-k}\right).$
The difference of the two distributions is a multiple of the delta function; for $k = 1$,
$(x+i0)^{-1} - (x-i0)^{-1} = -2\pi i\,\delta(x),$
which is known as the Plemelj jump relation.

The following classification theorem holds (Gel'fand & Shilov 1966, §3.11). Let $S$ be a distribution homogeneous of degree $\alpha$ on $\mathbb{R} \setminus \{0\}$. Then $S = a\,x_+^\alpha + b\,x_-^\alpha$ for some constants $a$, $b$. Any distribution $S$ on $\mathbb{R}$ homogeneous of degree $\alpha \neq -1, -2, \ldots$ is of this form as well. As a result, every homogeneous distribution of degree $\alpha \neq -1, -2, \ldots$ on $\mathbb{R} \setminus \{0\}$ extends to $\mathbb{R}$. Finally, homogeneous distributions of degree $-k$, a negative integer, on $\mathbb{R}$ are all linear combinations of $\underline{x}^{-k}$ and $\delta^{(k-1)}$.

Homogeneous distributions on the Euclidean space $\mathbb{R}^n \setminus \{0\}$ with the origin deleted are always of the form
$S(x) = |x|^\lambda\,f\!\left(x/|x|\right) \qquad (1)$
where $f$ is a distribution on the unit sphere $S^{n-1}$. The number $\lambda$, which is the degree of the homogeneous distribution $S$, may be real or complex. Any homogeneous distribution of the form (1) on $\mathbb{R}^n \setminus \{0\}$ extends uniquely to a homogeneous distribution on $\mathbb{R}^n$ provided $\operatorname{Re}\lambda > -n$. In fact, an analytic continuation argument similar to the one-dimensional case extends this for all $\lambda \neq -n, -n-1, \ldots$.
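As a quick numerical illustration of the scaling definition (a sanity check only; the degree α = 1/2, the scale t = 2, and the Gaussian test function are arbitrary choices), one can verify that the locally integrable function x₊^α satisfies ⟨f(·/t), φ⟩ = t^(−α) ⟨f, φ⟩, i.e. that it is homogeneous of degree α:

```python
import numpy as np
from scipy.integrate import quad

alpha, t = 0.5, 2.0                        # illustrative degree and scale
phi = lambda x: np.exp(-x**2)              # an arbitrary Schwartz test function
f = lambda x: x**alpha if x > 0 else 0.0   # x_+^alpha, integrable near 0 for Re(alpha) > -1

lhs, _ = quad(lambda x: f(x / t) * phi(x), 0, np.inf)
rhs, _ = quad(lambda x: f(x) * phi(x), 0, np.inf)

# Homogeneity of degree alpha means f(x/t) = t**(-alpha) * f(x).
print(lhs, t**(-alpha) * rhs)              # the two numbers agree to quadrature accuracy
```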
https://en.wikipedia.org/wiki/Homogeneous_distribution
PDF417 is a stacked linear barcode format used in a variety of applications such as transport, identification cards, and inventory management. "PDF" stands for Portable Data File, while "417" signifies that each pattern in the code consists of 4 bars and spaces in a pattern that is 17 units (modules) long. The PDF417 symbology was invented by Dr. Ynjiun P. Wang at Symbol Technologies in 1991.[1] It is defined in ISO 15438.

The PDF417 bar code (also called a symbol) consists of 3 to 90 rows, each of which is like a small linear bar code. Each row has:

All rows are the same width; each row has the same number of codewords.

PDF417 uses a base-929 encoding. Each codeword represents a number from 0 to 928. The codewords are represented by patterns of dark (bar) and light (space) regions. Each of these patterns contains four bars and four spaces (this is where the 4 in the name comes from). The total width is 17 times the width of the narrowest allowed vertical bar (the X dimension); this is where the 17 in the name comes from. Each pattern starts with a bar and ends with a space. The row height must be at least 3 times the minimum width: Y ≥ 3X.[2]: 5.8.2

There are three distinct bar–space patterns used to represent each codeword. These patterns are organized into three groups known as clusters. The clusters are labeled 0, 3, and 6. No bar–space pattern is used in more than one cluster. The rows of the symbol cycle through the three clusters, so row 1 uses patterns from cluster 0, row 2 uses cluster 3, row 3 uses cluster 6, and row 4 again uses cluster 0. The cluster can be determined by the equation[2]: 5.3.1
$K = (b_1 - b_2 + b_3 - b_4) \bmod 9,$
where $K$ is the cluster number and the $b_i$ refer to the width of the $i$-th black bar in the symbol character (in X units). Alternatively, the cluster can be computed from edge-to-edge distances,[2]: 76–78 where $E_i$ is the $i$-th edge-to-next-same-edge distance: odd indices measure from the leading edge of a bar to the leading edge of the next bar, and even indices are for the trailing edges.

One purpose of the three clusters is to determine which row (mod 3) the codeword is in. The clusters allow portions of the symbol to be read using a single scan line that may be skewed from the horizontal.[2]: 5.11.1 For instance, the scan might start on row 6 at the start of the row but end on row 10. At the beginning of the scan, the scanner sees the constant start pattern, and then it sees symbols in cluster 6. When the skewed scan straddles rows 6 and 7, the scanner sees noise. When the scan is on row 7, the scanner sees symbols in cluster 0. Consequently, the scanner knows the direction of the skew. By the time the scanner reaches the right, it is on row 10, so it sees cluster 0 patterns. The scanner will also see a constant stop pattern.

Of the 929 available code words, 900 are used for data, and 29 for special functions, such as shifting between major modes. The three major modes encode different types of data in different ways, and can be mixed as necessary within a single bar code:

When the PDF417 symbol is created, from 2 to 512 error detection and correction codewords are added. PDF417 uses Reed–Solomon error correction. When the symbol is scanned, the maximum number of corrections that can be made is equal to the number of codewords added, but the standard recommends that two codewords be held back to ensure reliability of the corrected information.

PDF417 is a stacked barcode that can be read with a simple linear scan being swept over the symbol.[3] Those linear scans need the left and right columns with the start and stop code words.
Additionally, the scan needs to know which row it is scanning, so each row of the symbol must also encode its row number. Furthermore, the reader's line scan won't cover just one row; it will typically start scanning one row, then cross over to a neighbor, and possibly continue on across successive rows. In order to minimize the effect of these crossings, the PDF417 modules are tall and narrow – the height is typically three times the width. Also, each code word must indicate which row it belongs to so crossovers, when they occur, can be detected. The code words are also designed to be delta-decodable, so some code words are redundant.

Each PDF417 data code word represents about 10 bits of information (log₂(900) ≈ 9.8), but the printed code word (character) is 17 modules wide. Including a height of 3 modules, a PDF417 code word takes 51 square modules to represent 10 bits. That area does not count other overhead such as the start, stop, row, format, and ECC information.

Other 2D codes, such as DataMatrix and QR, are decoded with image sensors instead of uncoordinated linear scans. Those codes still need recognition and alignment patterns, but they do not need to be as prominent. An 8-bit code word will take 8 square modules (ignoring recognition, alignment, format, and ECC information). In practice, a PDF417 symbol takes about four times the area of a DataMatrix or QR code.[4]

In addition to features typical of two-dimensional bar codes, PDF417's capabilities include:

The introduction of the ISO/IEC document states:[2] "Manufacturers of bar code equipment and users of bar code technology require publicly available standard symbology specifications to which they can refer when developing equipment and application standards. It is the intent and understanding of ISO/IEC that the symbology presented in this International Standard is entirely in the public domain and free of all user restrictions, licences and fees."

PDF417 is used in many applications by both commercial and government organizations. PDF417 is one of the formats (along with Data Matrix) that can be used to print postage accepted by the United States Postal Service. PDF417 is also used by the airline industry's Bar Coded Boarding Pass (BCBP) standard as the 2D bar code symbology for paper boarding passes. PDF417 is the standard selected by the Department of Homeland Security as the machine readable zone technology for REAL ID compliant driver licenses and state issued identification cards. PDF417 barcodes are also included on visas and border crossing cards issued by the State of Israel.
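Assuming the bar-width relation quoted above, the cluster of a codeword can be computed directly from its element widths. The sketch below is illustrative only – the example pattern is invented and need not appear in the actual codeword tables of ISO 15438:

```python
def pdf417_cluster(widths):
    """Cluster number of a PDF417 codeword pattern.

    `widths` lists the widths (in X units) of the 8 elements of one
    codeword, alternating bar, space, bar, space, ... starting with a bar.
    """
    if len(widths) != 8 or sum(widths) != 17:
        raise ValueError("a PDF417 codeword has 8 elements spanning 17 modules")
    b1, b2, b3, b4 = widths[0], widths[2], widths[4], widths[6]  # the four bars
    k = (b1 - b2 + b3 - b4) % 9
    if k not in (0, 3, 6):
        raise ValueError("not a valid codeword pattern")
    return k

# Invented pattern: bars of widths 4, 1, 1, 1 and spaces 4, 3, 2, 1.
print(pdf417_cluster([4, 4, 1, 3, 1, 2, 1, 1]))   # prints 3
```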
https://en.wikipedia.org/wiki/PDF417
In graph theory, the Dulmage–Mendelsohn decomposition is a partition of the vertices of a bipartite graph into subsets, with the property that two adjacent vertices belong to the same subset if and only if they are paired with each other in a perfect matching of the graph. It is named after A. L. Dulmage and Nathan Mendelsohn, who published it in 1958.[1] A generalization to any graph is the Edmonds–Gallai decomposition, using the blossom algorithm.

The Dulmage–Mendelsohn decomposition can be constructed as follows[2] (it is attributed to [3], who in turn attribute it to [4]). Let G be a bipartite graph, M a maximum-cardinality matching in G, and V0 the set of vertices of G unmatched by M (the "free vertices"). Then G can be partitioned into three parts:

An illustration is shown on the left. The bold lines are the edges of M. The weak lines are other edges of G. The red dots are the vertices of V0. Note that V0 is contained in E, since it is reachable from V0 by a path of length 0. Based on this decomposition, the edges in G can be partitioned into six parts according to their endpoints: E–U, E–E, O–O, O–U, E–O, U–U. This decomposition has the following properties:[3]

Let G = (X + Y, E) be a bipartite graph, and let D be the set of vertices in G that are not matched in at least one maximum matching of G. Then D is necessarily an independent set. So G can be partitioned into three parts:

Every maximum matching in G consists of matchings in the first and second part that match all neighbors of D, together with a perfect matching of the remaining vertices. If G has a perfect matching, then the third set contains all vertices of G.

The third set of vertices in the coarse decomposition (or all vertices in a graph with a perfect matching) may additionally be partitioned into subsets by the following steps:

To see that this subdivision into subsets characterizes the edges that belong to perfect matchings, suppose that two vertices x and y in G belong to the same subset of the decomposition, but are not already matched by the initial perfect matching. Then there exists a strongly connected component in H containing the edge x,y. This edge must belong to a simple cycle in H (by the definition of strong connectivity), which necessarily corresponds to an alternating cycle in G (a cycle whose edges alternate between matched and unmatched edges). This alternating cycle may be used to modify the initial perfect matching to produce a new matching containing the edge x,y. An edge x,y of the graph G belongs to all perfect matchings of G if and only if x and y are the only members of their set in the decomposition. Such an edge exists if and only if the matching preclusion number of the graph is one.

As another component of the Dulmage–Mendelsohn decomposition, Dulmage and Mendelsohn defined the core of a graph to be the union of its maximum matchings.[5] However, this concept should be distinguished from the core in the sense of graph homomorphisms, and from the k-core formed by the removal of low-degree vertices.

This decomposition has been used to partition meshes in finite element analysis, and to determine specified, underspecified and overspecified equations in systems of nonlinear equations. It was also used for an algorithm for rank-maximal matching. In [6] there is a different decomposition of a bipartite graph, which is asymmetric – it distinguishes between vertices on one side of the graph and vertices on the other side. It can be used to find a maximum-cardinality envy-free matching in an unweighted bipartite graph, as well as a minimum-cost maximum-cardinality matching in a weighted bipartite graph.[6]
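A minimal sketch of the coarse construction described above, using networkx (the graph is an invented example; hopcroft_karp_matching returns a maximum matching as a dict containing both directions of each matched pair):

```python
from collections import deque
import networkx as nx

# Illustrative bipartite graph (not from the article).
G = nx.Graph()
top = ["x1", "x2", "x3", "x4"]
G.add_edges_from([("x1", "y1"), ("x2", "y1"), ("x2", "y2"),
                  ("x3", "y2"), ("x3", "y3"), ("x4", "y3")])

M = nx.bipartite.hopcroft_karp_matching(G, top_nodes=top)

free = set(G) - set(M)          # V0: vertices left unmatched by M
E, O = set(free), set()         # even / odd alternating-path distance from V0
queue = deque(free)
while queue:
    u = queue.popleft()         # u is at even distance: leave along non-matching edges
    for v in G[u]:
        if v in O or v in E:
            continue
        O.add(v)                # v reached along a non-matching edge (odd distance)
        w = M.get(v)            # ...then continue along v's matching edge
        if w is not None and w not in E:
            E.add(w)
            queue.append(w)
U = set(G) - E - O              # vertices not reachable from V0 at all

print("E =", E, "O =", O, "U =", U)
```

The sets E, O, U printed at the end are the three parts of the decomposition, with V0 contained in E as noted above.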
https://en.wikipedia.org/wiki/Dulmage%E2%80%93Mendelsohn_decomposition
A statistical model is a mathematical model that embodies a set of statistical assumptions concerning the generation of sample data (and similar data from a larger population). A statistical model represents, often in considerably idealized form, the data-generating process.[1] When referring specifically to probabilities, the corresponding term is probabilistic model. All statistical hypothesis tests and all statistical estimators are derived via statistical models. More generally, statistical models are part of the foundation of statistical inference.

A statistical model is usually specified as a mathematical relationship between one or more random variables and other non-random variables. As such, a statistical model is "a formal representation of a theory" (Herman Adèr quoting Kenneth Bollen).[2]

Informally, a statistical model can be thought of as a statistical assumption (or set of statistical assumptions) with a certain property: that the assumption allows us to calculate the probability of any event. As an example, consider a pair of ordinary six-sided dice. We will study two different statistical assumptions about the dice.

The first statistical assumption is this: for each of the dice, the probability of each face (1, 2, 3, 4, 5, and 6) coming up is 1/6. From that assumption, we can calculate the probability of both dice coming up 5: 1/6 × 1/6 = 1/36. More generally, we can calculate the probability of any event: e.g. (1 and 2) or (3 and 3) or (5 and 6).

The alternative statistical assumption is this: for each of the dice, the probability of the face 5 coming up is 1/8 (because the dice are weighted). From that assumption, we can calculate the probability of both dice coming up 5: 1/8 × 1/8 = 1/64. We cannot, however, calculate the probability of any other nontrivial event, as the probabilities of the other faces are unknown.

The first statistical assumption constitutes a statistical model: with the assumption alone, we can calculate the probability of any event. The alternative statistical assumption does not constitute a statistical model: with the assumption alone, we cannot calculate the probability of every event.

In the example above, with the first assumption, calculating the probability of an event is easy. With some other examples, though, the calculation can be difficult, or even impractical (e.g. it might require millions of years of computation). For an assumption to constitute a statistical model, such difficulty is acceptable: doing the calculation does not need to be practicable, just theoretically possible.

In mathematical terms, a statistical model is a pair $(S, \mathcal{P})$, where $S$ is the set of possible observations, i.e. the sample space, and $\mathcal{P}$ is a set of probability distributions on $S$.[3] The set $\mathcal{P}$ represents all of the models that are considered possible. This set is typically parameterized: $\mathcal{P} = \{F_\theta : \theta \in \Theta\}$. The set $\Theta$ defines the parameters of the model. If a parameterization is such that distinct parameter values give rise to distinct distributions, i.e. $F_{\theta_1} = F_{\theta_2} \Rightarrow \theta_1 = \theta_2$ (in other words, the mapping is injective), it is said to be identifiable.[3]

In some cases, the model can be more complex. Suppose that we have a population of children, with the ages of the children distributed uniformly in the population.
The height of a child will be stochastically related to the age: e.g. when we know that a child is of age 7, this influences the chance of the child being 1.5 meters tall. We could formalize that relationship in a linear regression model, like this: $\text{height}_i = b_0 + b_1\,\text{age}_i + \varepsilon_i$, where $b_0$ is the intercept, $b_1$ is a parameter that age is multiplied by to obtain a prediction of height, $\varepsilon_i$ is the error term, and $i$ identifies the child. This implies that height is predicted by age, with some error.

An admissible model must be consistent with all the data points. Thus, a straight line ($\text{height}_i = b_0 + b_1\,\text{age}_i$) cannot be admissible for a model of the data – unless it exactly fits all the data points, i.e. all the data points lie perfectly on the line. The error term, $\varepsilon_i$, must be included in the equation, so that the model is consistent with all the data points.

To do statistical inference, we would first need to assume some probability distributions for the $\varepsilon_i$. For instance, we might assume that the $\varepsilon_i$ distributions are i.i.d. Gaussian, with zero mean. In this instance, the model would have 3 parameters: $b_0$, $b_1$, and the variance of the Gaussian distribution.

We can formally specify the model in the form $(S, \mathcal{P})$ as follows. The sample space, $S$, of our model comprises the set of all possible pairs (age, height). Each possible value of $\theta = (b_0, b_1, \sigma^2)$ determines a distribution on $S$; denote that distribution by $F_\theta$. If $\Theta$ is the set of all possible values of $\theta$, then $\mathcal{P} = \{F_\theta : \theta \in \Theta\}$. (The parameterization is identifiable, and this is easy to check.)

In this example, the model is determined by (1) specifying $S$ and (2) making some assumptions relevant to $\mathcal{P}$. There are two assumptions: that height can be approximated by a linear function of age, and that errors in the approximation are distributed as i.i.d. Gaussian. The assumptions are sufficient to specify $\mathcal{P}$ – as they are required to do.
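To make the children's-height example concrete, here is a minimal simulation sketch (the sample size, true parameter values, and noise level are all invented for illustration): it draws ages uniformly, generates heights from the linear model with i.i.d. zero-mean Gaussian errors, and recovers the three parameters b0, b1, and σ² by least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate the data-generating process (all values illustrative).
n = 500
b0_true, b1_true, sigma_true = 0.75, 0.10, 0.05   # metres, metres/year, metres
age = rng.uniform(2, 15, size=n)                  # ages distributed uniformly
height = b0_true + b1_true * age + rng.normal(0, sigma_true, size=n)

# Estimate (b0, b1) by ordinary least squares.
X = np.column_stack([np.ones(n), age])            # design matrix [1, age_i]
(b0_hat, b1_hat), *_ = np.linalg.lstsq(X, height, rcond=None)

# Estimate the error variance from the residuals.
residuals = height - X @ np.array([b0_hat, b1_hat])
sigma2_hat = residuals @ residuals / (n - 2)      # unbiased estimate of sigma^2

print(b0_hat, b1_hat, np.sqrt(sigma2_hat))        # close to 0.75, 0.10, 0.05
```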
A statistical model is a special class of mathematical model. What distinguishes a statistical model from other mathematical models is that a statistical model is non-deterministic. Thus, in a statistical model specified via mathematical equations, some of the variables do not have specific values, but instead have probability distributions; i.e. some of the variables are stochastic. In the above example with children's heights, ε is a stochastic variable; without that stochastic variable, the model would be deterministic.

Statistical models are often used even when the data-generating process being modeled is deterministic. For instance, coin tossing is, in principle, a deterministic process; yet it is commonly modeled as stochastic (via a Bernoulli process). Choosing an appropriate statistical model to represent a given data-generating process is sometimes extremely difficult, and may require knowledge of both the process and relevant statistical analyses. Relatedly, the statistician Sir David Cox has said, "How [the] translation from subject-matter problem to statistical model is done is often the most critical part of an analysis".[4]

There are three purposes for a statistical model, according to Konishi & Kitagawa:[5]

Those three purposes are essentially the same as the three purposes indicated by Friendly & Meyer: prediction, estimation, description.[6]

Suppose that we have a statistical model $(S, \mathcal{P})$ with $\mathcal{P} = \{F_\theta : \theta \in \Theta\}$. In notation, we write $\Theta \subseteq \mathbb{R}^k$, where $k$ is a positive integer ($\mathbb{R}$ denotes the real numbers; other sets can be used, in principle). Here, $k$ is called the dimension of the model. The model is said to be parametric if $\Theta$ has finite dimension.[citation needed] As an example, if we assume that data arise from a univariate Gaussian distribution, then we are assuming that
$\mathcal{P} = \{\,N(\mu, \sigma^2) : \mu \in \mathbb{R},\ \sigma > 0\,\}.$
In this example, the dimension, $k$, equals 2.

As another example, suppose that the data consists of points (x, y) that we assume are distributed according to a straight line with i.i.d. Gaussian residuals (with zero mean): this leads to the same statistical model as was used in the example with children's heights. The dimension of the statistical model is 3: the intercept of the line, the slope of the line, and the variance of the distribution of the residuals. (Note that the set of all possible lines has dimension 2, even though geometrically a line has dimension 1.)

Although formally $\theta \in \Theta$ is a single parameter that has dimension $k$, it is sometimes regarded as comprising $k$ separate parameters. For example, with the univariate Gaussian distribution, $\theta$ is formally a single parameter with dimension 2, but it is often regarded as comprising 2 separate parameters – the mean and the standard deviation.

A statistical model is nonparametric if the parameter set $\Theta$ is infinite dimensional. A statistical model is semiparametric if it has both finite-dimensional and infinite-dimensional parameters. Formally, if $k$ is the dimension of $\Theta$ and $n$ is the number of samples, both semiparametric and nonparametric models have $k \to \infty$ as $n \to \infty$. If $k/n \to 0$ as $n \to \infty$, then the model is semiparametric; otherwise, the model is nonparametric.

Parametric models are by far the most commonly used statistical models. Regarding semiparametric and nonparametric models, Sir David Cox has said, "These typically involve fewer assumptions of structure and distributional form but usually contain strong assumptions about independencies".[7]

Two statistical models are nested if the first model can be transformed into the second model by imposing constraints on the parameters of the first model. As an example, the set of all Gaussian distributions has, nested within it, the set of zero-mean Gaussian distributions: we constrain the mean in the set of all Gaussian distributions to get the zero-mean distributions. As a second example, the quadratic model $y = b_0 + b_1 x + b_2 x^2 + \varepsilon$ has, nested within it, the linear model $y = b_0 + b_1 x + \varepsilon$ – we constrain the parameter $b_2$ to equal 0. In both those examples, the first model has a higher dimension than the second model (for the first example, the zero-mean model has dimension 1). Such is often, but not always, the case.
As an example where they have the same dimension, the set of positive-mean Gaussian distributions is nested within the set of all Gaussian distributions; they both have dimension 2.

Comparing statistical models is fundamental for much of statistical inference. Konishi & Kitagawa (2008, p. 75) state: "The majority of the problems in statistical inference can be considered to be problems related to statistical modeling. They are typically formulated as comparisons of several statistical models." Common criteria for comparing models include the following: R², Bayes factor, Akaike information criterion, and the likelihood-ratio test together with its generalization, the relative likelihood. Another way of comparing two statistical models is through the notion of deficiency introduced by Lucien Le Cam.[8]
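As a rough illustration of such comparisons, the sketch below fits the nested linear and quadratic models from the earlier example and compares them with the Akaike information criterion. The Gaussian-likelihood bookkeeping and AIC = 2k − 2 ln L̂ are standard, but the concrete data are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-2, 2, 200)
y = 1.0 + 0.5 * x + rng.normal(0, 0.3, size=x.size)   # the truth is linear

def gaussian_aic(x, y, degree):
    """AIC of a degree-`degree` polynomial fit with i.i.d. Gaussian errors."""
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    n = y.size
    sigma2 = resid @ resid / n                  # ML estimate of the error variance
    k = degree + 2                              # polynomial coefficients + sigma^2
    log_lik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return 2 * k - 2 * log_lik

print("linear    AIC:", gaussian_aic(x, y, 1))  # lower AIC is preferred
print("quadratic AIC:", gaussian_aic(x, y, 2))  # the extra parameter rarely pays off here
```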
https://en.wikipedia.org/wiki/Statistical_model
In mathematics, particularly in the area of arithmetic, a modular multiplicative inverse of an integer a is an integer x such that the product ax is congruent to 1 with respect to the modulus m.[1] In the standard notation of modular arithmetic this congruence is written as
$ax \equiv 1 \pmod{m},$
which is the shorthand way of writing the statement that m divides (evenly) the quantity ax − 1, or, put another way, the remainder after dividing ax by the integer m is 1.

If a does have an inverse modulo m, then there is an infinite number of solutions of this congruence, which form a congruence class with respect to this modulus. Furthermore, any integer that is congruent to a (i.e., in a's congruence class) has any element of x's congruence class as a modular multiplicative inverse. Using the notation $\overline{w}$ to indicate the congruence class containing w, this can be expressed by saying that the modular multiplicative inverse of the congruence class $\overline{a}$ is the congruence class $\overline{x}$ such that
$\overline{a} \cdot_m \overline{x} = \overline{1},$
where the symbol $\cdot_m$ denotes the multiplication of equivalence classes modulo m.[2] Written in this way, the analogy with the usual concept of a multiplicative inverse in the set of rational or real numbers is clearly represented, replacing the numbers by congruence classes and altering the binary operation appropriately.

As with the analogous operation on the real numbers, a fundamental use of this operation is in solving, when possible, linear congruences of the form
$ax \equiv b \pmod{m}.$
Finding modular multiplicative inverses also has practical applications in the field of cryptography, e.g. public-key cryptography and the RSA algorithm.[3][4][5] A benefit for the computer implementation of these applications is that there exists a very fast algorithm (the extended Euclidean algorithm) that can be used for the calculation of modular multiplicative inverses.

For a given positive integer m, two integers, a and b, are said to be congruent modulo m if m divides their difference. This binary relation is denoted by
$a \equiv b \pmod{m}.$
This is an equivalence relation on the set of integers, $\mathbb{Z}$, and the equivalence classes are called congruence classes modulo m or residue classes modulo m. Let $\overline{a}$ denote the congruence class containing the integer a,[6] then
$\overline{a} = \{\,b \in \mathbb{Z} : a \equiv b \pmod{m}\,\}.$

A linear congruence is a modular congruence of the form
$ax \equiv b \pmod{m}.$
Unlike linear equations over the reals, linear congruences may have zero, one or several solutions. If x is a solution of a linear congruence, then every element in $\overline{x}$ is also a solution, so, when speaking of the number of solutions of a linear congruence, we are referring to the number of different congruence classes that contain solutions. If d is the greatest common divisor of a and m, then the linear congruence ax ≡ b (mod m) has solutions if and only if d divides b. If d divides b, then there are exactly d solutions.[7]

A modular multiplicative inverse of an integer a with respect to the modulus m is a solution of the linear congruence
$ax \equiv 1 \pmod{m}.$
The previous result says that a solution exists if and only if gcd(a, m) = 1, that is, a and m must be relatively prime (i.e. coprime). Furthermore, when this condition holds, there is exactly one solution, i.e., when it exists, a modular multiplicative inverse is unique:[8] if b and b′ are both modular multiplicative inverses of a with respect to the modulus m, then
$ab \equiv ab' \equiv 1 \pmod{m},$
and therefore
$b \equiv b(ab') \equiv (ab)b' \equiv b' \pmod{m}.$
If a ≡ 0 (mod m), then gcd(a, m) = m and a won't even have a modular multiplicative inverse.
When ax ≡ 1 (mod m) has a solution, it is often denoted in this way:
$x \equiv a^{-1} \pmod{m},$
but this can be considered an abuse of notation since it could be misinterpreted as the reciprocal of a (which, contrary to the modular multiplicative inverse, is not an integer except when a is 1 or −1). The notation would be proper if a is interpreted as a token standing for the congruence class $\overline{a}$, as the multiplicative inverse of a congruence class is a congruence class with the multiplication defined in the next section.

The congruence relation, modulo m, partitions the set of integers into m congruence classes. Operations of addition and multiplication can be defined on these m objects in the following way: to either add or multiply two congruence classes, first pick a representative (in any way) from each class, then perform the usual operation for integers on the two representatives, and finally take the congruence class that the result of the integer operation lies in as the result of the operation on the congruence classes. In symbols, with $+_m$ and $\cdot_m$ representing the operations on congruence classes, these definitions are
$\overline{a} +_m \overline{b} = \overline{a+b}$
and
$\overline{a} \cdot_m \overline{b} = \overline{ab}.$
These operations are well-defined, meaning that the end result does not depend on the choices of representatives that were made to obtain the result.

The m congruence classes with these two defined operations form a ring, called the ring of integers modulo m. There are several notations used for these algebraic objects, most often $\mathbb{Z}/m\mathbb{Z}$ or $\mathbb{Z}/m$, but several elementary texts and application areas use a simplified notation $\mathbb{Z}_m$ when confusion with other algebraic objects is unlikely.

The congruence classes of the integers modulo m were traditionally known as residue classes modulo m, reflecting the fact that all the elements of a congruence class have the same remainder (i.e., "residue") upon being divided by m. Any set of m integers selected so that each comes from a different congruence class modulo m is called a complete system of residues modulo m.[9] The division algorithm shows that the set of integers {0, 1, 2, ..., m − 1} forms a complete system of residues modulo m, known as the least residue system modulo m. In working with arithmetic problems it is sometimes more convenient to work with a complete system of residues and use the language of congruences, while at other times the point of view of the congruence classes of the ring $\mathbb{Z}/m\mathbb{Z}$ is more useful.[10]

Not every element of a complete residue system modulo m has a modular multiplicative inverse; for instance, zero never does. After removing the elements of a complete residue system that are not relatively prime to m, what is left is called a reduced residue system, all of whose elements have modular multiplicative inverses. The number of elements in a reduced residue system is $\phi(m)$, where $\phi$ is the Euler totient function, i.e., the number of positive integers less than m that are relatively prime to m.

In a general ring with unity not every element has a multiplicative inverse, and those that do are called units. As the product of two units is a unit, the units of a ring form a group, the group of units of the ring, often denoted by $R^\times$ if R is the name of the ring. The group of units of the ring of integers modulo m is called the multiplicative group of integers modulo m, and it is isomorphic to a reduced residue system.
In particular, it has order (size) $\phi(m)$. In the case that m is a prime, say p, then $\phi(p) = p - 1$ and all the non-zero elements of $\mathbb{Z}/p\mathbb{Z}$ have multiplicative inverses, thus $\mathbb{Z}/p\mathbb{Z}$ is a finite field. In this case, the multiplicative group of integers modulo p forms a cyclic group of order p − 1.

For any integer $n > 1$, it is always the case that $n^2 - n + 1$ is the modular multiplicative inverse of $n + 1$ with respect to the modulus $n^2$, since $(n+1)(n^2-n+1) = n^3 + 1 \equiv 1 \pmod{n^2}$. Examples are $3 \times 3 \equiv 1 \pmod{4}$, $4 \times 7 \equiv 1 \pmod{9}$, $5 \times 13 \equiv 1 \pmod{16}$, and so on.

The following example uses the modulus 10. Two integers are congruent mod 10 if and only if their difference is divisible by 10; for instance, 32 ≡ 2 (mod 10), since 32 − 2 = 30 is divisible by 10. Some of the ten congruence classes with respect to this modulus are
$\overline{0} = \{\ldots, -20, -10, 0, 10, 20, \ldots\}$ and $\overline{5} = \{\ldots, -15, -5, 5, 15, 25, \ldots\}.$

The linear congruence 4x ≡ 5 (mod 10) has no solutions, since the integers that are congruent to 5 (i.e., those in $\overline{5}$) are all odd while 4x is always even. However, the linear congruence 4x ≡ 6 (mod 10) has two solutions, namely x = 4 and x = 9. Here gcd(4, 10) = 2, and 2 does not divide 5 but does divide 6.

Since gcd(3, 10) = 1, the linear congruence 3x ≡ 1 (mod 10) will have solutions, that is, modular multiplicative inverses of 3 modulo 10 will exist. In fact, 7 satisfies this congruence (3 × 7 − 1 = 20, which is divisible by 10). However, other integers also satisfy the congruence, for instance 17 and −3 (i.e., 3(17) − 1 = 50 and 3(−3) − 1 = −10). In particular, every integer in $\overline{7}$ will satisfy the congruence, since these integers have the form 7 + 10r for some integer r, and
$3(7 + 10r) - 1 = 20 + 30r = 10(2 + 3r)$
is clearly divisible by 10. This congruence has only this one congruence class of solutions. The solution in this case could have been obtained by checking all possible cases, but systematic algorithms would be needed for larger moduli, and these will be given in the next section.

The product of congruence classes $\overline{5}$ and $\overline{8}$ can be obtained by selecting an element of $\overline{5}$, say 25, and an element of $\overline{8}$, say −2, and observing that their product (25)(−2) = −50 is in the congruence class $\overline{0}$. Thus, $\overline{5} \cdot_{10} \overline{8} = \overline{0}$. Addition is defined in a similar way. The ten congruence classes together with these operations of addition and multiplication of congruence classes form the ring of integers modulo 10, i.e., $\mathbb{Z}/10\mathbb{Z}$.

A complete residue system modulo 10 can be the set {10, −9, 2, 13, 24, −15, 26, 37, 8, 9}, where each integer is in a different congruence class modulo 10. The unique least residue system modulo 10 is {0, 1, 2, ..., 9}. A reduced residue system modulo 10 could be {1, 3, 7, 9}. The product of any two congruence classes represented by these numbers is again one of these four congruence classes. This implies that these four congruence classes form a group, in this case the cyclic group of order four, having either 3 or 7 as a (multiplicative) generator. The represented congruence classes form the group of units of the ring $\mathbb{Z}/10\mathbb{Z}$.
These congruence classes are precisely the ones which have modular multiplicative inverses.

A modular multiplicative inverse of a modulo m can be found by using the extended Euclidean algorithm. The Euclidean algorithm determines the greatest common divisor (gcd) of two integers, say a and m. If a has a multiplicative inverse modulo m, this gcd must be 1. The last of several equations produced by the algorithm may be solved for this gcd. Then, using a method called "back substitution", an expression connecting the original parameters and this gcd can be obtained. In other words, integers x and y can be found to satisfy Bézout's identity,
$ax + my = \gcd(a, m) = 1.$
Rewritten, this is
$ax - 1 = (-y)m,$
that is,
$ax \equiv 1 \pmod{m},$
so a modular multiplicative inverse of a has been calculated. A more efficient version of the algorithm is the extended Euclidean algorithm, which, by using auxiliary equations, reduces two passes through the algorithm (back substitution can be thought of as passing through the algorithm in reverse) to just one. In big O notation, this algorithm runs in time O(log²(m)), assuming |a| < m, and is considered to be very fast and generally more efficient than its alternative, exponentiation.

As an alternative to the extended Euclidean algorithm, Euler's theorem may be used to compute modular inverses.[11] According to Euler's theorem, if a is coprime to m, that is, gcd(a, m) = 1, then
$a^{\phi(m)} \equiv 1 \pmod{m},$
where $\phi$ is Euler's totient function. This follows from the fact that a belongs to the multiplicative group $(\mathbb{Z}/m\mathbb{Z})^\times$ if and only if a is coprime to m. Therefore, a modular multiplicative inverse can be found directly:
$a^{\phi(m)-1} \equiv a^{-1} \pmod{m}.$
In the special case where m is a prime, $\phi(m) = m - 1$ and a modular inverse is given by
$a^{-1} \equiv a^{m-2} \pmod{m}.$
This method is generally slower than the extended Euclidean algorithm, but is sometimes used when an implementation for modular exponentiation is already available. Some disadvantages of this method include:

One notable advantage of this technique is that there are no conditional branches which depend on the value of a, and thus the value of a, which may be an important secret in public-key cryptography, can be protected from side-channel attacks. For this reason, the standard implementation of Curve25519 uses this technique to compute an inverse.

It is possible to compute the inverse of multiple numbers $a_i$, modulo a common m, with a single invocation of the Euclidean algorithm and three multiplications per additional input.[12] The basic idea is to form the product of all the $a_i$, invert that, then multiply by $a_j$ for all j ≠ i to leave only the desired $a_i^{-1}$. More specifically, the algorithm is as follows (all arithmetic performed modulo m; a sketch is given below). It is possible to perform the multiplications in a tree structure rather than linearly to exploit parallel computing.
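A minimal Python sketch of the three approaches just described (the function names are mine; note that Python 3.8+ exposes the extended-Euclidean inverse natively as pow(a, -1, m)):

```python
def modinv_extended_euclid(a, m):
    """Inverse of a mod m via the extended Euclidean algorithm."""
    old_r, r = a % m, m
    old_x, x = 1, 0
    while r != 0:                        # invariant: old_x * a == old_r (mod m)
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
    if old_r != 1:
        raise ValueError("a and m are not coprime; no inverse exists")
    return old_x % m

def modinv_euler(a, m, phi_m):
    """Inverse via Euler's theorem; needs phi(m) (= m - 1 when m is prime)."""
    return pow(a, phi_m - 1, m)

def modinv_batch(nums, m):
    """Invert many numbers with a single inversion plus a few multiplications."""
    prefix = [1]
    for a in nums:                       # prefix[i] = a_0 * ... * a_{i-1} mod m
        prefix.append(prefix[-1] * a % m)
    inv = modinv_extended_euclid(prefix[-1], m)   # one inversion of the full product
    out = [0] * len(nums)
    for i in reversed(range(len(nums))):
        out[i] = inv * prefix[i] % m     # strip everything except a_i
        inv = inv * nums[i] % m          # now inv inverts a_0 * ... * a_{i-1}
    return out

print(modinv_extended_euclid(3, 10))     # 7, matching the example above
print(modinv_euler(3, 10, 4))            # phi(10) = 4, also gives 7
print(modinv_batch([3, 7, 9], 10))       # [7, 3, 9]
```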
Finding a modular multiplicative inverse has many applications in algorithms that rely on the theory of modular arithmetic. For instance, in cryptography the use of modular arithmetic permits some operations to be carried out more quickly and with fewer storage requirements, while other operations become more difficult.[13] Both of these features can be used to advantage. In particular, in the RSA algorithm, encrypting and decrypting a message is done using a pair of numbers that are multiplicative inverses with respect to a carefully selected modulus. One of these numbers is made public and can be used in a rapid encryption procedure, while the other, used in the decryption procedure, is kept hidden. Determining the hidden number from the public number is considered to be computationally infeasible, and this is what makes the system work to ensure privacy.[14]

As another example in a different context, consider the exact division problem in computer science, where you have a list of odd word-sized numbers, each divisible by k, and you wish to divide them all by k. One solution is as follows: first compute the modular inverse of k with respect to the word-sized modulus (a computation done once), then multiply each number in the list by that inverse, keeping only the low word of each product. On many machines, particularly those without hardware support for division, division is a slower operation than multiplication, so this approach can yield a considerable speedup. The first step is relatively slow but only needs to be done once.

Modular multiplicative inverses are used to obtain a solution of a system of linear congruences that is guaranteed by the Chinese remainder theorem. For example, a system of congruences
$x \equiv a_1 \pmod{5}, \quad x \equiv a_2 \pmod{7}, \quad x \equiv a_3 \pmod{11}$
has common solutions, since 5, 7 and 11 are pairwise coprime. A solution is given by
$x = a_1 \cdot 77 \cdot (77^{-1} \bmod 5) + a_2 \cdot 55 \cdot (55^{-1} \bmod 7) + a_3 \cdot 35 \cdot (35^{-1} \bmod 11),$
where 77 = 7 × 11, 55 = 5 × 11 and 35 = 5 × 7 are the complementary products and each parenthesized factor is the modular multiplicative inverse of that product with respect to the remaining modulus. The solution in its unique reduced form is then obtained by reducing modulo 385, since 385 is the LCM of 5, 7 and 11. Also, the modular multiplicative inverse figures prominently in the definition of the Kloosterman sum.
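A small sketch of this CRT construction; since the right-hand sides of the example congruences are not reproduced above, the residues 4, 4, 6 are invented for illustration (pow(Mi, -1, m) requires Python 3.8+):

```python
from math import prod

def crt(residues, moduli):
    """Solve x = r_i (mod m_i) for pairwise-coprime moduli via modular inverses."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m                    # product of the other moduli
        x += r * Mi * pow(Mi, -1, m)   # pow(Mi, -1, m): inverse of Mi mod m
    return x % M

# Illustrative residues; moduli 5, 7, 11 as in the example above.
x = crt([4, 4, 6], [5, 7, 11])
print(x, x % 5, x % 7, x % 11)         # 39 satisfies all three congruences
```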
https://en.wikipedia.org/wiki/Modular_inverse
ECRYPT (European Network of Excellence in Cryptology) was a 4-year European research initiative launched on 1 February 2004 with the stated objective of promoting the collaboration of European researchers in information security, and especially in cryptology and digital watermarking. ECRYPT listed five core research areas, termed "virtual laboratories": symmetric key algorithms (STVL), public key algorithms (AZTEC), protocols (PROVILAB), secure and efficient implementations (VAMPIRE) and watermarking (WAVILA). In August 2008 the network started another 4-year phase as ECRYPT II.

During the project, algorithms and key lengths were evaluated yearly. The most recent of these documents is dated 30 September 2012.[1] Considering the budget of a large intelligence agency to be about US$300 million for a single ASIC machine, the recommended minimum key size is 84 bits, which would give protection for only a few months. In practice, most commonly used algorithms have key sizes of 128 bits or more, providing sufficient security even if the chosen algorithm is slightly weakened by cryptanalysis.

Different kinds of keys are compared in the document (e.g. RSA keys vs. EC keys). This "translation table" can be used to roughly equate keys of other types of algorithms with symmetric encryption algorithms. In short, 128-bit symmetric keys are said to be equivalent to 3248-bit RSA keys or 256-bit EC keys, and 256-bit symmetric keys are roughly equivalent to 15424-bit RSA keys or 512-bit EC keys. Conversely, 2048-bit RSA keys are said to be equivalent to 103-bit symmetric keys.

Among key sizes, 8 security levels are defined, from the lowest, "Attacks possible in real-time by individuals" (level 1, 32 bits), to "Good for the foreseeable future, also against quantum computers unless Shor's algorithm applies" (level 8, 256 bits). For general long-term protection (30 years), 128-bit keys are recommended (level 7).

Many different primitives and algorithms are evaluated. The primitives are:

Note that the list of algorithms and schemes is non-exhaustive (the document contains more algorithms than are mentioned here).

This document, dated 11 January 2013, provides "an exhaustive overview of every computational assumption that has been used in public key cryptography."[2]

The "Vampire lab" produced over 80 peer-reviewed, jointly authored publications during the four years of the project. This final document looks back on results and discusses newly arising research directions. The goals were to advance attacks and countermeasures, to bridge the gap between cryptographic protocol designers and smart card implementers, and to investigate countermeasures against power analysis attacks (both contact-based and contactless).[3]
https://en.wikipedia.org/wiki/ECRYPT
In information retrieval, tf–idf (also TF*IDF, TFIDF, TF–IDF, or Tf–idf), short for term frequency–inverse document frequency, is a measure of the importance of a word to a document in a collection or corpus, adjusted for the fact that some words appear more frequently in general.[1] Like the bag-of-words model, it models a document as a multiset of words, without word order. It is a refinement over the simple bag-of-words model, by allowing the weight of words to depend on the rest of the corpus.

It was often used as a weighting factor in searches of information retrieval, text mining, and user modeling. A survey conducted in 2015 showed that 83% of text-based recommender systems in digital libraries used tf–idf.[2] Variations of the tf–idf weighting scheme were often used by search engines as a central tool in scoring and ranking a document's relevance given a user query. One of the simplest ranking functions is computed by summing the tf–idf for each query term; many more sophisticated ranking functions are variants of this simple model.

Karen Spärck Jones (1972) conceived a statistical interpretation of term specificity called inverse document frequency (idf), which became a cornerstone of term weighting:[3] the specificity of a term can be quantified as an inverse function of the number of documents in which it occurs.

For example, the df (document frequency) and idf for some words in Shakespeare's 37 plays are as follows:[4]

We see that "Romeo", "Falstaff", and "salad" appear in very few plays, so seeing these words, one could get a good idea as to which play it might be. In contrast, "good" and "sweet" appear in every play and are completely uninformative as to which play it is.

Term frequency, tf(t, d), is the relative frequency of term t within document d:
$\mathrm{tf}(t, d) = \frac{f_{t,d}}{\sum_{t' \in d} f_{t',d}},$
where $f_{t,d}$ is the raw count of a term in a document, i.e., the number of times that term t occurs in document d. Note that the denominator is simply the total number of terms in document d (counting each occurrence of the same term separately). There are various other ways to define term frequency:[5]: 128

The inverse document frequency is a measure of how much information the word provides, i.e., how common or rare it is across all documents. It is the logarithmically scaled inverse fraction of the documents that contain the word (obtained by dividing the total number of documents by the number of documents containing the term, and then taking the logarithm of that quotient):
$\mathrm{idf}(t, D) = \log \frac{N}{|\{d \in D : t \in d\}|},$
with N the total number of documents in the corpus D. Then tf–idf is calculated as
$\mathrm{tfidf}(t, d, D) = \mathrm{tf}(t, d) \cdot \mathrm{idf}(t, D).$

A high weight in tf–idf is reached by a high term frequency (in the given document) and a low document frequency of the term in the whole collection of documents; the weights hence tend to filter out common terms. Since the ratio inside the idf's log function is always greater than or equal to 1, the value of idf (and tf–idf) is greater than or equal to 0. As a term appears in more documents, the ratio inside the logarithm approaches 1, bringing the idf and tf–idf closer to 0.

Idf was introduced as "term specificity" by Karen Spärck Jones in a 1972 paper.
Although it has worked well as a heuristic, its theoretical foundations have been troublesome for at least three decades afterward, with many researchers trying to find information-theoretic justifications for it.[7]

Spärck Jones's own explanation did not propose much theory, aside from a connection to Zipf's law.[7] Attempts have been made to put idf on a probabilistic footing,[8] by estimating the probability that a given document d contains a term t as the relative document frequency,
$P(t \mid D) = \frac{|\{d \in D : t \in d\}|}{N},$
so that we can define idf as
$\mathrm{idf}(t, D) = -\log P(t \mid D) = \log \frac{1}{P(t \mid D)}.$
Namely, the inverse document frequency is the logarithm of the "inverse" relative document frequency. This probabilistic interpretation in turn takes the same form as that of self-information. However, applying such information-theoretic notions to problems in information retrieval leads to problems when trying to define the appropriate event spaces for the required probability distributions: not only documents need to be taken into account, but also queries and terms.[7]

Both term frequency and inverse document frequency can be formulated in terms of information theory; this helps to understand why their product has a meaning in terms of the joint informational content of a document. A characteristic assumption about the distribution $p(d, t)$ is that:

This assumption and its implications, according to Aizawa, "represent the heuristic that tf–idf employs."[9]

The conditional entropy of a "randomly chosen" document in the corpus D, conditional on the fact that it contains a specific term t (and assuming that all documents have equal probability of being chosen), is:

In terms of notation, $\mathcal{D}$ and $\mathcal{T}$ are "random variables" corresponding respectively to drawing a document or a term. The mutual information can be expressed in terms of these conditional entropies. The last step is to expand $p_t$, the unconditional probability of drawing a term, with respect to the (random) choice of a document. This expression shows that summing the tf–idf of all possible terms and documents recovers the mutual information between documents and terms, taking into account all the specificities of their joint distribution.[9] Each tf–idf hence carries the "bit of information" attached to a term–document pair.

Suppose that we have term count tables of a corpus consisting of only two documents, as listed on the right. The calculation of tf–idf for the term "this" is performed as follows. In its raw frequency form, tf is just the frequency of "this" for each document: in each document, the word "this" appears once, but as document 2 has more words, its relative frequency is smaller. An idf is constant per corpus and accounts for the ratio of documents that include the word "this". In this case, we have a corpus of two documents and all of them include the word "this", so idf is log(2/2) = 0. Thus tf–idf is zero for the word "this", which implies that the word is not very informative as it appears in all documents. The word "example" is more interesting – it occurs three times, but only in the second document. Finally, multiplying its tf by its idf gives the tf–idf of "example" in the second document (using the base-10 logarithm).
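Because the term-count tables are not reproduced here, the sketch below recomputes the example on an invented two-document corpus with the same shape – "this" occurs once in each document, while "example" occurs three times and only in the second document (raw-count-over-length tf, base-10 logarithm):

```python
import math
from collections import Counter

# Invented corpus mirroring the shape of the worked example.
doc1 = "this is a sample".split()
doc2 = "this is another example example example".split()
corpus = [doc1, doc2]

def tf(term, doc):
    """Relative frequency of `term` in `doc`."""
    return Counter(doc)[term] / len(doc)

def idf(term, corpus):
    """Log of (number of documents / documents containing `term`), base 10."""
    df = sum(term in doc for doc in corpus)
    return math.log10(len(corpus) / df)

for term in ("this", "example"):
    for i, doc in enumerate(corpus, start=1):
        print(term, f"doc{i}", tf(term, doc) * idf(term, corpus))
# "this" scores 0 in both documents (idf = log10(2/2) = 0);
# "example" scores (3/7) * log10(2) = 0.129 in the second document.
```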
The idea behind tf–idf also applies to entities other than terms. In 1998, the concept of idf was applied to citations.[10] The authors argued that "if a very uncommon citation is shared by two documents, this should be weighted more highly than a citation made by a large number of documents". In addition, tf–idf was applied to "visual words" with the purpose of conducting object matching in videos,[11] and to entire sentences.[12] However, the concept of tf–idf did not prove to be more effective in all cases than a plain tf scheme (without idf). When tf–idf was applied to citations, researchers could find no improvement over a simple citation-count weight that had no idf component.[13]

A number of term-weighting schemes have derived from tf–idf. One of them is TF–PDF (term frequency * proportional document frequency).[14] TF–PDF was introduced in 2001 in the context of identifying emerging topics in the media. The PDF component measures the difference of how often a term occurs in different domains. Another derivative is TF–IDuF. In TF–IDuF,[15] idf is not calculated based on the document corpus that is to be searched or recommended. Instead, idf is calculated on users' personal document collections. The authors report that TF–IDuF was equally effective as tf–idf but could also be applied in situations when, e.g., a user modeling system has no access to a global document corpus.
https://en.wikipedia.org/wiki/Term_frequency–inverse_document_frequency
A birthday attack is a brute-force collision attack that exploits the mathematics behind the birthday problem in probability theory. This attack can be used to abuse communication between two or more parties. The attack depends on the higher likelihood of collisions found between random attack attempts and a fixed degree of permutations (pigeonholes).

Let H be the number of possible values of a hash function, with $H = 2^l$. With a birthday attack, it is possible to find a collision of a hash function with 50% chance in $\sqrt{2^l} = 2^{l/2}$ evaluations, where l is the bit length of the hash output,[1][2] whereas $2^{l-1}$ is the classical preimage-resistance security bound at the same probability.[2] There is a general (though disputed[3]) result that quantum computers can perform birthday attacks, thus breaking collision resistance, in $\sqrt[3]{2^l} = 2^{l/3}$.[4]

Although there are some digital signature vulnerabilities associated with the birthday attack, it cannot be used to break an encryption scheme any faster than a brute-force attack.[5]: 36

As an example, consider the scenario in which a teacher with a class of 30 students (n = 30) asks for everybody's birthday (for simplicity, ignore leap years) to determine whether any two students have the same birthday (corresponding to a hash collision as described further). Intuitively, this chance may seem small. Counter-intuitively, the probability that at least one student has the same birthday as any other student on any day is around 70% for n = 30, from the formula $1 - \frac{365!}{(365-n)!\cdot 365^n}$.[6] If the teacher had picked a specific day (say, 16 September), then the chance that at least one student was born on that specific day is $1 - (364/365)^{30}$, about 7.9%.

In a birthday attack, the attacker prepares many different variants of benign and malicious contracts, each having a digital signature. A pair of benign and malicious contracts with the same signature is sought. In this fictional example, suppose that the digital signature of a string is the first byte of its SHA-256 hash. The pair found is indicated in green – note that finding a pair of benign contracts (blue) or a pair of malicious contracts (red) is useless. After the victim accepts the benign contract, the attacker substitutes it with the malicious one and claims the victim signed it, as proven by the digital signature.

The birthday attack can be modelled as a variation of the balls into bins problem, where balls (hash function inputs) are randomly placed into bins (hash function outputs). A hash collision occurs when at least two balls land in the same bin. Given a function f, the goal of the attack is to find two different inputs $x_1, x_2$ such that $f(x_1) = f(x_2)$. Such a pair $x_1, x_2$ is called a collision. The method used to find a collision is simply to evaluate the function f for different input values, chosen randomly or pseudorandomly, until the same result is found more than once. Because of the birthday problem, this method can be rather efficient. Specifically, if a function f(x) yields any of H different outputs with equal probability and H is sufficiently large, then we expect to obtain a pair of different arguments $x_1$ and $x_2$ with $f(x_1) = f(x_2)$ after evaluating the function for about $1.25\sqrt{H}$ different arguments on average.
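A minimal sketch of this brute-force search (illustrative only: the "weak hash" is a 24-bit truncation of SHA-256, chosen so the demo finishes quickly), which also compares the observed effort with the 1.25√H estimate:

```python
import hashlib
from itertools import count

def weak_hash(data: bytes, bits: int = 24) -> int:
    """Illustrative weak hash: the top `bits` bits of SHA-256."""
    digest = hashlib.sha256(data).digest()
    return int.from_bytes(digest[:4], "big") >> (32 - bits)

def find_collision(bits: int = 24):
    """Evaluate the hash on fresh inputs until an output repeats."""
    seen = {}
    for i in count():
        msg = str(i).encode()
        h = weak_hash(msg, bits)
        if h in seen:
            return seen[h], msg, i + 1      # the colliding pair and the effort
        seen[h] = msg

m1, m2, tries = find_collision()
print(m1, m2, tries)
print("expected ~", round(1.25 * (2 ** 24) ** 0.5))   # about 5120 evaluations
```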
We consider the following experiment. From a set of H values we choose n values uniformly at random, thereby allowing repetitions. Let p(n; H) be the probability that during this experiment at least one value is chosen more than once. This probability can be approximated as
$p(n; H) \approx 1 - e^{-n(n-1)/(2H)},$
where n is the number of chosen values (inputs) and H is the number of possible outcomes (possible hash outputs).

Let n(p; H) be the smallest number of values we have to choose such that the probability of finding a collision is at least p. By inverting the expression above, we find the following approximation:
$n(p; H) \approx \sqrt{2H \ln \frac{1}{1-p}},$
and assigning a 0.5 probability of collision we arrive at
$n(0.5; H) \approx 1.1774 \sqrt{H}.$

Let Q(H) be the expected number of values we have to choose before finding the first collision. This number can be approximated by
$Q(H) \approx \sqrt{\tfrac{\pi}{2} H}.$

As an example, if a 64-bit hash is used, there are approximately 1.8 × 10¹⁹ different outputs. If these are all equally probable (the best case), then it would take 'only' approximately 5 billion attempts (5.38 × 10⁹) to generate a collision using brute force.[8] This value is called the birthday bound,[9] and it could be approximated as $2^{l/2}$, where l is the number of bits in H.[10] Other examples are as follows:

It is easy to see that if the outputs of the function are distributed unevenly, then a collision could be found even faster. The notion of 'balance' of a hash function quantifies the resistance of the function to birthday attacks (exploiting uneven key distribution). However, determining the balance of a hash function will typically require all possible inputs to be calculated, and thus is infeasible for popular hash functions such as the MD and SHA families.[12] The subexpression $\ln \frac{1}{1-p}$ in the equation for n(p; H) is not computed accurately for small p when directly translated into common programming languages as log(1/(1-p)), due to loss of significance. When log1p is available (as it is in C99), for example, the equivalent expression -log1p(-p) should be used instead.[13] If this is not done, the first column of the above table is computed as zero, and several items in the second column do not have even one correct significant digit.

A good rule of thumb which can be used for mental calculation is the relation
$p(n) \approx \frac{n^2}{2H},$
which can also be written as
$n \approx \sqrt{2H \cdot p}$ or $H \approx \frac{n^2}{2p}.$
This works well for probabilities less than or equal to 0.5. This approximation scheme is especially easy to use when working with exponents. For instance, suppose you are building 32-bit hashes ($H = 2^{32}$) and want the chance of a collision to be at most one in a million ($p \approx 2^{-20}$); how many documents could we have at the most? Then
$n \approx \sqrt{2 \cdot 2^{32} \cdot 2^{-20}} = \sqrt{2^{13}} = 2^{6.5} \approx 90.5,$
which is close to the correct answer of 93.

Digital signatures can be susceptible to a birthday attack or, more precisely, a chosen-prefix collision attack. A message m is typically signed by first computing f(m), where f is a cryptographic hash function, and then using some secret key to sign f(m). Suppose Mallory wants to trick Bob into signing a fraudulent contract.
Digital signatures can be susceptible to a birthday attack or, more precisely, a chosen-prefix collision attack. A message $m$ is typically signed by first computing $f(m)$, where $f$ is a cryptographic hash function, and then using some secret key to sign $f(m)$. Suppose Mallory wants to trick Bob into signing a fraudulent contract. Mallory prepares a fair contract $m$ and a fraudulent one $m'$. She then finds a number of positions where $m$ can be changed without changing the meaning, such as inserting commas, empty lines, one versus two spaces after a sentence, replacing synonyms, etc. By combining these changes, she can create a huge number of variations on $m$ which are all fair contracts. In a similar manner, Mallory also creates a huge number of variations on the fraudulent contract $m'$. She then applies the hash function to all these variations until she finds a version of the fair contract and a version of the fraudulent contract which have the same hash value, $f(m) = f(m')$. She presents the fair version to Bob for signing. After Bob has signed, Mallory takes the signature and attaches it to the fraudulent contract. This signature then "proves" that Bob signed the fraudulent contract.

The probabilities differ slightly from the original birthday problem, as Mallory gains nothing by finding two fair or two fraudulent contracts with the same hash. Mallory's strategy is to generate pairs of one fair and one fraudulent contract. For a given hash function, $2^l$ is the number of possible hashes, where $l$ is the bit length of the hash output. The birthday problem equations do not exactly apply here: for a 50% chance of a collision, Mallory would need to generate approximately $2^{(l/2)+1}$ hashes, which is twice the number required for a simple collision under the classical birthday problem.

To avoid this attack, the output length of the hash function used for a signature scheme can be chosen large enough so that the birthday attack becomes computationally infeasible, i.e. about twice as many bits as are needed to prevent an ordinary brute-force attack. Besides using a larger bit length, the signer (Bob) can protect himself by making some random, inoffensive changes to the document before signing it, and by keeping a copy of the contract he signed in his own possession, so that he can at least demonstrate in court that his signature matches that contract, not just the fraudulent one.

Pollard's rho algorithm for logarithms is an example of an algorithm that uses a birthday attack for the computation of discrete logarithms.

The same fraud is possible if the signer is Mallory rather than Bob. Bob could suggest a contract to Mallory for a signature. Mallory could find both an inoffensively modified version of this fair contract having the same signature as a fraudulent contract, and could provide the modified fair contract and signature to Bob. Later, Mallory could produce the fraudulent copy. If Bob does not have the inoffensively modified version of the contract (perhaps only finding his original proposal), Mallory's fraud is perfect. If Bob does have it, Mallory can at least claim that it is Bob who is the fraudster.
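The contract fraud is simple to simulate with a toy signature. In the sketch below, the contracts are hypothetical and a 20-bit truncated SHA-256 stands in for a real signature scheme so that the search is instant; Mallory generates 2**12 innocuous variants of each contract and looks for a cross-set match:

```python
import hashlib

def sig(text: str, bits: int = 20) -> int:
    """Toy 'signature': the first `bits` bits of SHA-256. A real scheme signs
    a full-length hash; this one is shortened so the search finishes fast."""
    return int.from_bytes(hashlib.sha256(text.encode()).digest(), "big") >> (256 - bits)

CLAUSES = [f"Clause {i} applies." for i in range(12)]

def variants(base: str):
    """2**12 semantically equivalent variants: each clause is appended with
    either one or two spaces, one of the innocuous tweaks the text mentions."""
    for mask in range(2 ** len(CLAUSES)):
        doc = base
        for i, clause in enumerate(CLAUSES):
            doc += ("  " if (mask >> i) & 1 else " ") + clause
        yield doc

fair_sigs = {sig(doc): doc for doc in variants("Mallory owes Bob $100.")}
for doc in variants("Bob owes Mallory $10000."):
    if sig(doc) in fair_sigs:  # one fair and one fraudulent contract collide
        print("shared toy signature:", hex(sig(doc)))
        break
# With 2**12 variants on each side and a 20-bit signature, a cross-collision
# occurs with probability about 1 - exp(-2**24 / 2**20), i.e. near certainty.
```

Note that only the fair contracts are indexed and only the fraudulent variants are probed against them, reflecting the point above that two fair or two fraudulent collisions are useless to Mallory.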
https://en.wikipedia.org/wiki/Birthday_attack
Product management is the business process of planning, developing, launching, and managing a product or service. It includes the entire lifecycle of a product, from ideation to development to go-to-market. Product managers are responsible for ensuring that a product meets the needs of its target market and contributes to the business strategy, while managing a product or products at all stages of the product lifecycle. Software product management adapts the fundamentals of product management for digital products.

The concept of product management originates from a 1931 memo by Procter & Gamble president Neil H. McElroy. McElroy, requesting additional employees focused on brand management, needed "Brand Men" who would take on the role of managing products, packaging, positioning, distribution, and sales performance. In modern terms, the memo defined a brand man's work as: analyzing product distribution, optimizing distribution strategies, diagnosing and solving distribution issues, optimizing product positioning and product marketing, and collaborating with regional distribution managers.

Product managers are responsible for managing a company's product line on a day-to-day basis. As a result, product managers are critical in driving a company's growth, margins, and revenue. They are responsible for the business case, conceptualizing, planning, product development, product marketing, and delivering products to their target market. Depending on the company's size, industry, and history, product management has a variety of functions and roles. Frequently there is an income statement (or profit and loss) responsibility as a key metric for evaluating product manager performance. Product managers analyze information including customer research, competitive intelligence, industry analysis, trends, economic signals, and competitive activity,[1] as well as documenting requirements, setting product strategy, and creating the roadmap. Product managers align across departments within their company, including product design and development, marketing, sales, customer support, and legal.
https://en.wikipedia.org/wiki/Product_management
In computer science, empirical algorithmics (or experimental algorithmics) is the practice of using empirical methods to study the behavior of algorithms. The practice combines algorithm development and experimentation: algorithms are not just designed, but also implemented and tested in a variety of situations. In this process, an initial design of an algorithm is analyzed so that the algorithm may be developed in a stepwise manner.[1]

Methods from empirical algorithmics complement theoretical methods for the analysis of algorithms.[2] Through the principled application of empirical methods, particularly from statistics, it is often possible to obtain insights into the behavior of algorithms such as high-performance heuristic algorithms for hard combinatorial problems that are (currently) inaccessible to theoretical analysis.[3] Empirical methods can also be used to achieve substantial improvements in algorithmic efficiency.[4]

American computer scientist Catherine McGeoch identifies two main branches of empirical algorithmics: the first (known as empirical analysis) deals with the analysis and characterization of the behavior of algorithms, and the second (known as algorithm design or algorithm engineering) is focused on empirical methods for improving the performance of algorithms.[5] The former often relies on techniques and tools from statistics, while the latter is based on approaches from statistics, machine learning and optimization. Dynamic analysis tools, typically performance profilers, are commonly used when applying empirical methods for the selection and refinement of algorithms of various types for use in various contexts.[6][7][8]

Research in empirical algorithmics is published in several journals, including the ACM Journal on Experimental Algorithmics (JEA) and the Journal of Artificial Intelligence Research (JAIR). Besides Catherine McGeoch, well-known researchers in empirical algorithmics include Bernard Moret, Giuseppe F. Italiano, Holger H. Hoos, David S. Johnson, and Roberto Battiti.[9]

In the absence of empirical algorithmics, analyzing the complexity of an algorithm can involve various theoretical methods applicable to various situations in which the algorithm may be used.[10] Memory and cache considerations are often significant factors to be considered in the theoretical choice of a complex algorithm, or the approach to its optimization, for a given purpose.[11][12] Performance profiling is a dynamic program analysis technique typically used for finding and analyzing bottlenecks in an entire application's code[13][14][15] or for analyzing an entire application to identify poorly performing code.[16] A profiler can reveal the code most relevant to an application's performance issues.[17]

A profiler may help to determine when to choose one algorithm over another in a particular situation.[18] When an individual algorithm is profiled, as with complexity analysis, memory and cache considerations are often more significant than instruction counts or clock cycles; however, the profiler's findings can be considered in light of how the algorithm accesses data rather than the number of instructions it uses.[19]

Profiling may provide intuitive insight into an algorithm's behavior[20] by revealing performance findings as a visual representation.[21] Performance profiling has been applied, for example, during the development of algorithms for matching wildcards.
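As a minimal illustration of the empirical approach (the task, algorithms, and input size here are arbitrary choices for the sketch, not drawn from any cited study), two implementations of the same predicate can be compared by measurement rather than by asymptotic reasoning alone:

```python
import timeit

def has_duplicate_quadratic(xs):
    """O(n^2): compare every pair."""
    return any(xs[i] == xs[j]
               for i in range(len(xs))
               for j in range(i + 1, len(xs)))

def has_duplicate_sorting(xs):
    """O(n log n): sort, then compare neighbours."""
    s = sorted(xs)
    return any(a == b for a, b in zip(s, s[1:]))

data = list(range(1000))  # worst case: no duplicates at all
for fn in (has_duplicate_quadratic, has_duplicate_sorting):
    seconds = timeit.timeit(lambda: fn(data), number=5)
    print(f"{fn.__name__}: {seconds:.3f}s for 5 runs")
```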
Early algorithms for matching wildcards, such as Rich Salz's wildmat algorithm,[22] typically relied on recursion, a technique criticized on grounds of performance.[23] The Krauss matching wildcards algorithm was developed based on an attempt to formulate a non-recursive alternative using test cases,[24] followed by optimizations suggested via performance profiling,[25] resulting in a new algorithmic strategy conceived in light of the profiling along with other considerations.[26] Profilers that collect data at the level of basic blocks[27] or that rely on hardware assistance[28] provide results that can be accurate enough to assist software developers in optimizing algorithms for a particular computer or situation.[29] Performance profiling can aid developer understanding of the characteristics of complex algorithms applied in complex situations, such as coevolutionary algorithms applied to arbitrary test-based problems, and may help lead to design improvements.[30]
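To make the profiling workflow concrete, the following sketch runs Python's built-in cProfile over a deliberately naive recursive wildcard matcher (written for this example; it is not Salz's wildmat or the Krauss algorithm). The exploding call count in the profiler's report is exactly the kind of finding that motivated non-recursive redesigns:

```python
import cProfile

def wild_match(pattern: str, text: str) -> bool:
    """Naive recursion: '?' matches one character, '*' matches any run."""
    if not pattern:
        return not text
    if pattern[0] == "*":
        # Either '*' matches nothing, or it absorbs one more character.
        return wild_match(pattern[1:], text) or (
            bool(text) and wild_match(pattern, text[1:]))
    if text and pattern[0] in ("?", text[0]):
        return wild_match(pattern[1:], text[1:])
    return False

# A pathological non-matching input: the report shows how many times
# wild_match is re-entered, pinpointing the recursive blow-up.
cProfile.run("wild_match('*a*a*a*a*b', 'a' * 20)")
```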
https://en.wikipedia.org/wiki/Empirical_algorithmics
Worldwide Interoperability for Microwave Access (WiMAX) is a family of wireless broadband communication standards based on the IEEE 802.16 set of standards, which provide physical layer (PHY) and media access control (MAC) options. The WiMAX Forum was formed in June 2001 to promote conformity and interoperability, including the definition of system profiles for commercial vendors.[1] The forum describes WiMAX as "a standards-based technology enabling the delivery of last mile wireless broadband access as an alternative to cable and DSL".[2]

WiMAX was initially designed to provide 30 to 40 megabit-per-second data rates,[3] with the 2011 update providing up to 1 Gbit/s[3] for fixed stations. IEEE 802.16m, or Wireless MAN-Advanced, was a candidate for 4G, in competition with the LTE Advanced standard. WiMAX release 2.1, popularly branded as WiMAX 2+, is a backwards-compatible transition from previous WiMAX generations. It is compatible and interoperable with TD-LTE. Newer versions, still backward compatible, include WiMAX release 2.2 (2014) and WiMAX release 3 (2021, which adds interoperation with 5G NR).

WiMAX refers to interoperable implementations of the IEEE 802.16 family of wireless-network standards ratified by the WiMAX Forum. (Similarly, Wi-Fi refers to interoperable implementations of the IEEE 802.11 wireless LAN standards certified by the Wi-Fi Alliance.) WiMAX Forum certification allows vendors to sell fixed or mobile products as WiMAX certified, thus ensuring a level of interoperability with other certified products, as long as they fit the same profile.

The original IEEE 802.16 standard (now called "Fixed WiMAX") was published in 2001. WiMAX adopted some of its technology from WiBro, a service marketed in Korea.[4] Mobile WiMAX (originally based on 802.16e-2005) is the revision that was deployed in many countries and is the basis for future revisions such as 802.16m-2011. WiMAX was sometimes referred to as "Wi-Fi on steroids"[5] and can be used for a number of applications including broadband connections, cellular backhaul, hotspots, etc. It is similar to long-range Wi-Fi, but it can enable usage at much greater distances.[6]

The scalable physical-layer architecture, which allows the data rate to scale easily with available channel bandwidth, and the range of WiMAX make it suitable for a variety of potential applications. WiMAX can provide at-home or mobile Internet access across whole cities or countries. In many cases, this has resulted in competition in markets which typically only had access through an existing incumbent DSL (or similar) operator. Additionally, given the relatively low costs associated with the deployment of a WiMAX network (in comparison with 3G, HSDPA, xDSL, HFC or FTTx), it is now economically viable to provide last-mile broadband Internet access in remote locations.

Mobile WiMAX was a replacement candidate for cellular phone technologies such as GSM and CDMA, or can be used as an overlay to increase capacity. Fixed WiMAX is also considered a wireless backhaul technology for 2G, 3G, and 4G networks in both developed and developing nations.[7][8] In North America, backhaul for urban operations is typically provided via one or more copper wireline connections, whereas remote cellular operations are sometimes backhauled via satellite. In other regions, urban and rural backhaul is usually provided by microwave links. (The exception to this is where the network is operated by an incumbent with ready access to the copper network.) WiMAX has more substantial backhaul bandwidth requirements than legacy cellular applications.
Consequently, the use of wireless microwave backhaul is on the rise in North America, and existing microwave backhaul links in all regions are being upgraded.[9] Capacities of between 34 Mbit/s and 1 Gbit/s[10] are routinely being deployed with latencies in the order of 1 ms. In many cases, operators are aggregating sites using wireless technology and then presenting traffic on to fiber networks where convenient. WiMAX in this application competes with microwave radio, E-line and simple extension of the fiber network itself.

WiMAX directly supports the technologies that make triple-play service offerings possible (such as quality of service and multicast). These are inherent to the WiMAX standard rather than being added on, as carrier Ethernet is to Ethernet.

On May 7, 2008, in the United States, Sprint Nextel, Google, Intel, Comcast, Bright House, and Time Warner announced a pooling of an average of 120 MHz of spectrum and merged with Clearwire to market the service. The new company hoped to benefit from combined services offerings and network resources as a springboard past its competitors. The cable companies were expected to provide media services to other partners while gaining access to the wireless network as a mobile virtual network operator to provide triple-play services. Some wireless industry analysts, such as Ken Dulaney and Todd Kort at Gartner, were skeptical of how the deal would work out: although fixed-mobile convergence had been a recognized factor in the industry, prior attempts to form partnerships among wireless and cable companies had generally failed to lead to significant benefits for the participants. Other analysts at IDC favored the deal, pointing out that as wireless progresses to higher bandwidth, it inevitably competes more directly with cable, DSL and fiber, inspiring competitors into collaboration. Also, as wireless broadband networks grow denser and usage habits shift, the need for increased backhaul and media services accelerates, so the opportunity to leverage high-bandwidth assets was expected to increase.

The Aeronautical Mobile Airport Communication System (AeroMACS) is a wireless broadband network for the airport surface intended to link the control tower, aircraft, and fixed assets. In 2007, AeroMACS obtained a worldwide frequency allocation in the 5 GHz aviation band. As of 2018, there were 25 AeroMACS deployments in 8 countries, with at least another 25 deployments planned.[11]

The IEEE 802.16REVd and IEEE 802.16e standards support both time-division duplexing and frequency-division duplexing, as well as a half-duplex FDD that allows for a low-cost implementation.

Devices that provide connectivity to a WiMAX network are known as subscriber stations (SS). Portable units include handsets (similar to cellular smartphones); PC peripherals (PC Cards or USB dongles); and embedded devices in laptops, which are now available for Wi-Fi services. In addition, there is much emphasis by operators on consumer electronics devices such as gaming consoles, MP3 players and similar devices. WiMAX is more similar to Wi-Fi than to other 3G cellular technologies.

The WiMAX Forum website provides a list of certified devices. However, this is not a complete list of devices available, as certified modules are embedded into laptops, MIDs (mobile Internet devices), and other private-labeled devices. WiMAX gateway devices are available as both indoor and outdoor versions from manufacturers including Vecima Networks, Alvarion, Airspan, ZyXEL, Huawei, and Motorola.
The list of WiMAX networks and the WiMAX Forum[12] provide more links to specific vendors, products and installations. Many of the WiMAX gateways offered by manufacturers such as these are stand-alone self-install indoor units. Such devices typically sit near the customer's window with the best signal and provide the premises' network connectivity. Indoor gateways are convenient, but radio losses mean that the subscriber may need to be significantly closer to the WiMAX base station than with professionally installed external units. Outdoor units are roughly the size of a laptop PC, and their installation is comparable to the installation of a residential satellite dish. A higher-gain directional outdoor unit will generally result in greatly increased range and throughput, but with the obvious loss of practical mobility of the unit.

USB can provide connectivity to a WiMAX network through a dongle. Generally, these devices are connected to a notebook or netbook computer. Dongles typically have omnidirectional antennas which are of lower gain compared to other devices. As such, these devices are best used in areas of good coverage.

HTC announced the first WiMAX-enabled mobile phone, the Max 4G, on November 12, 2008.[13] The device was only available to certain markets in Russia on the Yota network until 2010.[14] HTC and Sprint Nextel released the second WiMAX-enabled mobile phone, the HTC Evo 4G, on March 23, 2010 at the CTIA conference in Las Vegas. The device, made available on June 4, 2010,[15] is capable of both EV-DO (3G) and WiMAX (pre-4G), as well as simultaneous data and voice sessions. Sprint Nextel announced at CES 2012 that it would no longer be offering devices using the WiMAX technology due to financial circumstances; instead, along with its network partner Clearwire, Sprint Nextel rolled out a 4G network, having decided to shift to LTE 4G technology.

WiMAX is based upon IEEE 802.16e-2005,[16] approved in December 2005. It is a supplement to IEEE Std 802.16-2004,[17] and so the actual standard is 802.16-2004 as amended by 802.16e-2005; thus, these specifications need to be considered together. IEEE 802.16e-2005 improves upon IEEE 802.16-2004 in several respects. SOFDMA (used in 802.16e-2005) and OFDM256 (802.16d) are not compatible, so equipment will have to be replaced if an operator is to move to the later standard (e.g., from Fixed WiMAX to Mobile WiMAX).

The original version of the standard on which WiMAX is based (IEEE 802.16) specified a physical layer operating in the 10 to 66 GHz range. 802.16a, updated in 2004 to 802.16-2004, added specifications for the 2 to 11 GHz range. 802.16-2004 was updated by 802.16e-2005 in 2005 and uses scalable orthogonal frequency-division multiple access[18] (SOFDMA), as opposed to the fixed orthogonal frequency-division multiplexing (OFDM) version with 256 sub-carriers (of which 200 are used) in 802.16d. More advanced versions, including 802.16e, also bring multiple-antenna support through MIMO (see WiMAX MIMO). This brings potential benefits in terms of coverage, self installation, power consumption, frequency re-use and bandwidth efficiency. WiMAX is the most energy-efficient pre-4G technique among LTE and HSPA+.[19]

The WiMAX MAC uses a scheduling algorithm for which the subscriber station needs to compete only once for initial entry into the network. After network entry is allowed, the subscriber station is allocated an access slot by the base station. The time slot can enlarge and contract, but remains assigned to the subscriber station, which means that other subscribers cannot use it.
In addition to being stable under overload and over-subscription, the scheduling algorithm can also be more bandwidth efficient. The scheduling algorithm also allows the base station to control QoS parameters by balancing the time-slot assignments among the application needs of the subscriber stations.

As a standard intended to satisfy the needs of next-generation data networks (4G), WiMAX is distinguished by its dynamic burst algorithm, with modulation adaptive to the physical environment the RF signal travels through. Modulation is chosen to be more spectrally efficient (more bits per OFDM/SOFDMA symbol) when conditions allow. That is, when the bursts have a high signal strength and a high carrier-to-noise-plus-interference ratio (CINR), they can be more easily decoded using digital signal processing (DSP). In contrast, operating in less favorable environments for RF communication, the system automatically steps down to a more robust mode (burst profile), which means fewer bits per OFDM/SOFDMA symbol, with the advantage that power per bit is higher and therefore simpler, more accurate signal processing can be performed.

Burst profiles are assigned dynamically, in inverse relation to signal attenuation, meaning that throughput between clients and the base station is determined largely by distance. Maximum distance is achieved by the use of the most robust burst setting, that is, the profile with the largest MAC-frame-allocation trade-off, requiring more symbols (a larger portion of the MAC frame) to be allocated in transmitting a given amount of data than if the client were closer to the base station. Each client's MAC frame and its individual burst profiles are defined, as well as the specific time allocation. However, even though this is done automatically, practical deployments should avoid high-interference and multipath environments, because too much interference causes the network to function poorly and can also misrepresent the capability of the network.

The system is complex to deploy, as it is necessary to track not only the signal strength and CINR (as in systems like GSM) but also how the available frequencies will be dynamically assigned (resulting in dynamic changes to the available bandwidth). This could lead to cluttered frequencies with slow response times or lost frames. As a result, the system has to be initially designed in consensus with the base station product team to accurately project frequency use, interference, and general product functionality.
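The step-down behaviour described above amounts, in essence, to a threshold table from measured CINR to a burst profile. A toy sketch follows; the dB thresholds are invented for illustration (real deployments derive them from link budgets), while the modulations listed are the usual WiMAX ones:

```python
# Illustrative adaptive-modulation table; thresholds are made up for
# the sketch, not taken from the standard:
PROFILES = [  # (min CINR in dB, modulation, bits per symbol)
    (20.0, "64-QAM", 6),
    (14.0, "16-QAM", 4),
    (6.0,  "QPSK",   2),
]

def pick_burst_profile(cinr_db: float):
    """Step down to a more robust profile (fewer bits per symbol) as the
    carrier-to-noise-plus-interference ratio degrades."""
    for threshold, modulation, bits in PROFILES:
        if cinr_db >= threshold:
            return modulation, bits
    return "BPSK", 1  # most robust fallback, maximizing range

for cinr in (25.0, 15.0, 8.0, 2.0):
    print(cinr, "dB ->", pick_burst_profile(cinr))
```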
The Asia-Pacific region has surpassed the North American region in terms of 4G broadband wireless subscribers. There were around 1.7 million pre-WiMAX and WiMAX customers in Asia – 29% of the overall market – compared to 1.4 million in the US and Canada.[20]

The WiMAX Forum has proposed an architecture that defines how a WiMAX network can be connected with an IP-based core network, which is typically chosen by operators that serve as Internet service providers (ISPs); nevertheless, WiMAX base stations provide seamless integration capabilities with other types of architectures, as with packet-switched mobile networks. The WiMAX Forum proposal defines a number of components, plus some of the interconnections (or reference points) between these, labeled R1 to R5 and R8. The functional architecture can be designed into various hardware configurations rather than fixed configurations. For example, the architecture is flexible enough to allow remote/mobile stations of varying scale and functionality and base stations of varying size, e.g. femto, pico, and mini base stations as well as macros.

WiMAX 2.1 and above can be integrated with an LTE TDD network and perform handovers from/to LTE TDD.[22] WiMAX 3 expands the integration to 5G NR.[23]

There is no uniform global licensed spectrum for WiMAX; however, the WiMAX Forum published three licensed spectrum profiles – 2.3 GHz, 2.5 GHz and 3.5 GHz – in an effort to drive standardisation and decrease cost. In the US, the biggest segment available was around 2.5 GHz,[24] and is already assigned, primarily to Sprint Nextel and Clearwire. Elsewhere in the world, the most likely bands used will be the Forum-approved ones, with 2.3 GHz probably being most important in Asia. Some countries in Asia, like India and Indonesia, will use a mix of 2.5 GHz, 3.3 GHz and other frequencies. Pakistan's Wateen Telecom uses 3.5 GHz.

Analog TV bands (700 MHz) may become available, but await the completion of the digital television transition, and other uses have been suggested for that spectrum. In the USA, the FCC auction for this spectrum began in January 2008 and, as a result, the biggest share of the spectrum went to Verizon Wireless and the next biggest to AT&T.[25] Both of these companies stated their intention of supporting LTE, a technology which competes directly with WiMAX. EU commissioner Viviane Reding suggested the re-allocation of 500–800 MHz spectrum for wireless communication, including WiMAX.[26]

WiMAX profiles define channel size, TDD/FDD and other necessary attributes in order to have interoperating products. The current fixed profiles are defined for both TDD and FDD; at this point, all of the mobile profiles are TDD only. The fixed profiles have channel sizes of 3.5 MHz, 5 MHz, 7 MHz and 10 MHz. The mobile profiles are 5 MHz, 8.75 MHz and 10 MHz. (Note: the 802.16 standard allows a far wider variety of channels, but only the above subsets are supported as WiMAX profiles.) In October 2007, the Radiocommunication Sector of the International Telecommunication Union (ITU-R) decided to include WiMAX technology in the IMT-2000 set of standards.[27] This enables spectrum owners (specifically in the 2.5–2.69 GHz band at this stage) to use WiMAX equipment in any country that recognizes IMT-2000.

WiMAX cannot deliver 70 Mbit/s over 50 km (31 mi). Like all wireless technologies, WiMAX can operate at higher bitrates or over longer distances, but not both. Operating at the maximum range of 50 km (31 mi) increases the bit error rate and thus results in a much lower bitrate. Conversely, reducing the range (to under 1 km) allows a device to operate at higher bitrates. A citywide deployment of WiMAX in Perth, Australia demonstrated that customers at the cell edge with indoor customer-premises equipment (CPE) typically obtain speeds of around 1–4 Mbit/s, with users closer to the cell site obtaining speeds of up to 30 Mbit/s.[citation needed]

Like all wireless systems, available bandwidth is shared between users in a given radio sector, so performance could deteriorate in the case of many active users in a single sector. However, with adequate capacity planning and the use of WiMAX's QoS, a minimum guaranteed throughput for each subscriber can be put in place. In practice, most users will have a range of 4–8 Mbit/s services, and additional radio cards will be added to the base station to increase the number of users that may be served as required.

A number of specialized companies produced baseband ICs and integrated RFICs for WiMAX subscriber stations in the 2.3, 2.5 and 3.5 GHz bands (refer to 'Spectrum allocation' above).
These companies include, but are not limited to, Beceem, Sequans, and PicoChip.

Comparisons and confusion between WiMAX and Wi-Fi are frequent, because both are related to wireless connectivity and Internet access.[28] Although Wi-Fi and WiMAX are designed for different situations, they are complementary. WiMAX network operators typically provide a WiMAX subscriber unit that connects to the metropolitan WiMAX network and provides Wi-Fi connectivity within the home or business for computers and smartphones. This enables the user to place the WiMAX subscriber unit in the best reception area, such as a window, and have data access throughout their property.

The TTCN-3 test specification language is used for the purposes of specifying conformance tests for WiMAX implementations. The WiMAX test suite is being developed by a Specialist Task Force at ETSI (STF 252).[29]

The WiMAX Forum is a non-profit organization formed to promote the adoption of WiMAX-compatible products and services.[30] A major role for the organization is to certify the interoperability of WiMAX products.[31] Those that pass conformance and interoperability testing achieve the "WiMAX Forum Certified" designation and can display this mark on their products and marketing materials. Some vendors claim that their equipment is "WiMAX-ready", "WiMAX-compliant", or "pre-WiMAX" if it is not officially WiMAX Forum Certified. Another role of the WiMAX Forum is to promote the spread of knowledge about WiMAX. In order to do so, it has a certified training program that is currently offered in English and French. It also offers a series of member events and endorses some industry events.

WiSOA was the first global organization composed exclusively of owners of WiMAX spectrum with plans to deploy WiMAX technology in those bands. WiSOA focused on the regulation, commercialisation, and deployment of WiMAX spectrum in the 2.3–2.5 GHz and the 3.4–3.5 GHz ranges. WiSOA merged with the Wireless Broadband Alliance in April 2008.[32]

In 2011, the Telecommunications Industry Association released three technical standards (TIA-1164, TIA-1143, and TIA-1140) that cover the air interface and core networking aspects of WiMAX High-Rate Packet Data (HRPD) systems using a Mobile Station/Access Terminal (MS/AT) with a single transmitter.[33]

Within the marketplace, WiMAX's main competition came from existing, widely deployed wireless systems such as the Universal Mobile Telecommunications System (UMTS), CDMA2000, existing Wi-Fi, mesh networking and eventually 4G (LTE). In the future, competition will be from the evolution of the major cellular standards to 4G:[needs update] high-bandwidth, low-latency, all-IP networks with voice services built on top. The worldwide move to 4G for GSM/UMTS and AMPS/TIA (including CDMA2000) is the 3GPP Long Term Evolution (LTE) effort.

The LTE standard was finalized in December 2008, with the first commercial deployment of LTE carried out by TeliaSonera in Oslo and Stockholm in December 2009. Thereafter, LTE saw rapidly increasing adoption by mobile carriers around the world. Although WiMAX was much earlier to market than LTE, LTE was an upgrade and extension of previous 3G (GSM and CDMA) standards, whereas WiMAX was a relatively new and different technology without a large user base. Ultimately, LTE won the war to become the 4G standard because mobile operators such as Verizon, AT&T, Vodafone, NTT, and Deutsche Telekom chose to extend their investments in know-how, equipment and spectrum from 3G to LTE, rather than adopt a new technology standard.
It would never have been cost-effective for WiMAX network operators to compete against fixed-line broadband networks on the basis of 4G technologies. By 2009, most mobile operators had begun to realize that mobile connectivity (not fixed 802.16e) was the future, and that LTE was going to become the new worldwide mobile connectivity standard, so they chose to wait for LTE to develop rather than switch from 3G to WiMAX. WiMAX was a superior technology in terms of speed (roughly 25 Mbit/s) for a few years (2005–2009), and it pioneered some new technologies such as MIMO. But the mobile version of WiMAX (802.16m), intended to compete with GSM and CDMA technologies, was too little, too late in getting established, and by the time the LTE standard was finalized in December 2008, the fate of WiMAX as a mobile solution was sealed: it was clear that LTE (not WiMAX) would become the world's new 4G standard. The largest wireless broadband partner using WiMAX, Clearwire, announced in 2008 that it would begin overlaying its existing WiMAX network with LTE technology, which was necessary for Clearwire to obtain the investments it needed to stay in business.

In some areas of the world, the wide availability of UMTS and a general desire for standardization meant spectrum was not allocated for WiMAX: in July 2005, the EU-wide frequency allocation for WiMAX was blocked.[citation needed]

The early WirelessMAN standards, the European standard HiperMAN and the Korean standard WiBro, were harmonized as part of WiMAX and are no longer seen as competition but as complementary.[citation needed] All networks being deployed in South Korea, the home of the WiBro standard, are now WiMAX.[citation needed]

The IEEE 802.16m-2011 standard[34] was the core technology for WiMAX 2. The IEEE 802.16m standard was submitted to the ITU for IMT-Advanced standardization,[35] and was one of the major candidates for IMT-Advanced technologies by the ITU. Among many enhancements, IEEE 802.16m systems can provide four times faster[clarification needed] data speeds than WiMAX Release 1. WiMAX Release 2 provided backward compatibility with Release 1. WiMAX operators could migrate from Release 1 to Release 2 by upgrading channel cards or software. The WiMAX 2 Collaboration Initiative was formed to help this transition.[36]

It was anticipated that, using 4×2 MIMO in the urban microcell scenario with only a single 20 MHz TDD channel available system-wide, the 802.16m system could support both 120 Mbit/s downlink and 60 Mbit/s uplink per site simultaneously. It was expected that WiMAX Release 2 would be available commercially in the 2011–2012 timeframe.[37]

WiMAX Release 2.1, released in the early 2010s, broke compatibility with earlier WiMAX networks.[citation needed] A significant number of operators had migrated to the new standard, which is compatible with TD-LTE, by the end of the 2010s.

A field test conducted in 2007 by SUIRG (Satellite Users Interference Reduction Group), with support from the U.S. Navy, the Global VSAT Forum, and several member organizations, yielded results showing interference at 12 km when using the same channels for both the WiMAX systems and satellites in C-band.[38]

As of October 2010, the WiMAX Forum claimed over 592 WiMAX (fixed and mobile) networks deployed in over 148 countries, covering over 621 million people.[39] By February 2011, the WiMAX Forum cited coverage of over 823 million people, and estimated coverage of over 1 billion people by the end of the year.
Note that coverage means the offer of availability of WiMAX service to populations within various geographies, not the number of WiMAX subscribers.[40]

South Korea launched a WiMAX network in the second quarter of 2006. Spain delivered full coverage in two cities, Seville and Málaga, in 2008, reaching 20,000 portable units. By the end of 2008 there were 350,000 WiMAX subscribers in Korea.[41] Worldwide, by early 2010 WiMAX seemed to be ramping up quickly relative to other available technologies, though access in North America lagged.[42] Yota, the largest WiMAX network operator in the world in 4Q 2009,[43][44] announced in May 2010 that it would move new network deployments to LTE and, subsequently, change its existing networks as well.[citation needed]

A study published in September 2010 by Blycroft Publishing estimated 800 management contracts from 364 WiMAX operations worldwide offering active services (launched or still trading, as opposed to just licensed and still to launch).[45] The WiMAX Forum announced on August 16, 2011 that there were more than 20 million WiMAX subscribers worldwide, the high-water mark for this technology (http://wimaxforum.org/Page/News/PR/20110816_WiMAX_Subscriptions_Surpass_20_Million_Globally).
https://en.wikipedia.org/wiki/WiMAX-Advanced
In control theory a self-tuning system is capable of optimizing its own internal running parameters in order to maximize or minimize the fulfilment of an objective function, typically the maximization of efficiency or the minimization of error. Self-tuning and auto-tuning often refer to the same concept; many software research groups consider auto-tuning the proper nomenclature. Self-tuning systems typically exhibit non-linear adaptive control. Self-tuning systems have been a hallmark of the aerospace industry for decades, as this sort of feedback is necessary to generate optimal multi-variable control for non-linear processes. In the telecommunications industry, adaptive communications are often used to dynamically modify operational system parameters to maximize efficiency and robustness.

There are numerous examples of self-tuning systems in computing, and the performance benefits can be substantial. Professor Jack Dongarra, an American computer scientist, claims self-tuning boosts performance, often on the order of 300%.[1] Digital self-tuning controllers are an example of self-tuning systems at the hardware level.

Self-tuning systems are typically composed of four components: expectations, measurement, analysis, and actions. The expectations describe how the system should behave given exogenous conditions. Measurements gather data about the conditions and behaviour. Analysis helps determine whether the expectations are being met, and which subsequent actions should be performed. Common actions are gathering more data and performing dynamic reconfiguration of the system.

Self-tuning (self-adapting) systems of automatic control are systems whereby adaptation to randomly changing conditions is performed by means of automatically changing parameters or via automatically determining their optimum configuration.[2] In any non-self-tuning automatic control system there are parameters which have an influence on system stability and control quality and which can be tuned. If these parameters remain constant whilst operating conditions (such as input signals or different characteristics of controlled objects) are substantially varying, control can degrade or even become unstable. Manual tuning is often cumbersome and sometimes impossible. In such cases, not only is using self-tuning systems technically and economically worthwhile, but it can be the only means of robust control.

Self-tuning systems can be designed with or without parameter determination. In systems with parameter determination, the required level of control quality is achieved by automatically searching for an optimum (in some sense) set of parameter values. Control quality is described by a generalised characteristic which is usually a complex and not completely known or stable function of the primary parameters. This characteristic is either measured directly or computed based on the primary parameter values. The parameters are then tentatively varied. An analysis of the oscillations in the control-quality characteristic caused by the varying of the parameters makes it possible to determine whether the parameters have optimum values, i.e., whether those values deliver extreme (minimum or maximum) values of the control-quality characteristic. If the characteristic values deviate from an extremum, the parameters need to be varied until optimum values are found. Self-tuning systems with parameter determination can reliably operate in environments characterised by wide variations of exogenous conditions.
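A minimal sketch of self-tuning with parameter determination in the sense just described: the parameter is tentatively varied, the (noisily measured) control-quality characteristic is compared, and the variation is kept only when quality improves. The quality function here is hypothetical, standing in for a measured characteristic:

```python
import random

def auto_tune(quality, theta, step=0.1, iters=500):
    """Naive extremum seeking: tentatively vary the parameter and keep
    the variation whenever the measured quality characteristic improves."""
    best = quality(theta)
    for _ in range(iters):
        candidate = theta + random.uniform(-step, step)
        q = quality(candidate)
        if q > best:  # maximizing the quality characteristic
            theta, best = candidate, q
    return theta

# Hypothetical quality characteristic with its optimum at theta = 2.0,
# observed through measurement noise:
def quality(theta):
    return -(theta - 2.0) ** 2 + random.gauss(0.0, 0.01)

print(auto_tune(quality, theta=0.0))  # ends near 2.0
```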
In practice, systems with parameter determination require considerable time to find an optimum tuning; i.e., the time necessary for self-tuning in such systems is bounded from below. Self-tuning systems without parameter determination do not have this disadvantage. In such systems, some characteristic of control quality is used (e.g., the first time derivative of a controlled parameter), and automatic tuning makes sure that this characteristic is kept within given bounds. Different self-tuning systems without parameter determination exist that are based on controlling transitional processes, frequency characteristics, etc. All of those are examples of closed-circuit self-tuning systems, whereby parameters are automatically corrected every time the quality-characteristic value falls outside the allowable bounds. In contrast, open-circuit self-tuning systems are systems with parametrical compensation, whereby the input signal itself is monitored and system parameters are changed according to a specified procedure. This type of self-tuning can be close to instantaneous. However, in order to realise such self-tuning, one needs to control the environment in which the system operates, and a good enough understanding of how the environment influences the controlled system is required. In practice, self-tuning is done through the use of specialised hardware or adaptive software algorithms that give software the ability to self-tune (adapt).
https://en.wikipedia.org/wiki/Self-tuning
In mathematics and computer science, a canonical, normal, or standard form of a mathematical object is a standard way of presenting that object as a mathematical expression. Often, it is one which provides the simplest representation of an object and allows it to be identified in a unique way. The distinction between "canonical" and "normal" forms varies from subfield to subfield. In most fields, a canonical form specifies a unique representation for every object, while a normal form simply specifies its form, without the requirement of uniqueness.[1]

The canonical form of a positive integer in decimal representation is a finite sequence of digits that does not begin with zero. More generally, for a class of objects on which an equivalence relation is defined, a canonical form consists in the choice of a specific object in each class.

In computer science, and more specifically in computer algebra, when representing mathematical objects in a computer, there are usually many different ways to represent the same object. In this context, a canonical form is a representation such that every object has a unique representation (with canonicalization being the process through which a representation is put into its canonical form).[2] Thus, the equality of two objects can easily be tested by testing the equality of their canonical forms. Despite this advantage, canonical forms frequently depend on arbitrary choices (like ordering the variables), which introduce difficulties for testing the equality of two objects resulting from independent computations. Therefore, in computer algebra, normal form is a weaker notion: a normal form is a representation such that zero is uniquely represented. This allows testing for equality by putting the difference of two objects in normal form.

Canonical form can also mean a differential form that is defined in a natural (canonical) way.

Given a set S of objects with an equivalence relation R on S, a canonical form is given by designating some objects of S to be "in canonical form", such that every object under consideration is equivalent to exactly one object in canonical form. In other words, the canonical forms in S represent the equivalence classes, once and only once. To test whether two objects are equivalent, it then suffices to test equality on their canonical forms. A canonical form thus provides a classification theorem and more, in that it not only classifies every class, but also gives a distinguished (canonical) representative for each object in the class.

Formally, a canonicalization with respect to an equivalence relation R on a set S is a mapping c: S → S such that for all s, s1, s2 ∈ S:

1. c(s) = c(c(s)) (idempotence),
2. s1 R s2 if and only if c(s1) = c(s2) (decisiveness),
3. s R c(s) (representativeness).

Property 3 is redundant; it follows by applying 2 to 1.

In practical terms, it is often advantageous to be able to recognize the canonical forms. There is also a practical, algorithmic question to consider: how to pass from a given object s in S to its canonical form s*? Canonical forms are generally used to make operating with equivalence classes more effective. For example, in modular arithmetic, the canonical form for a residue class is usually taken as the least non-negative integer in it. Operations on classes are carried out by combining these representatives, and then reducing the result to its least non-negative residue. The uniqueness requirement is sometimes relaxed, allowing the forms to be unique up to some finer equivalence relation, such as allowing for reordering of terms (if there is no natural ordering on terms).
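A concrete instance of such a mapping c, sketched using the familiar canonical form of a rational number (lowest terms with a positive denominator):

```python
from math import gcd

def canonicalize(num: int, den: int) -> tuple[int, int]:
    """Canonical form of a rational number: lowest terms, positive
    denominator. Two fractions are equal iff their canonical forms are
    identical, which is the whole point of a canonical form."""
    if den == 0:
        raise ValueError("denominator must be nonzero")
    if den < 0:
        num, den = -num, -den
    g = gcd(num, den)
    return num // g, den // g

assert canonicalize(2, 4) == canonicalize(-3, -6) == (1, 2)
# Idempotence, c(c(s)) == c(s), as required of a canonicalization map:
assert canonicalize(*canonicalize(10, -4)) == canonicalize(10, -4)
```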
A canonical form may simply be a convention, or a deep theorem. For example, polynomials are conventionally written with the terms in descending powers: it is more usual to write $x^2 + x + 30$ than $x + 30 + x^2$, although the two forms define the same polynomial. By contrast, the existence of the Jordan canonical form for a matrix is a deep theorem.

According to the OED and LSJ, the term canonical stems from the Ancient Greek word kanonikós (κανονικός, "regular, according to rule") from kanṓn (κᾰνών, "rod, rule"). The sense of norm, standard, or archetype has been used in many disciplines. Mathematical usage is attested in a 1738 letter from Logan.[3] The German term kanonische Form is attested in an 1846 paper by Eisenstein;[4] later the same year Richelot used the term Normalform in a paper,[5] and in 1851 Sylvester wrote:[6] "I now proceed to [...] the mode of reducing Algebraical Functions to their simplest and most symmetrical, or as my admirable friend M. Hermite well proposes to call them, their Canonical forms." In the same period, usage is attested by Hesse ("Normalform"),[7] Hermite ("forme canonique"),[8] Borchardt ("forme canonique"),[9] and Cayley ("canonical form").[10] In 1865, the Dictionary of Science, Literature and Art defined canonical form as: "In Mathematics, denotes a form, usually the simplest or most symmetrical, to which, without loss of generality, all functions of the same class can be reduced."

Note: in this section, "up to" some equivalence relation E means that the canonical form is not unique in general, but that if one object has two different canonical forms, they are E-equivalent.

Standard form is used by many mathematicians and scientists to write extremely large numbers in a more concise and understandable way, the most prominent example being scientific notation.[11]

In analytic geometry, canonical forms are defined for the equations of common geometric objects such as lines and conic sections. By contrast, there are alternative forms for writing equations: for example, the equation of a line may be written as a linear equation in point-slope form or in slope-intercept form.

Convex polyhedra can be put into a canonical form in which every edge is tangent to the unit sphere and the centroid of the tangent points is the origin.

Every differentiable manifold has a cotangent bundle. That bundle can always be endowed with a certain differential form, called the canonical one-form. This form gives the cotangent bundle the structure of a symplectic manifold, and allows vector fields on the manifold to be integrated by means of the Euler–Lagrange equations, or by means of Hamiltonian mechanics. Such systems of integrable differential equations are called integrable systems.

The study of dynamical systems overlaps with that of integrable systems; there one has the idea of a normal form (dynamical systems).

In the study of manifolds in three dimensions, one has the first fundamental form, the second fundamental form and the third fundamental form.

The symbolic manipulation of a formula from one form to another is called a "rewriting" of that formula. One can study the abstract properties of rewriting generic formulas, by studying the collection of rules by which formulas can be validly manipulated. These are the "rewriting rules", an integral part of an abstract rewriting system. A common question is whether it is possible to bring some generic expression to a single, common form, the normal form. If different sequences of rewrites still result in the same form, the rewriting is called confluent and that form can be termed a normal form. It is not always possible to obtain a normal form.

In graph theory, a branch of mathematics, graph canonization is the problem of finding a canonical form of a given graph G. A canonical form is a labeled graph Canon(G) that is isomorphic to G, such that every graph that is isomorphic to G has the same canonical form as G. Thus, from a solution to the graph canonization problem, one could also solve the problem of graph isomorphism: to test whether two graphs G and H are isomorphic, compute their canonical forms Canon(G) and Canon(H), and test whether these two canonical forms are identical.
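Graph canonization can be sketched by brute force for very small graphs: take, as the canonical form, the lexicographically smallest edge list over all relabelings. This is a conceptual illustration with factorial cost, not a practical canonizer:

```python
from itertools import permutations

def canon(n, edges):
    """Brute-force canonical form of an n-vertex undirected graph:
    the lexicographically smallest sorted edge list over all n! vertex
    relabelings. Isomorphic graphs yield identical canonical forms."""
    best = None
    for perm in permutations(range(n)):
        relabeled = frozenset(frozenset((perm[u], perm[v])) for u, v in edges)
        key = sorted(tuple(sorted(e)) for e in relabeled)
        if best is None or key < best:
            best = key
    return best

# A path 0-1-2 and the same path labeled differently canonize identically:
assert canon(3, [(0, 1), (1, 2)]) == canon(3, [(2, 0), (0, 1)])
# ...while a non-isomorphic graph (a triangle) does not:
assert canon(3, [(0, 1), (1, 2)]) != canon(3, [(0, 1), (1, 2), (0, 2)])
```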
In computing, the reduction of data to any kind of canonical form is commonly called data normalization. For instance, database normalization is the process of organizing the fields and tables of a relational database to minimize redundancy and dependency.[13]

In the field of software security, a common vulnerability is unchecked malicious input (see Code injection). The mitigation for this problem is proper input validation. Before input validation is performed, the input is usually normalized by eliminating encoding (e.g., HTML encoding) and reducing the input data to a single common character set.

Other forms of data, typically associated with signal processing (including audio and imaging) or machine learning, can be normalized in order to provide a limited range of values.

In content management, the concept of a single source of truth (SSOT) is applicable, just as it is in database normalization generally and in software development. Competent content management systems provide logical ways of obtaining it, such as transclusion.
https://en.wikipedia.org/wiki/Canonical_form
In natural language processing, semantic role labeling (also called shallow semantic parsing or slot-filling) is the process that assigns labels to words or phrases in a sentence that indicate their semantic role in the sentence, such as that of an agent, goal, or result. It serves to find the meaning of the sentence. To do this, it detects the arguments associated with the predicate or verb of a sentence and how they are classified into their specific roles. A common example is the sentence "Mary sold the book to John." The agent is "Mary", the predicate is "sold" (or rather, "to sell"), the theme is "the book", and the recipient is "John". Another example is how "the book belongs to me" would need two labels such as "possessed" and "possessor", while "the book was sold to John" would need two other labels, theme and recipient, despite these two clauses being similar in their "subject" and "object" functions.[1]

In 1968, the first idea for semantic role labeling was proposed by Charles J. Fillmore.[2] His proposal led to the FrameNet project, which produced the first major computational lexicon that systematically described many predicates and their corresponding roles. Daniel Gildea (currently at the University of Rochester, previously University of California, Berkeley / International Computer Science Institute) and Daniel Jurafsky (currently teaching at Stanford University, but previously working at the University of Colorado and UC Berkeley) developed the first automatic semantic role labeling system based on FrameNet. The PropBank corpus added manually created semantic role annotations to the Penn Treebank corpus of Wall Street Journal texts. Many automatic semantic role labeling systems have used PropBank as a training dataset to learn how to annotate new sentences automatically.[3]

Semantic role labeling is mostly used for machines to understand the roles of words within sentences.[4] This benefits applications similar to natural language processing programs that need to understand not just the words of languages, but how they can be used in varying sentences.[5] A better understanding of semantic role labeling could lead to advancements in question answering, information extraction, automatic text summarization, text data mining, and speech recognition.[6]
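The expected output of a semantic role labeler can be made concrete with the running example above. The sketch below is a toy that pattern-matches only this one construction, with PropBank-style ARG labels used illustratively; real systems learn such mappings from corpora like FrameNet or PropBank:

```python
def label_roles(sentence: str) -> dict:
    """Toy 'slot-filling' for sentences of the shape 'X sold Y to Z.';
    this only demonstrates the output structure, not a real labeler."""
    subj, _, rest = sentence.partition(" sold ")
    theme, _, recipient = rest.partition(" to ")
    return {
        "predicate": "sold",              # lemma: sell
        "ARG0": subj,                     # agent: the seller
        "ARG1": theme,                    # theme: the thing sold
        "ARG2": recipient.rstrip("."),    # recipient: the buyer
    }

print(label_roles("Mary sold the book to John."))
# {'predicate': 'sold', 'ARG0': 'Mary', 'ARG1': 'the book', 'ARG2': 'John'}
```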
https://en.wikipedia.org/wiki/Semantic_role_labeling
Time-hopping (TH) is a communications signal technique which can be used to achieve anti-jamming (AJ) or low probability of intercept (LPI). It can also refer to pulse-position modulation, which in its simplest form employs $2^k$ discrete pulse positions (referring to the unique positions of the pulse within the transmission window) to transmit $k$ bits per pulse. To achieve LPI, the transmission time is changed randomly by varying the period and duty cycle of the pulse (carrier) using a pseudo-random sequence. The transmitted signal will then have intermittent start and stop times. Although often used to form hybrid spread-spectrum (SS) systems, TH is strictly speaking a non-SS technique. Spreading of the spectrum is caused by other factors associated with TH, such as the use of pulses with a low duty cycle, which have a wide frequency response. An example of a hybrid SS system is TH-FHSS, or hybrid TDMA (time-division multiple access).
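A small simulation of time-hopped pulse-position modulation under the description above: each frame has $2^k$ slots, $k$ data bits select a slot, and a pseudo-random hop (shared with the receiver through the seed) shifts the pulse position. The framing and parameters are invented for the sketch:

```python
import random

def th_ppm_transmit(bits: str, k: int = 3, seed: int = 42):
    """Each frame has 2**k slots; k data bits pick a slot, and a
    pseudo-random offset hops the pulse position within the frame."""
    prng = random.Random(seed)
    frames = []
    for i in range(0, len(bits), k):
        symbol = int(bits[i:i + k], 2)           # k bits -> slot index
        hop = prng.randrange(2 ** k)             # pseudo-random time hop
        frames.append((symbol + hop) % 2 ** k)   # transmitted pulse slot
    return frames

def th_ppm_receive(frames, k: int = 3, seed: int = 42) -> str:
    """Undo the hops using the shared pseudo-random sequence."""
    prng = random.Random(seed)
    bits = ""
    for slot in frames:
        hop = prng.randrange(2 ** k)
        bits += format((slot - hop) % 2 ** k, f"0{k}b")
    return bits

msg = "101110010"  # length a multiple of k
assert th_ppm_receive(th_ppm_transmit(msg)) == msg
```

Without the seed, an observer sees pulses at apparently random positions, which is the intermittent, hard-to-intercept timing the text describes.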
https://en.wikipedia.org/wiki/Time-hopping_spread_spectrum
In computer science, a parallel random-access machine (parallel RAM or PRAM) is a shared-memory abstract machine. As its name indicates, the PRAM is intended as the parallel-computing analogy to the random-access machine (RAM) (not to be confused with random-access memory).[1] In the same way that the RAM is used by sequential-algorithm designers to model algorithmic performance (such as time complexity), the PRAM is used by parallel-algorithm designers to model parallel algorithmic performance (such as time complexity, where the number of processors assumed is typically also stated). Similar to the way in which the RAM model neglects practical issues, such as access time to cache memory versus main memory, the PRAM model neglects such issues as synchronization and communication, but provides any (problem-size-dependent) number of processors. Algorithm cost, for instance, is estimated using two parameters: O(time) and O(time × processor_number).

Read/write conflicts, commonly termed interlocking, in accessing the same shared memory location simultaneously are resolved by one of the following strategies: exclusive read exclusive write (EREW), concurrent read exclusive write (CREW), exclusive read concurrent write (ERCW), or concurrent read concurrent write (CRCW). Here, E and C stand for 'exclusive' and 'concurrent' respectively. The read causes no discrepancies, while the concurrent write is further defined as common (all processors write the same value), arbitrary (only one arbitrary attempt succeeds), or priority (processor rank determines who writes).

Several simplifying assumptions are made while considering the development of algorithms for PRAM: there is no limit on the number of processors, any memory location is uniformly accessible from any processor, there is no limit on the amount of shared memory, and resource contention is absent. These kinds of algorithms are useful for understanding the exploitation of concurrency, dividing the original problem into similar sub-problems and solving them in parallel.

The introduction of the formal 'P-RAM' model in Wyllie's 1979 thesis[4] had the aim of quantifying analysis of parallel algorithms in a way analogous to the Turing Machine. The analysis focused on a MIMD model of programming using a CREW model, but showed that many variants, including implementing a CRCW model and implementing on an SIMD machine, were possible with only constant overhead.

PRAM algorithms cannot be parallelized with the combination of CPU and dynamic random-access memory (DRAM), because DRAM does not allow concurrent access to a single bank (not even to different addresses in the bank); but they can be implemented in hardware, or mapped onto the read/write of the internal static random-access memory (SRAM) blocks of a field-programmable gate array (FPGA), using a CRCW algorithm. However, the test for practical relevance of PRAM (or RAM) algorithms depends on whether their cost model provides an effective abstraction of some computer; the structure of that computer can be quite different from the abstract model. The knowledge of the layers of software and hardware that need to be inserted is beyond the scope of this article. But articles such as Vishkin (2011) demonstrate how a PRAM-like abstraction can be supported by the explicit multi-threading (XMT) paradigm, and articles such as Caragea & Vishkin (2011) demonstrate that a PRAM algorithm for the maximum flow problem can provide strong speedups relative to the fastest serial program for the same problem. The article Ghanim, Vishkin & Barua (2018) demonstrated that PRAM algorithms as-is can achieve competitive performance even without any additional effort to cast them as multi-threaded programs on XMT.

As an example, a SystemVerilog implementation can find the maximum value in an array in only 2 clock cycles: it compares all the combinations of the elements in the array at the first clock, and merges the result at the second clock. It uses CRCW memory; m[i] <= 1 and maxNo <= data[i] are written concurrently.
The concurrency causes no conflicts because the algorithm guarantees that the same value is written to the same memory. This code can be run on FPGA hardware.
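Since the SystemVerilog listing itself is not reproduced here, the following sketch simulates the same two-step CRCW idea sequentially: the first 'parallel step' performs all pairwise comparisons and concurrently clears m[i] for every losing element, and the second concurrently writes the surviving value to maxNo. All concurrent writes carry the same value, which is why the common-write CRCW rule is satisfied:

```python
def crcw_max(data):
    """Sequential simulation of the 2-step CRCW maximum algorithm.
    Step 1 (one parallel step on a PRAM): every pair (i, j) is compared
    and m[i] is concurrently set to 0 whenever element i loses.
    Step 2: every processor whose m[i] survived writes data[i] to maxNo;
    the writes conflict harmlessly because they all carry one value."""
    n = len(data)
    m = [1] * n                      # m[i] == 1: "i may be the maximum"
    for i in range(n):               # all pairs: conceptually one cycle
        for j in range(n):
            if data[i] < data[j]:
                m[i] = 0             # concurrent write, common value 0
    max_no = None
    for i in range(n):               # second conceptual cycle
        if m[i] == 1:
            max_no = data[i]         # all remaining writers agree
    return max_no

assert crcw_max([3, 1, 4, 1, 5, 9, 2, 6]) == 9
```

The simulation does O(n^2) total work, but on a PRAM with n^2 processors the same comparisons all happen in one step, which is how the hardware version achieves its 2-cycle latency.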
https://en.wikipedia.org/wiki/Parallel_random_access_machine
The bomba, or bomba kryptologiczna (Polish for "bomb" or "cryptologic bomb"), was a special-purpose machine designed around October 1938 by Polish Cipher Bureau cryptologist Marian Rejewski to break German Enigma-machine ciphers.

How the machine came to be called a "bomb" has been an object of fascination and speculation. One theory, most likely apocryphal, originated with Polish engineer and army officer Tadeusz Lisicki (who knew Rejewski and his colleague Henryk Zygalski in wartime Britain but was never associated with the Cipher Bureau). He claimed that Jerzy Różycki (the youngest of the three Enigma cryptologists, who died in a Mediterranean passenger-ship sinking in January 1942) named the "bomb" after an ice-cream dessert of that name. This story seems implausible, since Lisicki had not known Różycki. Rejewski himself stated that the device had been dubbed a "bomb" "for lack of a better idea".[1] Perhaps the most credible explanation is given by a Cipher Bureau technician, Czesław Betlewski: workers at B.S.-4, the Cipher Bureau's German section, christened the machine a "bomb" (also, alternatively, a "washing machine" or a "mangle") because of the characteristic muffled noise that it produced when operating.[2]

A top-secret U.S. Army report dated 15 June 1945 stated: "A machine called the "bombe" is used to expedite the solution. The first machine was built by the Poles and was a hand operated multiple enigma machine. When a possible solution was reached a part would fall off the machine onto the floor with a loud noise. Hence the name "bombe"."[3]

The U.S. Army's above description of the Polish bomba is both vague and inaccurate, as is clear from the device's description further below: each bomb essentially constituted an electrically powered aggregate of six Enigmas, and determination of a solution involved no disassembly ("a part... fall[ing] off") of the device.

The German Enigma used a combination key to control the operation of the machine: rotor order, which rotors to install, which ring setting for each rotor, which initial setting for each rotor, and the settings of the stecker plugboard. The rotor settings were trigrams (for example, "NJR") indicating the way the operator was to set the machine. German Enigma operators were issued lists of these keys, one key for each day. For added security, however, each individual message was encrypted using an additional key modification. The operator randomly selected a trigram rotor setting for each message (for example, "PDN"). This message key would be typed twice ("PDNPDN") and encrypted, using the daily key (all the rest of those settings). At this point each operator would reset his machine to the message key, which would then be used for the rest of the message. Because the configuration of the Enigma's rotor set changed with each depression of a key, the repetition would not be obvious in the ciphertext, since the same plaintext letters would encrypt to different ciphertext letters. (For example, "PDNPDN" might become "ZRSJVL".) This procedure, which seemed reasonably secure to the Germans, was nonetheless a cryptographic malpractice, since the first insights into Enigma encryption could be inferred from seeing how the same character string was encrypted differently two times in a row.
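Why the doubled message key leaks structure can be shown with a toy model of the procedure (a model of the keying mistake, not of the Enigma itself): assume each of the six indicator positions is enciphered by its own fixed secret permutation determined by the daily key. An observer who collects enough indicators recovers the fixed composite permutation mapping position 1 to position 4, without ever seeing a message key:

```python
import random

ALPHA = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
rng = random.Random(1938)

# Toy daily key: an independent secret permutation for each of the six
# positions at which the doubled message key was typed.
daily = [dict(zip(ALPHA, rng.sample(ALPHA, 26))) for _ in range(6)]

def encrypt_indicator(message_key: str) -> str:
    doubled = message_key + message_key          # e.g. "PDN" -> "PDNPDN"
    return "".join(daily[t][c] for t, c in enumerate(doubled))

# The attacker sees only indicators, yet positions 1 and 4 encrypt the
# same plaintext letter, so ciphertext[0] -> ciphertext[3] is fixed:
composite = {}
for _ in range(500):
    ind = encrypt_indicator("".join(rng.choices(ALPHA, k=3)))
    composite.setdefault(ind[0], ind[3])

# The recovered table equals daily[3] composed with the inverse of
# daily[0]; relations of this kind were Rejewski's starting point.
inv0 = {v: k for k, v in daily[0].items()}
assert all(composite[c] == daily[3][inv0[c]] for c in composite)
```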
Using the knowledge that the first three letters of a message were the same as the second three, Polish mathematician-cryptologist Marian Rejewski was able to determine the internal wiring of the Enigma machine and thus to reconstruct the logical structure of the device. Only general traits of the machine were suspected, from the example of the commercial Enigma variant, which the Germans were known to have been using for diplomatic communications. The military versions were sufficiently different to present an entirely new problem. Having done that much, it was still necessary to check each of the potential daily keys to break an encrypted message (i.e., a "ciphertext"). With many thousands of such possible keys, and with the growing complexity of the Enigma machine and its keying procedures, this was becoming an increasingly daunting task.

In order to mechanize and speed up the process, Rejewski, a civilian mathematician working at the Polish General Staff's Cipher Bureau in Warsaw, invented the "bomba kryptologiczna" (cryptologic bomb), probably in October 1938. Each bomb (six were built in Warsaw for the Cipher Bureau before September 1939) essentially constituted an electrically powered aggregate of six Enigmas and took the place of some one hundred workers.[4]

The bomb method was based, like the Poles' earlier "grill" method, on the fact that the plug connections in the commutator ("plugboard") did not change all the letters. But while the grill method required unchanged pairs of letters, the bomb method required only unchanged letters. Hence it could be applied even though the number of plug connections in this period was between five and eight. In mid-November 1938 the bombs were ready, and the reconstruction of daily keys now took about two hours.[5]

Up to July 25, 1939, the Poles had been breaking Enigma messages for over six and a half years without telling their French and British allies. On December 15, 1938, two new rotors, IV and V, were introduced (three of the now five rotors being selected for use in the machine at a time). As Rejewski wrote in a 1979 critique of appendix 1, volume 1 (1979), of the official history of British Intelligence in the Second World War, "we quickly found the [wirings] within the [new rotors], but [their] introduction [...] raised the number of possible sequences of drums from 6 to 60 [...] and hence also raised tenfold the work of finding the keys. Thus the change was not qualitative but quantitative. We would have had to markedly increase the personnel to operate the bombs, to produce the perforated sheets (60 series of 26 sheets each were now needed, whereas up to the meeting on July 25, 1939, we had only two such series ready) and to manipulate the sheets."[6]

Harry Hinsley suggested in British Intelligence in the Second World War that the Poles decided to share their Enigma-breaking techniques and equipment with the French and British in July 1939 because they had encountered insuperable technical difficulties. Rejewski rejected this: "No, it was not [cryptologic] difficulties [...] that prompted us to work with the British and French, but only the deteriorating political situation. If we had had no difficulties at all we would still, or even the more so, have shared our achievements with our allies as our contribution to the struggle against Germany."[6]
https://en.wikipedia.org/wiki/Bomba_(cryptography)
In mathematics, for a sequence of complex numbers $a_1, a_2, a_3, \ldots$ the infinite product

$$\prod_{n=1}^{\infty} a_n = a_1 a_2 a_3 \cdots$$

is defined to be the limit of the partial products $a_1 a_2 \cdots a_n$ as $n$ increases without bound. The product is said to converge when the limit exists and is not zero. Otherwise the product is said to diverge. A limit of zero is treated specially in order to obtain results analogous to those for infinite sums. Some sources allow convergence to 0 if there are only a finite number of zero factors and the product of the non-zero factors is non-zero, but for simplicity we will not allow that here. If the product converges, then the limit of the sequence $a_n$ as $n$ increases without bound must be 1, while the converse is in general not true.

The best known examples of infinite products are probably some of the formulae for $\pi$, such as the following two products, respectively by Viète (Viète's formula, the first published infinite product in mathematics) and John Wallis (Wallis product):

$$\frac{2}{\pi} = \frac{\sqrt{2}}{2} \cdot \frac{\sqrt{2+\sqrt{2}}}{2} \cdot \frac{\sqrt{2+\sqrt{2+\sqrt{2}}}}{2} \cdots$$

$$\frac{\pi}{2} = \frac{2}{1} \cdot \frac{2}{3} \cdot \frac{4}{3} \cdot \frac{4}{5} \cdot \frac{6}{5} \cdot \frac{6}{7} \cdots = \prod_{n=1}^{\infty} \frac{2n}{2n-1} \cdot \frac{2n}{2n+1}$$

The product of positive real numbers $\prod_{n=1}^{\infty} a_n$ converges to a nonzero real number if and only if the sum $\sum_{n=1}^{\infty} \ln a_n$ converges. This allows the translation of convergence criteria for infinite sums into convergence criteria for infinite products. The same criterion applies to products of arbitrary complex numbers (including negative reals) if the logarithm is understood as a fixed branch of the logarithm which satisfies $\ln(1) = 0$, with the provision that the infinite product diverges when infinitely many $a_n$ fall outside the domain of $\ln$, whereas finitely many such $a_n$ can be ignored in the sum.

If we define $a_n = 1 + p_n$ with $p_n \ge 0$, the bounds

$$1 + \sum_{n=1}^{N} p_n \le \prod_{n=1}^{N} (1 + p_n) \le \exp\!\left(\sum_{n=1}^{N} p_n\right)$$

show that the infinite product of the $a_n$ converges if the infinite sum of the $p_n$ converges. This relies on the monotone convergence theorem. We can show the converse by observing that, if $p_n \to 0$, then

$$\lim_{n \to \infty} \frac{\ln(1 + p_n)}{p_n} = 1,$$

and by the limit comparison test it follows that the two series $\sum \ln(1 + p_n)$ and $\sum p_n$ are equivalent, meaning that either they both converge or they both diverge.

If the series $\sum_{n=1}^{\infty} \ln(a_n)$ diverges to $-\infty$, then the sequence of partial products of the $a_n$ converges to zero. The infinite product is then said to diverge to zero.[1]

For the case where the $p_n$ have arbitrary signs, the convergence of the sum $\sum_{n=1}^{\infty} p_n$ does not guarantee the convergence of the product $\prod_{n=1}^{\infty} (1 + p_n)$. For example, if $p_n = \frac{(-1)^{n+1}}{\sqrt{n}}$, then $\sum_{n=1}^{\infty} p_n$ converges, but $\prod_{n=1}^{\infty} (1 + p_n)$ diverges to zero. However, if $\sum_{n=1}^{\infty} |p_n|$ is convergent, then the product $\prod_{n=1}^{\infty} (1 + p_n)$ converges absolutely, that is, the factors may be rearranged in any order without altering either the convergence or the limiting value of the infinite product.[2] Also, if $\sum_{n=1}^{\infty} |p_n|^2$ is convergent, then the sum $\sum_{n=1}^{\infty} p_n$ and the product $\prod_{n=1}^{\infty} (1 + p_n)$ are either both convergent or both divergent.[3]

One important result concerning infinite products is that every entire function $f(z)$ (that is, every function that is holomorphic over the entire complex plane) can be factored into an infinite product of entire functions, each with at most a single root.
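As a concrete illustration of these criteria, consider the classical telescoping product below, which has $p_n = -1/n^2$; since $\sum |p_n|$ converges, the product converges absolutely, and here its partial products can even be computed in closed form:

```latex
\prod_{n=2}^{N}\left(1-\frac{1}{n^{2}}\right)
  = \prod_{n=2}^{N}\frac{(n-1)(n+1)}{n^{2}}
  = \frac{1}{N}\cdot\frac{N+1}{2}
  = \frac{N+1}{2N}
  \;\longrightarrow\; \frac{1}{2}
  \qquad (N\to\infty).
```

The limit $\tfrac{1}{2}$ is nonzero, so the product converges in the strict sense defined above.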
In general, if $f$ has a root of order $m$ at the origin and has other complex roots at $u_1, u_2, u_3, \ldots$ (listed with multiplicities equal to their orders), then

$$f(z) = z^m e^{\phi(z)} \prod_{n=1}^{\infty} \left(1 - \frac{z}{u_n}\right) \exp\!\left( \frac{z}{u_n} + \frac{1}{2}\left(\frac{z}{u_n}\right)^{2} + \cdots + \frac{1}{\lambda_n}\left(\frac{z}{u_n}\right)^{\lambda_n} \right),$$

where the $\lambda_n$ are non-negative integers that can be chosen to make the product converge, and $\phi(z)$ is some entire function (which means the term before the product will have no roots in the complex plane). The above factorization is not unique, since it depends on the choice of values for the $\lambda_n$. However, for most functions there will be some minimum non-negative integer $p$ such that $\lambda_n = p$ gives a convergent product, called the canonical product representation. This $p$ is called the rank of the canonical product. In the event that $p = 0$, this takes the form

$$f(z) = z^m e^{\phi(z)} \prod_{n=1}^{\infty} \left(1 - \frac{z}{u_n}\right).$$

This can be regarded as a generalization of the fundamental theorem of algebra, since for polynomials the product becomes finite and $\phi(z)$ is constant.

In addition to these examples, representations such as Euler's product formula for the sine function,

$$\sin(\pi z) = \pi z \prod_{n=1}^{\infty} \left(1 - \frac{z^{2}}{n^{2}}\right),$$

and the Euler product for the Riemann zeta function,

$$\zeta(z) = \prod_{p \text{ prime}} \frac{1}{1 - p^{-z}},$$

are of special note. The last of these is not a product representation of the same sort discussed above, as $\zeta$ is not entire. Rather, the above product representation of $\zeta(z)$ converges precisely for $\operatorname{Re}(z) > 1$, where it is an analytic function. By techniques of analytic continuation, this function can be extended uniquely to an analytic function (still denoted $\zeta(z)$) on the whole complex plane except at the point $z = 1$, where it has a simple pole.
https://en.wikipedia.org/wiki/Infinite_product
SipHash is an add–rotate–xor (ARX) based family of pseudorandom functions created by Jean-Philippe Aumasson and Daniel J. Bernstein in 2012,[1]: 165 [2] in response to a spate of "hash flooding" denial-of-service attacks (HashDoS) in late 2011.[3]

SipHash is designed as a secure pseudorandom function and can also be used as a secure message authentication code (MAC). SipHash, however, is not a general-purpose key-less hash function such as the Secure Hash Algorithms (SHA) and therefore must always be used with a secret key in order to be secure. That is, SHA is designed so that it is difficult for an attacker to find two messages X and Y such that SHA(X) = SHA(Y), even though anyone may compute SHA(X). SipHash instead guarantees that, having seen Xi and SipHash(Xi, k), an attacker who does not know the key k cannot find (any information about) k or SipHash(Y, k) for any message Y ∉ {Xi} which they have not seen before.

SipHash computes a 64-bit message authentication code from a variable-length message and a 128-bit secret key. It was designed to be efficient even for short inputs, with performance comparable to non-cryptographic hash functions such as CityHash;[4]: 496 [2] this can be used to prevent denial-of-service attacks against hash tables ("hash flooding"),[5] or to authenticate network packets. A variant was later added which produces a 128-bit result.[6]

An unkeyed hash function such as SHA is collision-resistant only if the entire output is used. If used to generate a small output, such as an index into a hash table of practical size, then no algorithm can prevent collisions; an attacker need only make as many attempts as there are possible outputs. For example, suppose a network server is designed to be able to handle up to a million requests at once. It keeps track of incoming requests in a hash table with two million entries, using a hash function to map identifying information from each request to one of the two million possible table entries. An attacker who knows the hash function need only feed it arbitrary inputs; one out of two million will have a specific hash value. If the attacker now sends a few hundred requests all chosen to have the same hash value to the server, that will produce a large number of hash collisions, slowing (or possibly stopping) the server with an effect similar to a packet flood of many million requests.[7]

By using a key unknown to the attacker, a keyed hash function like SipHash prevents this sort of attack. While it is possible to add a key to an unkeyed hash function (HMAC is a popular technique), SipHash is much more efficient.

Functions in the SipHash family are specified as SipHash-c-d, where c is the number of rounds per message block and d is the number of finalization rounds. The recommended parameters are SipHash-2-4 for best performance, and SipHash-4-8 for conservative security. A few languages use SipHash-1-3 for performance at the risk of yet-unknown DoS attacks.[8]

The reference implementation was released as public-domain software under the CC0.[6] SipHash is used in the hash table implementations of various software packages, and other programs use it in further ways.[9]
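A minimal Python sketch of the hash-flooding scenario and the keyed-hash defence follows. BLAKE2b in keyed mode stands in for SipHash here, since the Python standard library does not expose SipHash directly; the bucket count and the crafted inputs are illustrative.

```python
import secrets
import hashlib

BUCKETS = 256  # a toy hash table

# A toy unkeyed hash: anyone who knows it can precompute collisions.
def weak_bucket(data: bytes) -> int:
    return sum(data) % BUCKETS

# 256 distinct inputs crafted so that their byte sums are all equal,
# forcing every one of them into the same bucket.
flood = [bytes([a, 255 - a]) for a in range(256)]
print(len({weak_bucket(d) for d in flood}))   # 1 -> every request collides

# Keyed defence in the spirit of SipHash: without the 16-byte secret key,
# the attacker cannot predict which bucket any input lands in.
key = secrets.token_bytes(16)
def keyed_bucket(data: bytes) -> int:
    digest = hashlib.blake2b(data, key=key, digest_size=8).digest()
    return int.from_bytes(digest, "little") % BUCKETS

print(len({keyed_bucket(d) for d in flood}))  # ~160 of the 256 buckets hit
```

With the key secret, collisions precomputed against one table instance do not transfer to another, which is exactly the property that hash-flooding attacks exploit in unkeyed tables.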
https://en.wikipedia.org/wiki/SipHash
Takis Fotopoulos (Greek: Τάκης Φωτόπουλος; born 14 October 1940) is a Greek political philosopher, economist and writer who founded the Inclusive Democracy movement, aiming at a synthesis of classical democracy with libertarian socialism[1] and the radical currents in the new social movements. He is an academic, and has written many books and over 900 articles. He is the editor of The International Journal of Inclusive Democracy (which succeeded Democracy & Nature) and is the author of Towards An Inclusive Democracy (1997), in which the foundations of the Inclusive Democracy project were set out.[2] His latest book is The New World Order in Action: Volume 1: Globalization, the Brexit Revolution and the "Left" - Towards a Democratic Community of Sovereign Nations (December 2016). Fotopoulos is Greek and lives in London.[3]

Fotopoulos was born on the Greek island of Chios, and his family moved to Athens soon afterwards. After graduating from the University of Athens with degrees in Economics and Political Science and in Law, he moved to London in 1966 for postgraduate study at the London School of Economics on a Varvaressos scholarship from Athens University. He was a student syndicalist and activist in Athens[a] and then a political activist in London, taking an active part in the 1968 student protests there, and in organisations of the revolutionary Greek Left during the struggle against the Greek military junta of 1967–1974. During this period, he was a member of the Greek group called Revolutionary Socialist Groups in London, which published the newspaper Μαμή ("Midwife", from the Marxian dictum, "violence is the midwife of revolution"), for which he wrote several articles.[4] Fotopoulos married Sia Mamareli (a former lawyer) in 1966; the couple have a son, Costas (born in 1974), who is a composer and pianist.

Fotopoulos was a Senior Lecturer in Economics at the Polytechnic of North London from 1969 to 1989, until he began editing the journal Society & Nature, later Democracy & Nature and subsequently the online International Journal of Inclusive Democracy.[2][3] He was also a columnist of Eleftherotypia,[5] the second-biggest newspaper in Greece.[6]

Fotopoulos developed the political project of Inclusive Democracy (ID) in 1997 (an exposition can be found in Towards An Inclusive Democracy). The first issue of Society & Nature declared that "our ambition is to initiate an urgently needed dialogue on the crucial question of developing a new liberatory social project, at a moment in History when the Left has abandoned this traditional role."[7] It specified that the new project should be seen as the outcome of a synthesis of the democratic, libertarian socialist and radical Green traditions.[8] Since then, a dialogue has followed in the pages of the journal, in which supporters of the autonomy project like Cornelius Castoriadis, social ecology supporters including its founder Murray Bookchin, and Green activists and academics like Steven Best have taken part.

The starting point for Fotopoulos' work is that the world faces a multi-dimensional crisis (economic, ecological, social, cultural and political) which is caused by the concentration of power in elites, as a result of the market economy, representative democracy and related forms of hierarchical structure. An inclusive democracy, which involves the equal distribution of power at all levels, is seen not as a utopia (in the negative sense of the word) or a "vision" but as perhaps the only way out of the present crisis, with trends towards its creation manifesting themselves today in many parts of the world.
Fotopoulos is in favor of market abolitionism, although he would not identify himself as a market abolitionist as such, because he considers market abolition as only one aspect of an inclusive democracy, referring to its economic democracy component alone. He maintains that "modern hierarchical society," which for him includes both the capitalist market economy and "socialist" statism, is highly oriented toward economic growth, which has glaring environmental contradictions. Fotopoulos proposes a model of economic democracy for a stateless, marketless and moneyless economy, but he considers the economic democracy component to be equally significant to the other components of ID, i.e. political or direct democracy, economic democracy, ecological democracy and democracy in the social realm. Fotopoulos' work has been critically assessed by important activists, theorists and scholars.[1][9][10][11][12][13][14]
https://en.wikipedia.org/wiki/Inclusive_Democracy
Glitch art is an art movement centering on the practice of using digital or analog errors, specifically glitches, for aesthetic purposes, by either corrupting digital data or physically manipulating electronic devices. It has also been regarded as an increasing trend in new media art, with it retroactively being described as developing over the course of the 20th century onward.[1]

As a technical word, a glitch is the unexpected result of a malfunction, especially one occurring in software, video games, images, videos, audio, and other digital artefacts. The term came to be associated with music in the mid-1990s to describe a genre of experimental electronic music, glitch music. Shortly after, as VJs and other visual artists began to embrace glitch as an aesthetic of the digital age, glitch art came to refer to a whole assembly of visual arts.[2] One such early movement was later dubbed net.art, including early work by the art collective JODI, which was started by artists Joan Heemskerk and Dirk Paesmans. JODI's experiments in glitch art included purposely causing layout errors in their website in order to display underlying code and error messages.[3] The explorations of JODI and other net.art members would later influence visual distortion practices like databending and datamoshing (see below).[3] The history of glitch art has been regarded as ranging from crafted artworks such as the film A Colour Box (1935) by Len Lye and the video sculpture TV Magnet (1965) by Nam June Paik, as well as Digital TV Dinner (1978) created by Jamie Fenton and Raul Zaritsky, with audio by Dick Ainsworth (made by manipulating the Bally video game console and recording the results on videotape),[4] to more process-based contemporary work such as Panasonic TH-42PWD8UK Plasma Screen Burn (2007) by Cory Arcangel.[1]

Motherboard, a tech-art collective, held the first glitch art symposium in Oslo, Norway, during January, to "bring together international artists, academics and other Glitch practitioners for a short space of time to share their work and ideas with the public and with each other."[5][3] From September 29 through October 3, 2010, Chicago played host to the first GLI.TC/H, a five-day conference organized by Nick Briz, Evan Meaney, Rosa Menkman and Jon Satrom that included workshops, lectures, performances, installations and screenings.[6] In November 2011, the second GLI.TC/H event traveled from Chicago to Amsterdam and lastly to Birmingham, UK.[7] It included workshops, screenings, lectures, performances, panel discussions and a gallery show over the course of seven days at the three cities.[8]

Later glitch-art events and exhibitions include:
Run Computer, Run at the GLITCH 2013 arts festival at RuaRed, South Dublin Arts Centre, Dublin, curated by Nora O Murchú.[9]
/'fu:bar/ 2015.[10]
Glitch Art is Dead at Teatr Barakah in Kraków, Poland, curated by Ras Alhague and Aleksandra Pienkosz.[11]
reFrag: glitch at La Gaïté Lyrique in Paris, France, organized by the School of the Art Institute of Chicago and Parsons Paris.
/'fu:bar/ 2016.[12]
/'fu:bar/ 2017.[13]
Glitch Art is Dead 2 at Gamut Gallery in Minneapolis, Minnesota, US, curated by Miles Taylor, Ras Alhague and Aleksandra Pienkosz.[14]
/'fu:bar/ 2018.[15]
Blue\x80 & Nuit Blanche at Villette Makerz in Paris, France, curated by Ras Alhague and Kaspar Ravel.[16]
Refrag #4 Cradle-to-Grave at Espace en cours in Paris, France, curated by Benjamin Gaulon.[17]
/'fu:bar/ 2019.[18]
Communication Noise exhibition, Media Mediterranea 21 festival, Pula, Croatia.[19]
/'fu:bar/ 2020.[20]
An Exercise of Meaning in a Glitch Season, an exhibition at the National Gallery Singapore.
Curated by Syaheedah Iskandar.[21]
Posthumanism, Epidigital, and Glitch Feminism, an exhibition at the Machida City Museum of Graphic Arts in Japan, curated by Ryota Matsumoto.[22]
/'fu:bar/ 2021.[23]
Glitch Art: Pixel Language, the first glitch art exhibition in Iran.[24]
Glitch Art in Iran, the first collective art exhibition.[25][26]
Glitch: Aesthetic of the Pixels, the second glitch video art group exhibit in Iran.[27]
Glitch Art is Dead: The 3rd Expo, September 2–4, in Granite Falls, MN.[28]
GLITCH The Art of Interference, Pinakothek der Moderne, Munich, Germany.[29]

What is called "glitch art" typically means visual glitches, either in a still or moving image. It is made by either "capturing" an image of a glitch as it randomly happens, or, more often, by artists and designers manipulating their digital files, software or hardware to produce these "errors". Artists have posted a variety of tutorials online explaining how to make glitch art.[30][31] There are many approaches to making these glitches happen on demand, ranging from physical changes to the hardware to direct alterations of the digital files themselves. Artist Michael Betancourt identified five areas of manipulation that are used to create "glitch art".[32] Betancourt notes that "glitch art" is defined by a broad range of technical approaches that can be identified with changes made to the digital file, its generative display, or the technologies used to show it (such as a video screen). He includes within this range changes made to analog technologies such as television (in video art) or the physical film strip in motion pictures.

Data manipulation (also known as databending) changes the information inside the digital file to create glitches. Databending involves editing and changing the file data. There are a variety of tutorials explaining how to make these changes using programs such as Hex Fiend.[33] Adam Woodall explains in his tutorial:[34] "Like all files, image files (.jpg .bmp .gif etc) are all made up of text. Unlike some other files, like .svg (vectors) or .html (web pages), when an image is opened in a text editor all that comes up is gobbldygook!"

Related processes such as datamoshing change the data in a video or picture file.[35][36] Datamoshing with software such as Avidemux is a common method for creating glitch art by manipulating different frame types in compressed digital video:[37] "Datamoshing involves the removal of an encoded video's I-frames (intra-coded picture, also known as key frames: a frame that does not require any information regarding another frame to be decoded), leaving only the P- (predicted picture) or B- (bi-predictive picture) frames. P-frames contain information predicting the changes in the image between the current frame and the previous one, and B-frames contain information predicting the image differences between the previous, current and subsequent frames. Because P- and B-frames use data from previous and forward frames, they are more compressed than I-frames." This process of direct manipulation of the digital data is not restricted to files that only appear on digital screens.
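In the spirit of the databending tutorials cited above, the minimal Python sketch below corrupts a JPEG by inverting a few bytes; the file names are hypothetical. The first kilobyte is skipped so that the JPEG header and quantisation tables survive and the corrupted file still opens in most viewers.

```python
import random
from pathlib import Path

# Databending sketch: flip a handful of bytes in an existing JPEG.
data = bytearray(Path("input.jpg").read_bytes())  # assumed to exist
random.seed(42)                                   # reproducible glitches
for _ in range(20):
    pos = random.randrange(1024, len(data))       # stay clear of the header
    data[pos] ^= 0xFF                             # invert one byte
Path("glitched.jpg").write_bytes(bytes(data))
```

Flipping more bytes, or flipping them earlier in the file, produces progressively heavier (and eventually unreadable) corruption; the craft lies in choosing how much damage the decoder will tolerate.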
"3D model glitching" refers to the purposeful corruption of the code in3D animation programsresulting in distorted and abstract images of 3Dvirtual worlds,modelsand even3D printed objects.[38] Misalignment glitches are produced by opening a digital file of one type with a program designed for a different type of file,[36]such as opening a video file as a sound file, or using the wrong codec to decompress a file. Tools commonly used to create glitches of this type includeAudacityandWordPad.[39]These glitches can depend on how Audacity handles files, even when they are not audio-encoded.[40] Hardware failure happens by altering the physical wiring or other internal connections of the machine itself, such as a short circuit, in a process called "circuit bending" causes the machine to create glitches that produce new sounds and visuals.[41]For example, by damaging internal pieces of something like aVHSplayer, one can achieve different colorful visual images. Video artistTom DeFantiexplained the role of hardware failure in a voice-over for Jamie Fenton's early glitch videoDigital TV Dinnerthat used theBally video game consolesystem:[4] This piece represents the absolute cheapest one can go in home computer art. This involves taking a $300 video game system, pounding it with your fist so the cartridge pops out while its trying to write the menu. The music here is done by Dick Ainsworth using the same system, but pounding it with your fingers instead of your fist. Physically beating the case of the game system would cause the game cartridge to pop out, interrupting the computer's operation. The glitches that resulted from this failure were a result of how the machine was set up:[4] There wasROM memoryin the cartridge and ROM memory built into the console. Popping out the cartridge while executing code in the console ROM created garbage references in the stack frames and invalid pointers, which caused the strange patterns to be drawn. ... The Bally Astrocade was unique among cartridge games in that it was designed to allow users to change game cartridges with power-on. When pressing the reset button, it was possible to remove the cartridge from the system and induce various memory dump pattern sequences. Digital TV Dinner is a collection of these curious states of silicon epilepsy set to music composed and generated upon this same platform. Misregistration is produced by the physical noise of historically analog media such as motion picture film. It includes dirt, scratches, smudges and markings that can distort physical media also impact the playback of digital recordings on media such as CDs and DVDs, as electronic music composerKim Casconeexplained in 2002:[42] "There are many types of digital audio ‘failure.' Sometimes, it results in horrible noise, while other times it can produce wondrous tapestries of sound. (To more adventurous ears, these are quite often the same.) When the German sound experimenters known as Oval started creating music in the early 1990s by painting small images on the underside of CDs to make them skip, they were using an aspect of ‘failure' in their work that revealed a subtextual layer embedded in the compact disc. Oval's investigation of ‘failure' is not new. Much work had previously been done in this area such as the optical soundtrack work of Laszlo Moholy-Nagy and Oskar Fischinger, as well as the vinyl record manipulations of John Cage and Christian Marclay, to name a few. 
What is new is that ideas now travel at the speed of light and can spawn entire musical genres in a relatively short period of time."

Distortion was one of the earliest types of glitch art to be produced, such as in the work of video artist Nam June Paik, who created video distortions by placing powerful magnets in close proximity to the television screen, resulting in the appearance of abstract patterns.[43] Paik's addition of physical interference to a TV set created new kinds of imagery that changed how the broadcast image was displayed:[44] "The magnetic field interferes with the television's electronic signals, distorting the broadcast image into an abstract form that changes when the magnet is moved. By recording the resulting analog distortions with a camera, they can then be shown without the need for the magnet."

Compression artifacts are a noticeable distortion of media (including images, audio, and video) caused by the application of lossy compression. They can be intentionally used as a visual style in glitch art. Rosa Menkman's work makes use of compression artifacts,[45] particularly the discrete cosine transform blocks (DCT blocks) found in most digital media data compression formats, such as JPEG digital images and MP3 digital audio.[46] Another example is Jpegs by German photographer Thomas Ruff, which uses intentional JPEG artifacts as the basis of the picture's style.[47][48]
https://en.wikipedia.org/wiki/Glitch_art
The early history of radio is the history of technology that produces and uses radio instruments that use radio waves. Within the timeline of radio, many people contributed theory and inventions in what became radio. Radio development began as "wireless telegraphy". Later radio history increasingly involves matters of broadcasting.

In an 1864 presentation, published in 1865, James Clerk Maxwell proposed theories of electromagnetism and mathematical proofs demonstrating that light, radio and x-rays were all types of electromagnetic waves propagating through free space.[1][2][3][4][5] Between 1886 and 1888, Heinrich Rudolf Hertz published the results of experiments wherein he was able to transmit electromagnetic waves (radio waves) through the air, proving Maxwell's electromagnetic theory.[6][7] After their discovery, many scientists and inventors experimented with transmitting and detecting "Hertzian waves" (it would take almost 20 years for the term "radio" to be universally adopted for this type of electromagnetic radiation).[8] Maxwell's theory showing that light and Hertzian electromagnetic waves were the same phenomenon at different wavelengths led "Maxwellian" scientists such as John Perry, Frederick Thomas Trouton and Alexander Trotter to assume they would be analogous to optical light.[9][10]

Following Hertz's untimely death in 1894, British physicist and writer Oliver Lodge presented a widely covered lecture on Hertzian waves at the Royal Institution on June 1 of the same year.[11] Lodge focused on the optical qualities of the waves and demonstrated how to transmit and detect them (using an improved variation of French physicist Édouard Branly's detector, which Lodge named the "coherer").[12] Lodge further expanded on Hertz's experiments, showing how these new waves exhibited, like light, refraction, diffraction, polarization, interference and standing waves,[13] confirming that Hertz's waves and light waves were both forms of Maxwell's electromagnetic waves.
During part of the demonstration, the waves were sent from the neighboring Clarendon Laboratory building and received by apparatus in the lecture theater.[14]

After Lodge's demonstrations, researchers pushed their experiments further down the electromagnetic spectrum towards visible light to further explore the quasi-optical nature of these wavelengths.[15] Oliver Lodge and Augusto Righi experimented with 1.5 and 12 GHz microwaves respectively, generated by small metal-ball spark resonators.[13] Russian physicist Pyotr Lebedev in 1895 conducted experiments in the 50 GHz (6 millimeter) range.[13] Bengali Indian physicist Jagadish Chandra Bose conducted experiments at wavelengths of 60 GHz (5 millimeter) and invented waveguides, horn antennas, and semiconductor crystal detectors for use in his experiments.[16] He would later write an essay, "Adrisya Alok" ("Invisible Light"), on how in November 1895 he conducted a public demonstration at the Town Hall of Kolkata, India, using millimeter-range-wavelength microwaves to trigger detectors that ignited gunpowder and rang a bell at a distance.[17]

Between 1890 and 1892, physicists such as John Perry, Frederick Thomas Trouton and William Crookes proposed electromagnetic or Hertzian waves as a navigation aid or means of communication, with Crookes writing on the possibilities of wireless telegraphy based on Hertzian waves in 1892.[18] Among physicists, what were perceived as technical limitations to using these new waves, such as delicate equipment, the need for large amounts of power to transmit over limited ranges, and their similarity to already existent optical light-transmitting devices, led them to a belief that applications were very limited. The Serbian-American engineer Nikola Tesla considered Hertzian waves relatively useless for long-range transmission, since "light" could not transmit further than line of sight.[19] There was speculation that this fog- and stormy-weather-penetrating "invisible light" could be used in maritime applications such as lighthouses.[18] The London journal The Electrician (December 1895) commented on Bose's achievements, saying "we may in time see the whole system of coast lighting throughout the navigable world revolutionized by an Indian Bengali scientist working single handed[ly] in our Presidency College Laboratory."[20]

In 1895, adapting the techniques presented in Lodge's published lectures, Russian physicist Alexander Stepanovich Popov built a lightning detector that used a coherer-based radio receiver.[21] He presented it to the Russian Physical and Chemical Society on May 7, 1895.

In 1894, the young Italian inventor Guglielmo Marconi began working on the idea of building long-distance wireless transmission systems based on the use of Hertzian waves (radio waves), a line of inquiry that he noted other inventors did not seem to be pursuing.[22] Marconi read through the literature and used the ideas of others who were experimenting with radio waves, but did a great deal to develop devices such as portable transmitters and receiver systems that could work over long distances,[22] turning what was essentially a laboratory experiment into a useful communication system.[23] By August 1895, Marconi was field-testing his system, but even with improvements he was only able to transmit signals up to one-half mile, a distance Oliver Lodge had predicted in 1894 as the maximum transmission distance for radio waves. Marconi raised the height of his antenna and hit upon the idea of grounding his transmitter and receiver.
With these improvements, the system was capable of transmitting signals up to 2 miles (3.2 km) and over hills.[24] This apparatus proved to be the first engineering-complete, commercially successful radio transmission system,[25][26][27] and Marconi went on to file British patent GB189612039A, Improvements in transmitting electrical impulses and signals and in apparatus there-for, in 1896. The patent was granted in the UK on 2 July 1897.[28]

In 1897, Marconi established a radio station on the Isle of Wight, England, and opened his "wireless" factory in the former silk-works at Hall Street, Chelmsford, England, in 1898, employing around 60 people. On 12 December 1901, using a 500-foot (150 m) kite-supported antenna for reception of signals transmitted by the company's new high-power station at Poldhu, Cornwall, Marconi received a transatlantic message at Signal Hill in St. John's, Newfoundland.[29][30][31][32] Marconi began to build high-powered stations on both sides of the Atlantic to communicate with ships at sea. In 1904, he established a commercial service to transmit nightly news summaries to subscribing ships, which could incorporate them into their on-board newspapers. A regular transatlantic radio-telegraph service was finally begun on 17 October 1907[33][34] between Clifden, Ireland, and Glace Bay, but even after this the company struggled for many years to provide reliable communication to others. Marconi's apparatus is also credited with saving the 700 people who survived the tragic Titanic disaster.[35]

In the late 1890s, Canadian-American inventor Reginald Fessenden came to the conclusion that he could develop a far more efficient system than the spark-gap transmitter and coherer receiver combination.[36][37] To this end he worked on developing a high-speed alternator (referred to as "an alternating-current dynamo") that generated "pure sine waves" and produced "a continuous train of radiant waves of substantially uniform strength", or, in modern terminology, a continuous-wave (CW) transmitter.[38] While working for the United States Weather Bureau on Cobb Island, Maryland, Fessenden researched using this setup for audio transmissions via radio. By the fall of 1900, he had successfully transmitted speech over a distance of about 1.6 kilometers (one mile),[39] which appears to have been the first successful audio transmission using radio signals.[40][41] Although successful, the sound transmitted was far too distorted to be commercially practical.[42] According to some sources, notably the biography by Fessenden's wife Helen, on Christmas Eve 1906 Reginald Fessenden used an Alexanderson alternator and rotary spark-gap transmitter to make the first radio audio broadcast, from Brant Rock, Massachusetts. Ships at sea heard a broadcast that included Fessenden playing O Holy Night on the violin and reading a passage from the Bible.[43][44]

Around the same time, American inventor Lee de Forest experimented with an arc transmitter, which, unlike the discontinuous pulses produced by spark transmitters, created a steady "continuous wave" signal that could be used for amplitude-modulated (AM) audio transmissions.
In February 1907, he transmitted electronic telharmonium music from his laboratory station in New York City.[45] This was followed by tests that included, in the fall, Eugenia Farrar singing "I Love You Truly".[46] In July 1907 he made ship-to-shore transmissions by radiotelephone: race reports for the Annual Inter-Lakes Yachting Association (I-LYA) Regatta held on Lake Erie, which were sent from the steam yacht Thelma to his assistant, Frank E. Butler, located in the Fox's Dock Pavilion on South Bass Island.[47]

The Dutch company Nederlandsche Radio-Industrie and its owner-engineer, Hanso Idzerda, made its first regular entertainment radio broadcast over station PCGG from its workshop in The Hague on 6 November 1919. The company manufactured both transmitters and receivers. Its popular program was broadcast four nights per week using narrow-band FM transmissions on 670 metres (448 kHz),[48] until 1924, when the company ran into financial trouble.

Regular entertainment broadcasts began in Argentina, pioneered by Enrique Telémaco Susini and his associates. At 9 pm on August 27, 1920, Sociedad Radio Argentina aired a live performance of Richard Wagner's opera Parsifal from the Coliseo Theater in downtown Buenos Aires. Only about twenty homes in the city had receivers to tune in to this program. On 31 August 1920, the Detroit News began publicized daily news and entertainment "Detroit News Radiophone" broadcasts, originally as licensed amateur station 8MK, then later as WBL and WWJ in Detroit, Michigan. Union College in Schenectady, New York, began broadcasting on October 14, 1920, over 2ADD, an amateur station licensed to Wendell King, an African-American student at the school.[49] Broadcasts included a series of Thursday-night concerts initially heard within a 100-mile (160 km) radius and later for a 1,000-mile (1,600 km) radius.[49][50] In 1922, regular audio broadcasts for entertainment began in the UK from the Marconi Research Centre station 2MT at Writtle near Chelmsford, England.

In early radio, and to a limited extent much later, the transmission signal of a radio station was specified in meters, referring to the wavelength, the length of the radio wave. This is the origin of the terms long wave, medium wave, and short wave radio.[51] Portions of the radio spectrum reserved for specific purposes were often referred to by wavelength: the 40-meter band, used for amateur radio, for example. The relation between wavelength and frequency is reciprocal (frequency equals the speed of light divided by wavelength): the higher the frequency, the shorter the wave, and vice versa, so a 40-meter wavelength corresponds to a frequency of about 7.5 MHz. As equipment progressed, precise frequency control became possible; early stations often did not have a precise frequency, as it was affected by the temperature of the equipment, among other factors. Identifying a radio signal by its frequency rather than its length proved much more practical and useful, and starting in the 1920s this became the usual method of identifying a signal, especially in the United States. Frequencies specified in numbers of cycles per second (kilocycles, megacycles) were replaced by the more specific designation of hertz (cycles per second) about 1965.

Using various patents, the British Marconi company was established in 1897 by Guglielmo Marconi and began communication between coast radio stations and ships at sea.[52] A year later, in 1898, it successfully introduced its first radio station in Chelmsford. This company, along with its subsidiaries Canadian Marconi and American Marconi, had a stranglehold on ship-to-shore communication.
It operated much the way American Telephone and Telegraph operated until 1983, owning all of its equipment and refusing to communicate with non-Marconi-equipped ships. Many inventions improved the quality of radio, and amateurs experimented with uses of radio, thus planting the first seeds of broadcasting.

The company Telefunken was founded on May 27, 1903, as a society for wireless telegraphy, a joint undertaking for radio engineering in Berlin by Siemens & Halske (S & H) and the Allgemeine Elektrizitäts-Gesellschaft (General Electricity Company).[53] It continued as a joint venture of AEG and Siemens AG until Siemens left in 1941. In 1911, Kaiser Wilhelm II sent Telefunken engineers to West Sayville, New York, to erect three 600-foot (180 m) radio towers there. Nikola Tesla assisted in the construction. A similar station was erected in Nauen, creating the only wireless communication between North America and Europe.

The invention of amplitude modulation (AM), which allows more closely spaced stations to send signals simultaneously and by which sound waves can be transmitted over a continuous-wave radio signal of narrow bandwidth (as opposed to spark-gap radio, which sent rapid strings of damped-wave pulses that consumed much bandwidth and were only suitable for Morse-code telegraphy), is attributed to Reginald Fessenden, Valdemar Poulsen and Lee de Forest.[54] The most common type of receiver before vacuum tubes was the crystal set, although some early radios used some type of amplification through electric current or battery. Inventions of the triode amplifier, motor-generator, and detector enabled audio radio.

The art and science of crystal sets is still pursued as a hobby in the form of simple unamplified radios that "run on nothing, forever". They are used as a teaching tool by groups such as the Boy Scouts of America to introduce youngsters to electronics and radio. As the only energy available is that gathered by the antenna system, loudness is necessarily limited.

During the mid-1920s, amplifying vacuum tubes revolutionized radio receivers and transmitters. John Ambrose Fleming developed a vacuum-tube diode. Lee de Forest added a third electrode, the "grid", between the diode's filament and plate, creating the triode.[55] Early radios ran the entire power of the transmitter through a carbon microphone. In the 1920s, the Westinghouse company bought Lee de Forest's and Edwin Armstrong's patents, and Westinghouse engineers developed a more modern vacuum tube. The first radios still required batteries, but in 1926 the "battery eliminator" was introduced to the market, allowing radios to be powered from the mains electrical grid instead. They still required batteries to heat the vacuum-tube filaments, but after the invention of indirectly heated vacuum tubes, the first completely battery-free radios became available in 1927.[56] In 1929, a new screen-grid tube called the UY-224 was introduced, an amplifier designed to operate directly on alternating current.[57]

A problem with early radios was fading stations and fluctuating volume. The invention of the superheterodyne receiver solved this problem, and the first radios with a heterodyne radio receiver went on sale in 1924.
But it was costly, and the technology was shelved until it matured; in 1929 the Radiola 66 and Radiola 67 went on sale.[58][59][60]

In the early days, one had to use headphones to listen to the radio. Later, loudspeakers in the form of a horn of the type used by phonographs, equipped with a telephone receiver, became available, but the sound quality was poor. In 1926 the first radios with electrodynamic loudspeakers went on sale, which improved the quality significantly. At first the loudspeakers were separate from the radio, but soon radios came with a built-in loudspeaker.[61] Other inventions related to sound included automatic volume control (AVC), first commercially available in 1928.[62] In 1930 a tone-control knob was added to radios, allowing listeners to compensate for imperfect broadcasting.[63] The magnetic cartridge, introduced in the mid-1920s, greatly improved the broadcasting of music. Before the magnetic cartridge, when playing music from a phonograph, a microphone had to be placed close to a horn loudspeaker. The invention allowed the electric signals to be amplified and then fed directly to the broadcast transmitter.[64]

Following the development of transistor technology, bipolar junction transistors led to the development of the transistor radio. In 1954, the Regency company introduced a pocket transistor radio, the TR-1, powered by a "standard 22.5 V Battery". In 1955, the newly formed Sony company introduced its first transistorized radio, the TR-55.[65] It was small enough to fit in a vest pocket, powered by a small battery, and it was durable, because it had no vacuum tubes to burn out. In 1957, Sony introduced the TR-63, the first mass-produced transistor radio, leading to the mass-market penetration of transistor radios.[66] Over the next 20 years, transistors replaced tubes almost completely, except in high-power transmitters. By the mid-1960s, the Radio Corporation of America (RCA) was using metal–oxide–semiconductor field-effect transistors (MOSFETs) in its consumer products, including FM radios, televisions and amplifiers.[67] Metal–oxide–semiconductor (MOS) large-scale integration (LSI) provided a practical and economic solution for radio technology, and was used in mobile radio systems by the early 1970s.[68] The first integrated-circuit (IC) radio, the P1740 by General Electric, became available in 1966.[69]

The first car radio was introduced in 1922, but it was so large that it took up too much space in the car.[70] The first commercial car radio that could easily be installed in most cars went on sale in 1930.[71][72]

Telegraphy did not go away on radio. Instead, the degree of automation increased. On land-lines in the 1930s, teletypewriters automated encoding and were adapted to pulse-code dialing to automate routing, a service called telex. For thirty years, telex was the cheapest form of long-distance communication, because up to 25 telex channels could occupy the same bandwidth as one voice channel. For business and government, it was an advantage that telex directly produced written documents. Telex systems were adapted to short-wave radio by sending tones over single sideband. CCITT R.44 (the most advanced pure-telex standard) incorporated character-level error detection and retransmission as well as automated encoding and routing. For many years, telex-on-radio (TOR) was the only reliable way to reach some third-world countries. TOR remains reliable, though less expensive forms of e-mail are displacing it.
Many national telecom companies historically ran nearly pure telex networks for their governments, and they ran many of these links over short-wave radio.

Documents including maps and photographs went by radiofax, or wireless photoradiogram, invented in 1924 by Richard H. Ranger of the Radio Corporation of America (RCA). This method prospered in the mid-20th century and faded late in the century.

One of the first developments in the early 20th century was that aircraft used commercial AM radio stations for navigation; AM stations are still marked on U.S. aviation charts. Radio navigation played an important role during wartime, especially in World War II. Before the discovery of the crystal oscillator, radio navigation had many limits,[73] but as radio technology developed, navigation became easier to use and gave better position fixes. Radio navigation systems nonetheless often came with complex equipment, such as the radio compass receiver, the compass indicator, or the radar plan position indicator, all of which required users to obtain certain knowledge. In the 1960s, VOR systems became widespread. In the 1970s, LORAN became the premier radio navigation system. Soon, the US Navy experimented with satellite navigation, and the satellites of the Global Positioning System (GPS) constellation followed; GPS was in turn followed by other GNSS systems like GLONASS, BeiDou and Galileo.

In 1933, FM radio was patented by inventor Edwin H. Armstrong.[74] FM uses frequency modulation of the radio wave to reduce static and interference from electrical equipment and the atmosphere. In 1937, W1XOJ, the first experimental FM radio station after Armstrong's W2XMN in Alpine, New Jersey, was granted a construction permit by the US Federal Communications Commission (FCC). After World War II, FM radio broadcasting was introduced in Germany. At a meeting in Copenhagen in 1948, a new wavelength plan was drawn up for Europe. Because of the recent war, Germany (which did not exist as a state and so was not invited) was given only a small number of medium-wave frequencies, which were not very good for broadcasting. For this reason Germany began broadcasting on UKW ("Ultrakurzwelle", i.e. ultra short wave, nowadays called VHF), which was not covered by the Copenhagen plan. After some amplitude-modulation experience with VHF, it was realized that FM radio was a much better alternative for VHF radio than AM. Because of this history, FM radio is still referred to as "UKW Radio" in Germany. Other European nations followed a bit later, when the superior sound quality of FM, and the ability to run many more local stations because of the more limited range of VHF broadcasts, were realized.

In the 1930s, regular analog television broadcasting began in some parts of Europe and North America. By the end of the decade there were roughly 25,000 all-electronic television receivers in existence worldwide, the majority of them in the UK. In the US, Armstrong's FM system was designated by the FCC to transmit and receive television sound. By 1963, color television was being broadcast commercially (though not all broadcasts or programs were in color), and the first (radio) communication satellite, Telstar, had been launched (in 1962).

In 1946, AT&T commercialized the Mobile Telephone Service. From its start in St. Louis that year, AT&T introduced the Mobile Telephone Service to one hundred towns and highway corridors by 1948. The Mobile Telephone Service was a rarity, with only 5,000 customers placing about 30,000 calls each week.
Because only three radio channels were available, only three customers in any given city could make mobile telephone calls at one time.[76] The Mobile Telephone Service was expensive, costing US$15 per month, plus $0.30–0.40 per local call, equivalent to (in 2012 US dollars) about $176 per month and $3.50–4.75 per call.[77] The development of metal–oxide–semiconductor (MOS) large-scale integration (LSI) technology, information theory and cellular networking led to the development of affordable mobile communications.[81] The Advanced Mobile Phone System (AMPS), an analog mobile phone system developed by Bell Labs and introduced in the Americas in 1978,[78][79][80] gave much more capacity. It was the primary analog mobile phone system in North America (and other locales) through the 1980s and into the 2000s.

The British government and the state-owned postal services found themselves under massive pressure from the wireless industry (including telegraphy) and early radio adopters to open up to the new medium, a pressure addressed in an internal confidential report of the Imperial Wireless Telegraphy Committee dated February 25, 1924.

When radio was introduced in the early 1920s, many predicted it would kill the phonograph record industry. Radio was a free medium for the public to hear music for which they would normally pay. While some companies saw radio as a new avenue for promotion, others feared it would cut into profits from record sales and live performances. Many record companies would not license their records to be played over the radio, and had their major stars sign agreements that they would not perform on radio broadcasts.[83][84]

Indeed, the music recording industry had a severe drop in profits after the introduction of radio. For a while, it appeared as though radio was a definite threat to the record industry. Radio ownership grew from two out of five homes in 1931 to four out of five homes in 1938. Meanwhile, record sales fell from $75 million in 1929 to $26 million in 1938 (with a low point of $5 million in 1933), though the economics of the situation were also affected by the Great Depression.[85]

The copyright owners were concerned that they would see no gain from the popularity of radio and the 'free' music it provided. What they needed to make this new medium work for them already existed in previous copyright law. The copyright holder for a song had control over all public performances 'for profit.' The problem now was proving that the radio industry, which was just figuring out for itself how to make money from advertising and currently offered free music to anyone with a receiver, was making a profit from the songs. The test case was against Bamberger's Department Store in Newark, New Jersey, in 1922. The store was broadcasting music from its store on the radio station WOR. No advertisements were heard, except at the beginning of the broadcast, which announced "L. Bamberger and Co., One of America's Great Stores, Newark, New Jersey." It was determined through this and previous cases (such as the lawsuit against Shanley's Restaurant) that Bamberger was using the songs for commercial gain, thus making it a public performance for profit, which meant the copyright owners were due payment.
With this ruling, the American Society of Composers, Authors and Publishers (ASCAP) began collecting licensing fees from radio stations in 1923. The beginning sum was $250 for all music protected under ASCAP, but for larger stations the price soon ballooned to $5,000. Edward Samuels reports in his book The Illustrated Story of Copyright that "radio and TV licensing represents the single greatest source of revenue for ASCAP and its composers […] and [a]n average member of ASCAP gets about $150–$200 per work per year, or about $5,000–$6,000 for all of a member's compositions." Not long after the Bamberger ruling, in 1924, ASCAP had to once again defend its right to charge fees. The Dill Radio Bill would have allowed radio stations to play music without paying licensing fees to ASCAP or any other music-licensing corporation. The bill did not pass.[86]

Radio technology was first used for ships to communicate at sea. To ensure safety, the Wireless Ship Act of 1910 marked the first time the U.S. government imposed regulations on the radio systems of ships.[87] The act required ships to have a radio system with a professional operator if they wanted to travel more than 200 miles offshore or had more than 50 people on board. However, the act had many flaws, including competition between radio operators, notably the two major companies (British Marconi and American Marconi), which tended to delay communication for ships that used a competitor's system. This contributed to the tragic sinking of the Titanic in 1912. That year, distress calls to aid the sinking Titanic were met with a large amount of interfering radio traffic, severely hampering the rescue effort. Subsequently, the US government passed the Radio Act of 1912 to help prevent a repeat of such a tragedy. The act helped distinguish between normal radio traffic and (primarily maritime) emergency communication, and specified the role of government during such an emergency.[88]

The Radio Act of 1927 gave the Federal Radio Commission the power to grant and deny licenses, and to assign frequencies and power levels for each licensee. In 1928 it began requiring licenses of existing stations and setting controls on who could broadcast from where, on what frequency, and at what power. Some stations could not obtain a license and ceased operations. In section 29, the Radio Act of 1927 provided that broadcast content could be presented freely and that the government could not interfere with it.[89]

The introduction of the Communications Act of 1934 led to the establishment of the Federal Communications Commission (FCC). The FCC's responsibility is to regulate the industry, including "telephone, telegraph, and radio communications."[90] Under this act, all carriers have to keep records of authorized and unauthorized interference. The act also supports the President in time of war: if the government needs to use communication facilities in wartime, it is allowed to do so.

The Telecommunications Act of 1996 was the first significant overhaul of this framework in over 60 years, amending the work of the Communications Act of 1934. Coming only a dozen years after the breakup of AT&T, the act set out to move telecommunications into a state of competition in their markets and the networks they are a part of.[91] The effects of the Telecommunications Act of 1996 have since been seen, but some of the problems the act set out to fix, such as the failure to create an open competitive market, are still ongoing.
The question of the 'first' publicly targeted licensed radio station in the U.S. has more than one answer, depending on semantics. Settlement of this 'first' question may hang largely upon what constitutes 'regular' programming.
https://en.wikipedia.org/wiki/History_of_radio
In mathematics, a uniqueness theorem, also called a unicity theorem, is a theorem asserting the uniqueness of an object satisfying certain conditions, or the equivalence of all objects satisfying the said conditions.[1] Examples of uniqueness theorems include the uniqueness of prime factorization (the fundamental theorem of arithmetic) and the uniqueness of solutions to initial value problems under the Picard–Lindelöf theorem. The word unique is sometimes replaced by essentially unique, whenever one wants to stress that the uniqueness refers only to the underlying structure, whereas the form may vary in all ways that do not affect the mathematical content.[1] A uniqueness theorem (or its proof) is, at least within the mathematics of differential equations, often combined with an existence theorem (or its proof) into a combined existence and uniqueness theorem (e.g., existence and uniqueness of the solution to a first-order differential equation with a boundary condition).[3]
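To make the combined existence-and-uniqueness pattern concrete, here is a standard textbook statement of the Picard–Lindelöf theorem in LaTeX; it is supplied for illustration and is not drawn from the article above (it assumes a document with amsmath and an amsthm theorem environment):

% Picard–Lindelöf: a combined existence and uniqueness theorem.
% Standard textbook formulation, given here only as an illustration.
\begin{theorem}[Picard--Lindel\"of]
Let $f(t, y)$ be continuous in $t$ and Lipschitz continuous in $y$ on a
neighbourhood of $(t_0, y_0)$. Then the initial value problem
\[
  y'(t) = f\bigl(t, y(t)\bigr), \qquad y(t_0) = y_0,
\]
has exactly one solution on some interval
$(t_0 - \varepsilon,\, t_0 + \varepsilon)$ with $\varepsilon > 0$:
existence and uniqueness are established together.
\end{theorem}

Dropping the Lipschitz condition preserves existence (Peano's theorem) but can destroy uniqueness: y' = 2√|y| with y(0) = 0 is solved both by y ≡ 0 and by y = t² for t ≥ 0.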
https://en.wikipedia.org/wiki/Uniqueness_theorem
Convexity is a geometric property with a variety of applications in economics.[1] Informally, an economic phenomenon is convex when "intermediates (or combinations) are better than extremes". For example, an economic agent with convex preferences prefers combinations of goods over having a lot of any one sort of good; this represents a kind of diminishing marginal utility of having more of the same good. Convexity is a key simplifying assumption in many economic models, as it leads to market behavior that is easy to understand and which has desirable properties. For example, the Arrow–Debreu model of general economic equilibrium posits that if preferences are convex and there is perfect competition, then aggregate supply will equal aggregate demand for every commodity in the economy. In contrast, non-convexity is associated with market failures, where supply and demand differ or where market equilibria can be inefficient. The branch of mathematics which supplies the tools for convex functions and their properties is called convex analysis; non-convex phenomena are studied under nonsmooth analysis.

The economics depends upon the following definitions and results from convex geometry. A real vector space of two dimensions may be given a Cartesian coordinate system in which every point is identified by a list of two real numbers, called "coordinates", conventionally denoted by x and y. Two points in the Cartesian plane can be added coordinate-wise, (x1, y1) + (x2, y2) = (x1 + x2, y1 + y2); further, a point can be multiplied by each real number λ coordinate-wise, λ(x, y) = (λx, λy). More generally, any real vector space of (finite) dimension D can be viewed as the set of all possible lists of D real numbers {(v1, v2, . . . , vD)} together with two operations: vector addition and multiplication by a real number. For finite-dimensional vector spaces, the operations of vector addition and real-number multiplication can each be defined coordinate-wise, following the example of the Cartesian plane.

In a real vector space, a set is defined to be convex if, for each pair of its points, every point on the line segment that joins them is covered by the set. For example, a solid cube is convex; however, anything that is hollow or dented, for example, a crescent shape, is non‑convex. Trivially, the empty set is convex. More formally, a set Q is convex if, for all points v0 and v1 in Q and for every real number λ in the unit interval [0, 1], the point (1 − λ)v0 + λv1 is a member of Q. By mathematical induction, a set Q is convex if and only if every convex combination of members of Q also belongs to Q. By definition, a convex combination of an indexed subset {v0, v1, . . . , vD} of a vector space is any weighted average λ0v0 + λ1v1 + . . . + λDvD, for some indexed set of non‑negative real numbers {λd} satisfying the equation λ0 + λ1 + . . . + λD = 1.

The definition of a convex set implies that the intersection of two convex sets is a convex set. More generally, the intersection of a family of convex sets is a convex set. For every subset Q of a real vector space, its convex hull Conv(Q) is the minimal convex set that contains Q. Thus Conv(Q) is the intersection of all the convex sets that cover Q. The convex hull of a set can be equivalently defined to be the set of all convex combinations of points in Q.

Supporting hyperplane is a concept in geometry. A hyperplane divides a space into two half-spaces. A hyperplane is said to support a set S in the real n-space R^n if it meets both of the following conditions: S is entirely contained in one of the two closed half-spaces bounded by the hyperplane, and S has at least one boundary point on the hyperplane. Here, a closed half-space is the half-space that includes the hyperplane.
This theorem states that if S is a closed convex set in R^n and x is a point on the boundary of S, then there exists a supporting hyperplane containing x. The hyperplane in the theorem may not be unique. If the closed set S is not convex, the statement of the theorem does not hold at every point on the boundary of S.

An optimal basket of goods occurs where the consumer's convex preference set is supported by the budget constraint. If the preference set is convex, then the consumer's set of optimal decisions is a convex set, for example, a unique optimal basket (or even a line segment of optimal baskets). For simplicity, we shall assume that the preferences of a consumer can be described by a utility function that is a continuous function, which implies that the preference sets are closed. (The meaning of "closed set" is explained below, in the subsection on optimization applications.)

If a preference set is non‑convex, then some prices produce a budget supporting two different optimal consumption decisions. For example, we can imagine that, for zoos, a lion costs as much as an eagle, and further that a zoo's budget suffices for one eagle or one lion. We can suppose also that a zoo-keeper views either animal as equally valuable. In this case, the zoo would purchase either one lion or one eagle. Of course, a contemporary zoo-keeper does not want to purchase half an eagle and half a lion (or a griffin)! Thus, the contemporary zoo-keeper's preferences are non‑convex: the zoo-keeper prefers having either animal to having any strictly convex combination of both.

Non‑convex sets have been incorporated in the theories of general economic equilibria,[2] of market failures,[3] and of public economics.[4] These results are described in graduate-level textbooks in microeconomics,[5] general equilibrium theory,[6] game theory,[7] mathematical economics,[8] and applied mathematics (for economists).[9] The Shapley–Folkman lemma results establish that non‑convexities are compatible with approximate equilibria in markets with many consumers; these results also apply to production economies with many small firms.[10]

In "oligopolies" (markets dominated by a few producers), especially in "monopolies" (markets dominated by one producer), non‑convexities remain important.[11] Concerns with large producers exploiting market power in fact initiated the literature on non‑convex sets, when Piero Sraffa wrote about firms with increasing returns to scale in 1926,[12] after which Harold Hotelling wrote about marginal cost pricing in 1938.[13] Both Sraffa and Hotelling illuminated the market power of producers without competitors, clearly stimulating a literature on the supply-side of the economy.[14] Non‑convex sets arise also with environmental goods (and other externalities),[15][16] with information economics,[17] and with stock markets[11] (and other incomplete markets).[18][19] Such applications continued to motivate economists to study non‑convex sets.[20]

Economists have increasingly studied non‑convex sets with nonsmooth analysis, which generalizes convex analysis. "Non‑convexities in [both] production and consumption ...
required mathematical tools that went beyond convexity, and further development had to await the invention of non‑smooth calculus" (for example, Francis Clarke's locally Lipschitz calculus), as described by Rockafellar & Wets (1998)[21] and Mordukhovich (2006),[22] according to Khan (2008).[23] Brown (1991, pp. 1967–1968) wrote that the "major methodological innovation in the general equilibrium analysis of firms with pricing rules" was "the introduction of the methods of non‑smooth analysis, as a [synthesis] of global analysis (differential topology) and [of] convex analysis." According to Brown (1991, p. 1966), "Non‑smooth analysis extends the local approximation of manifolds by tangent planes [and extends] the analogous approximation of convex sets by tangent cones to sets" that can be non‑smooth or non‑convex.[24] Economists have also used algebraic topology.[25]
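The definition of convexity given earlier lends itself to a direct numerical check. The following sketch is illustrative only and not from the article; the function names and the choice of the unit disc as the example set are assumptions made here. It tests whether random convex combinations (1 − λ)v0 + λv1 of points in a set stay inside the set:

# Minimal sketch (illustrative, not from the article): heuristically checking
# convexity of a planar set given only by a membership test.
import random

def is_in_disc(p, radius=1.0):
    # Example set Q: the closed unit disc, which is convex.
    x, y = p
    return x * x + y * y <= radius * radius

def looks_convex(member, samples=10_000, box=2.0):
    """For random pairs of points in the set, every tested convex combination
    (1 - lam) * v0 + lam * v1 must also lie in the set."""
    for _ in range(samples):
        v0 = (random.uniform(-box, box), random.uniform(-box, box))
        v1 = (random.uniform(-box, box), random.uniform(-box, box))
        if not (member(v0) and member(v1)):
            continue  # only test segments whose endpoints are in the set
        lam = random.random()
        p = ((1 - lam) * v0[0] + lam * v1[0],
             (1 - lam) * v0[1] + lam * v1[1])
        if not member(p):
            return False  # found a segment leaving the set: not convex
    return True

print(looks_convex(is_in_disc))  # expected: True (the disc is convex)

A membership test for a crescent or an annulus would typically fail this check, matching the intuition above that hollow or dented shapes are non‑convex.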
https://en.wikipedia.org/wiki/Convexity_in_economics
PyTorch is a machine learning library based on the Torch library,[4][5][6] used for applications such as computer vision and natural language processing,[7] originally developed by Meta AI and now part of the Linux Foundation umbrella.[8][9][10][11] It is one of the most popular deep learning frameworks, alongside others such as TensorFlow,[12] offering free and open-source software released under the modified BSD license. Although the Python interface is more polished and the primary focus of development, PyTorch also has a C++ interface.[13]

A number of pieces of deep learning software are built on top of PyTorch, including Tesla Autopilot,[14] Uber's Pyro,[15] Hugging Face's Transformers,[16][17] and Catalyst.[18][19]

PyTorch provides two high-level features:[20] tensor computing (like NumPy) with strong acceleration via graphics processing units (GPUs), and deep neural networks built on a tape-based automatic differentiation system.

Meta (formerly known as Facebook) operated both PyTorch and Convolutional Architecture for Fast Feature Embedding (Caffe2), but models defined by the two frameworks were mutually incompatible. The Open Neural Network Exchange (ONNX) project was created by Meta and Microsoft in September 2017 for converting models between frameworks. Caffe2 was merged into PyTorch at the end of March 2018.[21] In September 2022, Meta announced that PyTorch would be governed by the independent PyTorch Foundation, a newly created subsidiary of the Linux Foundation.[22]

PyTorch 2.0 was released on 15 March 2023, introducing TorchDynamo, a Python-level compiler that makes code run up to 2x faster, along with significant improvements in training and inference performance across major cloud platforms.[23][24]

PyTorch defines a class called Tensor (torch.Tensor) to store and operate on homogeneous multidimensional rectangular arrays of numbers. PyTorch Tensors are similar to NumPy arrays, but can also be operated on by a CUDA-capable NVIDIA GPU. PyTorch has also been developing support for other GPU platforms, for example, AMD's ROCm[25] and Apple's Metal Framework.[26] PyTorch supports various sub-types of Tensors.[27]

Note that the term "tensor" here does not carry the same meaning as tensor in mathematics or physics. The meaning of the word in machine learning is only superficially related to its original meaning as a certain kind of object in linear algebra. Tensors in PyTorch are simply multi-dimensional arrays.

PyTorch defines a module called nn (torch.nn) to describe neural networks and to support training. This module offers a comprehensive collection of building blocks for neural networks, including various layers and activation functions, enabling the construction of complex models. Networks are built by inheriting from the torch.nn.Module class and defining the sequence of operations in the forward() function. The following program shows the low-level functionality of the library with a simple example, and then defines a small neural network with linear layers using the nn module.
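The article's original code listings did not survive extraction; the sketch below is a minimal stand-in written to match the two descriptions above (low-level tensor operations, then a network of linear layers built on torch.nn.Module). Names such as TinyNet are illustrative, not from the article.

# Minimal sketch, not the article's original listing: low-level tensor ops,
# tape-based autograd, and a small network of linear layers via nn.Module.
import torch
from torch import nn

# --- Low-level tensor functionality ---
x = torch.randn(2, 3)          # 2x3 tensor of standard-normal samples
y = torch.ones(2, 3)
s = x + y                      # element-wise addition
m = x @ y.T                    # matrix multiplication -> 2x2 tensor

# Tape-based automatic differentiation
a = torch.tensor([2.0, 3.0], requires_grad=True)
loss = (a ** 2).sum()
loss.backward()
print(a.grad)                  # gradient of sum(a^2) is 2*a -> tensor([4., 6.])

# --- A neural network with linear layers using the nn module ---
class TinyNet(nn.Module):      # "TinyNet" is an illustrative name
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(4, 8)   # linear layer: 4 inputs -> 8 hidden units
        self.fc2 = nn.Linear(8, 1)   # linear layer: 8 hidden units -> 1 output

    def forward(self, inp):
        h = torch.relu(self.fc1(inp))  # activation between the linear layers
        return self.fc2(h)

net = TinyNet()
out = net(torch.randn(5, 4))   # batch of 5 samples with 4 features each
print(out.shape)               # torch.Size([5, 1])

Calling net(...) invokes forward() through nn.Module's machinery, which is what lets autograd record the sequence of operations for the backward pass.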
https://en.wikipedia.org/wiki/PyTorch